Is The First Amendment Clear?

The First Amendment states, “Congress shall make no law respecting an establishment of religion.” Supreme Court Justice Hugo Black believed that the First Amendment requires the state to stay neutral in its relationship with religion. I disagree with Justice Black’s interpretation. The First Amendment says that Congress may make no law establishing a religion. It does not say that the state may not favor a religion or its general philosophy. We have, after all, a history of Christianity reflected in the beliefs and practices of today. Nor does it say that the government cannot take part in the celebration of holidays or prayers, which is the current interpretation in our government. It would still be the state’s job, however, to make sure that all citizens, especially citizens of all religions, are treated equally. When government officials express themselves with religious expressions or symbols, it is sometimes seen as an alliance between church and state. In actuality, it is freedom of expression. This kind of favoring of religion, if you consider it such, should be allowed.

At first glance, the First Amendment appears to be written in clear terms, saying Congress shall make no law in violation of certain religious and political principles. On closer reading and further reflection, the amendment’s underlying issues rise to the surface in the form of arguments that have divided political parties for centuries. What kind of law respects the establishment of religion? Does the First Amendment include only laws that would establish an official national religion, as the Anglican Church was established in England? Does it also include laws that recognize or endorse religious activities such as the celebration of Christmas? Can people even agree on what is meant by religion, so that judges may know when religion is being established or when the right to its free exercise has been infringed? These questions have been a subject of enormous controversy in the United States, and more so as time goes on.

Hugo Black served on the U.S. Supreme Court for 34 years and is considered one of the most influential justices of all time, even though his background and path to the Court might have suggested otherwise. Roosevelt eventually nominated Black to the Supreme Court, and when the nomination reached the Senate, Black was confirmed by a vote of 63 to 16. Shortly afterward, however, the public learned about Black’s past membership in the Ku Klux Klan, a violent racist organization. On the eve of taking his seat on the Supreme Court, Black went public, saying that membership in the KKK had been a necessity to enter politics in the South and that the Klan had assisted him in his political campaigns. Black supported a strict separation of religion and state and wrote some of the most influential decisions in the area of the establishment clause, for example Everson v. Board of Education, which incorporated the establishment clause against the states, and Engel v. Vitale, which struck down teacher-led prayer in the public school classroom.

In the past, the state has used its power to impose religious requirements. Torcaso v. Watkins (1961) was a famous Supreme Court case in which the Constitution of Maryland required a declaration of belief in the existence of God in order for a person to hold “any office of profit or trust in this State.” Torcaso, an atheist, refused, and his appointment was consequently revoked. Believing his constitutional right to freedom of religious expression had been infringed, Torcaso filed a lawsuit against Maryland. The Court ruled that religious tests for public office candidates are unconstitutional. This ruling was justifiable. A public office should not be limited to a person of a specific religion. Public officials represent everyone in their community, not just a specific religious interest. In the famous case of Glassroth v. Moore, Alabama Chief Justice Roy Moore faced judicial ethics charges because he refused to remove a Ten Commandments monument from a government building after being ordered to do so. The monument was eventually removed. I do not agree with the ruling in this case. The Framers built their beliefs on Judeo-Christian philosophy. This is also shown by the fact that Moses and the Ten Commandments are depicted on the wall behind the Justices in the Supreme Court. The monument was not hurting anyone. It simply stated rules of morality that are shared by many cultures and by believers of many different religions.

Government officials are under the rule of law; therefore they must follow the same regulations as anyone else in society. By that same rule of law, officials must also have the same rights as all other citizens. Government officials must have the right to express their beliefs and philosophies in seeking the common good. There have also been times when state power has been used to favor religions and their associations. In the Supreme Court case of Bowen v. Kendrick (1988), a group of federal taxpayers, clergymen, and the American Jewish Congress filed suit against Otis R. Bowen, the Secretary of Health and Human Services, arguing that the Adolescent Family Life Act violated the Establishment Clause of the First Amendment. The Court allowed federal funds to support religious organizations offering counseling under the new Adolescent Family Life Act. This decision was justifiable because the money was not going to the organizations to further religion, but to help the community.

More recently, President Bush created the Faith-Based and Community Initiatives program. It allows faith-based institutions to compete equally for federal funds, and it pushes for identifying and eliminating barriers that impede the full participation of people in need in the federal grants process, ensuring that federally funded social services administered by state and local governments are consistent with equal treatment provisions, and pursuing legislative efforts to extend charitable choice provisions that prevent discrimination against faith-based organizations. Finally, it aims to protect the religious freedom of people who receive aid and to preserve the religious hiring rights of faith-based charities. Now churches, synagogues, mosques, and other religious groups can receive federal funding without being discriminated against simply because they are religious organizations. These institutions help the community and the general welfare of its people. Just as in Bowen v. Kendrick, this initiative is reasonable because it is for the common good of the people. In conclusion, I believe that the First Amendment does not require the government to remain neutral between religions, as long as the natural and civic rights of all citizens of all religions are equally protected.

The government may favor specific religions and religious philosophies for the betterment of the welfare and morality of itself and the constituents it seeks to serve. I disagree with Justice Black’s interpretation of the First Amendment. He once said, “I am for the First Amendment from the first word to the last. I believe it means what it says.” I believe this: “Congress shall make no law respecting an establishment of religion” is what the First Amendment says, and that is exactly what it means.

Should we fight against tort reform?

The controversy around tort reform has turned into a two-sided debate between citizens and corporations. With the examination of various cases in recent years, it is clear that the effects of tort reform have proven to be negative. This issue persists today, as public relations campaigns and legislation show a clear difference in opinion. In the event that tort reform occurs, victims and plaintiffs will be prevented from being fully compensated for the harm they suffered, making this process of the civil justice system unfair.

In the justice system, there are two forms of law: criminal law and civil law. The most well-known form is probably criminal law, in which the government (the prosecutor) brings a case against a defendant regarding a crime that may or may not have been committed. In civil law, by contrast, a plaintiff and a defendant contest a tort. As stated in the dictionary, a tort is “a wrongful act or an infringement of a right (other than under contract) leading to civil legal liability”. In essence, a tort in a civil case corresponds to a crime in a criminal case.

Tort reform refers to the passing of legislation, or the issuing of a court ruling, that limits in some way the rights of an injured person to seek compensation from the person who caused the accident (“The Problems…Reform”). Tort reform also includes subtopics such as public relations campaigns, caps on damages, judicial elections, and mandatory arbitration. Lawmakers across the United States have been heavily involved with tort reform since the 1950s, and it has only grown in popularity since then. Former president George W. Bush urged Congress to enact reform in 2005 and brought tort reform to the table like no other president.

The damages often referred to in civil lawsuits are economic damages and non-economic damages. Economic damages are any costs that result from the defendant’s actions, for example medical bills or the cost of repairs. Non-economic damages refer to emotional stress, post-traumatic stress disorder, and other impacts not related to money. A cap on damages “limits the amount of non-economic damage compensation that can be awarded to a plaintiff” (US Legal Inc).

Caps on damages are the most common practice of tort reform. In New Mexico, Susan Seibert says that she was hospitalized for more than nine months because of a doctor’s error during her gynecological procedure. After suing, she was supposed to receive $2.6 million in damages, which was then reduced to $600,000 because of a cap on damages. Seibert still suffers from excessive debt as a result of not being given the amount of money that she deserved. Caps on damages heavily impact the plaintiffs in a case. As previously mentioned, plaintiffs sue because they need money in order to fully recover from the hardship they endured as a result of the defendant’s actions.

A type of tort reform that is not as well known is specialized medical courts. Currently, medical malpractice cases are heard by juries that have little to no medical background. This has worked well because it means an unbiased verdict is reached. However, the organization Common Good is trying to establish special medical courts, in which the judge and jury would be trained medical professionals who deeply evaluate the case. Advocates for these courts feel that people would be better compensated for what they really deserve. However, the majority of opinions on such courts are against the idea. The most common view among those who oppose this new system is that it would put patients at a disadvantage: trained medical judges and juries are more likely to side with the doctor, surgeon, or defendant than with the plaintiff. They believe that the fairest and most efficient way to judge medical malpractice cases is to use the existing civil justice system. One of the most famous medical malpractice cases, involving Dana Carvey, ended in a settlement, but it could have turned out much worse for Carvey if the judge and jury had been medical professionals. Carvey was receiving a double bypass, and his surgeon operated on the wrong artery. Had this case gone to a medical court, it is easy to predict that the verdict would have been that the doctor made a “just” mistake, that the mistake was not easily preventable, and that it was a risk that could have been assumed going into the surgery. However, this case did not go to court; rather, it ended in a $7.5 million settlement.

Another form of tort reform is mandatory arbitration. Mandatory arbitration, as described in the article “Mandatory Arbitration Agreements in Employment Contracts”, is “a contract clause that prevents a conflict from going to a judicial court”. This has affected many employees who have experienced sexual harassment, stealing of wages, racial discrimination, and more. Oftentimes, “employees signed so-called mandatory arbitration agreements that are the new normal in American workplaces” (Campbell). These agreements are buried among the stacks of papers that have to be signed throughout the hiring process, and the manager will force the new employee to sign them. Most of the time, these documents will not be called a “Mandatory Arbitration Agreement”; rather, they may carry legalese names like “Alternative Dispute Resolution Agreement” (Campbell). “Between employee and employer, this means that any conflict must be solved through arbitration” (“Mandatory Arbitration Agreements in Employment Contracts”). When a conflict is solved through arbitration, “neutral arbiters” go through the evidence that the company and the employee present, and those arbiters decide what they think the just outcome should be, whether that is money, loss of a job, or something else. This decision is known as the arbitration award.

A place where the effects of mandatory arbitration can be seen is the #MeToo movement. With the rise of this movement, more and more women have been coming out about their experiences with sexual harassment in the workplace. These women are then encouraged to fight against their harassers. Ultimately, many of them find out that they are not allowed to sue because of the mandatory arbitration agreements they signed during the hiring process. In fact, Debra S. Katz wrote an article for The Washington Post called “30 million women can’t sue their employer over harassment”, showing how widespread the issue is. Evidently, this form of tort reform affects more than 30 million people. These women may be suffering from post-traumatic stress disorder, trauma, and more as a result of their experiences with sexual harassment. If this form of tort reform is not abolished, more and more women will suffer under mandatory arbitration.

By limiting the amount of money and reparations that a defendant has to pay a plaintiff, tort reform benefits major corporations. On the opposite side, however, the plaintiff suffers greatly from these limitations. In many cases, a plaintiff sues because they need the money to recover fully from the event that took place. For example, the documentary “Hot Coffee” discusses many tort cases in which plaintiffs suffered under current regulations regarding caps, mandatory arbitration, and more. Tort reform would further exacerbate the negatives of modern-day civil court cases.

Groups such as the American Tort Reform Association (ATRA) and Citizens Against Lawsuit Abuse (CALA) have been active in fighting for tort reform. Meanwhile, issues with tort reform, such as the fairness of caps on damages, have exposed inequity in the civil justice system. Supporters of tort reform have been rallying for a common goal: to limit the ability of citizens to use the litigation process, in order to protect businesses and companies.

In the event that tort reform occurs, victims and plaintiffs will be prevented from receiving the reparations they deserve for the hardship and suffering caused by the defendant’s actions. Caps on damages, special medical malpractice courts, and mandatory arbitration are just a few of the negative measures that tort reform would allow. Victims and plaintiffs sue the defendant in order to receive the full compensation they deserve. It is hard enough as it is to fight against major corporations, and tort reform would make it even harder. Americans have the right to a fair trial, and the implementation of tort reform would take away that constitutionally given right. It is essential that Americans continue to fight against tort reform, as you never know whether you may become the next victim.

Chinese suppression of Hong Kong

Would you fight for democracy? Its core principles are the beating heart of our society: providing us with representation, civil rights and freedom — empowering our nation to be just and egalitarian. However, whilst we cherish our flourishing democracy, we have blatantly ignored one of the most portentous democratic crises of our time. The protests in Hong Kong. Sparked by a proposed bill allowing extradition to mainland China, the protests have ignited the city’s desire for freedom, democracy and autonomy; and they have blazed into a broad pro-democracy movement, opposing Beijing’s callous and covert campaign to suppress legal rights in Hong Kong. But the spontaneity fueling these protests is fizzling out, as minor concessions fracture the leaderless movement. Without external assistance, this revolutionary campaign could come to nothing. Now, we, the West, must support protesters to fulfill our legal and moral obligations, and to safeguard other societies from the oppression Hong Kongers are suffering. The Chinese suppression of Hong Kong must be stopped.

Of all China’s crimes, its flagrant disregard for Hong Kong’s constitution is the most alarming. When Hong Kong was returned to China in 1997, the British and Chinese governments signed the Sino-British Joint Declaration, allowing Hong Kong “a high degree of autonomy, except in foreign and defence affairs” until 2047. This is allegedly achieved through the “one country, two systems” model, currently implemented in Hong Kong. Nevertheless, the Chinese government — especially since Xi Jinping seized power in 2013 — is relentlessly continuing to erode legal rights in our former colony. For instance, in 2016, four pro-democracy lawmakers — despite being democratically elected — were disqualified from office. Amid the controversy surrounding the ruling lurked Beijing, using its invisible hand to crush the opposition posed by the lawmakers. However, it is China’s perversion of Hong Kong’s constitution, the Basic Law, that has the most pronounced and crippling effect upon the city. The Basic Law requires Hong Kong’s leader to be chosen “by universal suffrage upon nomination by a broadly representative nominating committee”; but this is strikingly at odds with reality. Less than seven percent of the electoral register are allowed to vote for representatives in the Election Committee — who actually choose Hong Kong’s leader — and no elections are held for vast swathes of seats, which are thus dominated by pro-Beijing officials. Is this really “universal suffrage”? Or a “broadly representative” committee? This “pseudo-democracy” is unquestionably a blatant violation of our agreement with China. If we continue to ignore the subversion of the fundamental constitution holding Hong Kong together, China’s grasp over a supposedly “autonomous” city will only strengthen. It is our legal duty to hold Beijing to account for these heinous contraventions of both Hong Kong’s constitution and the Joint Declaration — which China purports to uphold. Such despicable and brazen actions, whatever the pretence, cannot be allowed to continue.

The encroachment on Hong Kongers’ fundamental human rights is yet another travesty. Over the past few years, the Chinese government has been furtively extending its control over Hong Kong. Once, Hong Kongers enjoyed numerous freedoms and rights; now, they silently suffer. Beijing has an increasingly pervasive presence in Hong Kong, and, emboldened by a lack of opposition, it is beginning to repress anti-Chinese views. For example, five booksellers, associated with one Hong Kong publishing house, disappeared in late 2015. The reason? The publishing house was printing a book — which is legal in Hong Kong — regarding the love-life of the Chinese president Xi Jinping. None of the five men were guilty; all five men later appeared in custody in mainland China. One man even confessed on state television, obviously under duress, to an obscure crime he “committed” over a decade ago. This has cast a climate of paranoia over the city, which is already forcing artists to self-censor for fear of Chinese retaliation; if left unchecked, this erosion of free speech and expression will only worsen. Hong Kongers now live with uncertainty as to whether their views are “right” or “wrong”; is this morally acceptable to us? Such obvious infringements of rights to free speech are clear contraventions of the core human rights of people in Hong Kong. Furthermore, this crisis has escalated with the protests, entangling violence in the political confrontations. Police have indiscriminately used force to suppress both peaceful and violent protesters, with Amnesty International reporting “Hongkongers’ human rights situation has violations on almost every front”. The Chinese government is certainly behind the police’s ruthless response to protesters, manipulating its pawns in Hong Kong to quell dissent. This use of force cannot be tolerated; it is a barefaced oppression of a people who simply desire freedom, rights and democracy and it contradicts every principle that our society is founded upon. If we continue abdicating responsibility for holding Beijing to account, who knows how far this crisis will deteriorate? Beijing’s oppression of Hong Kongers’ human rights will not disappear. Britain — as a UN member, former sovereign of Hong Kong and advocate for human rights — must make a stand with the protesters, who embody the principles of our country in its former colony.

Moreover, if we do not respond to these atrocities, tyrants elsewhere will only be emboldened to further strengthen their regimes. Oligarchs, autocrats and dictators are prevalent in our world today, with millions of people oppressed by totalitarian states. For instance, in India, the Hindu nationalist government, headed by Narendra Modi, unequivocally tyrannizes the people of Kashmir: severing connections to the internet, unlawfully detaining thousands of people and reportedly torturing dissidents. The sheer depravity of these atrocities is abhorrent. And the West’s reaction to these barbarities? We have lauded and extolled Modi as, in the words of then-President Barack Obama, “India’s reformer in chief”, apathetic to the outrages enacted by his government. This exemplifies our seeming lack of concern for other authoritarian regimes around the world: from our passivity towards the Saudi Arabian royal family’s oppressive oligarchy to our unconcern about the devilish dictatorship of President Erdoğan in Turkey. Our hypocrisy is irrefutable; this needs to change. The struggle in Hong Kong is a critical turning point in our battle against such totalitarian states. If we remain complacent, China will thwart the pro-democracy movement and Beijing will continue to subjugate Hong Kong unabashed. Consequently, tyrants worldwide will be emboldened to tighten their iron fists, furthering the repression of their peoples. But, if we support the protesters, we can institute a true democracy in Hong Kong. Thus, we will set a precedent for future democracies facing such turbulent struggles in totalitarian states, establishing an enduring stance for Western democracies to defend. But to achieve this, we must act decisively and immediately to politically pressure Beijing to make concessions, in order to create a truly autonomous Hong Kong.

Of course, the Chinese government is trying to excuse its actions. It claims to be merely maintaining order in a city of its own country, while Western powers fuel protests in Hong Kong. Such fabrications from Chinese spin-doctors are obviously propaganda. There is absolutely no evidence to corroborate the claim of “foreign agents” sparking violence in Hong Kong. And, whilst some protesters are employing aggressive tactics, their actions are justified: peaceful protests in the past, such as the Umbrella Movement of 2014, yielded no meaningful change. Protesters are being forced into violence by Beijing, which stubbornly refuses to propose any meaningful reforms.

Now, we face a decision, one which will have profound and far-reaching repercussions for all of humanity. Do we ignore the egregious crimes of the Chinese government, and in our complacency embolden tyrants worldwide? Or do we fight? Hong Kongers are enduring restricted freedoms, persecution and a perversion of their constitution; we must oppose this oppression resolutely. Is it our duty to support the protesters? Or, is democracy not worth fighting for?

Occurrence and prevalence of zoonoses in urban wildlife

A zoonosis is a disease that can be transmitted from animals to humans. Zoonoses in companion animals are known and described extensively. A lot of research has already been done: Rijks et al (2015), for example, list the 15 diseases of prime public health relevance, economic importance or both (Rijks(1)). Sterneberg-van der Maaten et al (2015) composed a list of the 15 priority zoonotic pathogens, which includes the rabies virus, Echinococcus granulosus, Toxocara canis/cati and Bartonella henselae (Sterneberg-van der Maaten(2)).

Although the research is extensive, the knowledge about zoonoses and hygiene among owners, health professionals and people in related professions, such as pet shop employees, is low. According to Van Dam et al (2016)(3), 77% of pet shop employees do not know what a zoonosis is, and just 40% of pet shops have a protocol for hygiene and disease prevention. Only 27% of pet shops and animal shelters instruct their clients about zoonoses. It may therefore be assumed that the majority of the public is unaware of the health risks involving companion animals like cats and dogs. Veterinarians give information about responsible pet ownership and the associated risks when the pet owner visits the clinic (Van Dam(3), Overgaauw(4)). In other words, the knowledge obtained from research has not been disseminated effectively.

However, urban areas are not populated only with domestic animals. There is also a variety of non-domesticated animals living in close vicinity to domesticated animals and the human population: the so-called urban wildlife. Urban wildlife is defined as any animal that has not been domesticated or tamed and lives or thrives in an urban environment (freedictionary(5)). Just like companion animals, urban wildlife carries pathogens that are zoonotic, for example Echinococcus multilocularis, a parasite that can be transmitted from foxes to humans. Another example is the rabies virus, which is transmitted by hedgehogs and bats. Some zoonotic diseases can be transmitted to humans from different animals; Q-fever occurs in mice, foxes, rabbits and sometimes even in companion animals.

There is little knowledge about the risk factors that influence the transmission of zoonoses in urban areas (Mackenstedt(6)). This is mostly due to the lack of active surveillance of carrier animals. Such surveillance requires fieldwork, which is expensive and time-consuming, and it often yields no immediate result for public-health authorities. This is why surveillance is often only initiated during or after an epidemic (Heyman(7)). Meredith et al (2015) mention that, due to the unavailability of a reliable serological test, it is not yet known for many species what their contribution is to transmission to humans (Meredith(8)).

The general public living in urban areas is largely unaware of the diseases transmitted by the urban wildlife present in their living area (Himsworth(9), Heyman(7), Dobay(10), Meredith(8)). Since all these diseases can also pose a risk to public health, the public may need to be informed of these risks.

The aim of this study is to determine the occurrence and prevalence of zoonoses in urban wildlife. To do this, the ecological structure of a European city will be investigated first, to determine which wildlife lives in urban areas. Secondly, an overview of the most common and important zoonoses in companion animals will be discussed, followed by zoonoses in urban wildlife.

2. Literature review

2.1 Ecological structure of the city

Humans and animals live closely together in cities. Both companion animals and urban wildlife share the environment with humans. Companion animals are important to human society: they perform working roles (e.g. assistance dogs for hearing- or visually impaired people) and they play a role in human health and childhood development (Day(11)).

A distinction can be made between animals that live in the inner city and animals that live on the outskirts of the city. The animals that live in the majority of European inner cities are brown rats, house mice, bats, rabbits and different species of birds. Those living outside the stone inner city are other species of mice, hedgehogs, foxes and moles (Auke Brouwer(12)). In order to create safe passage for these animals, ecological structures are created. These structures include wet passageways for amphibians and snakes and dry passageways such as underground tunnels, special bridges and cattle grids (Spier M(13)).

A disadvantage of humans and animals living in close vicinity to each other is the possibility of disease transmission (Auke Brouwer(12)). Diseases can be transmitted from animals to humans in different ways, for example through eating infected food, inhalation of aerosols, via vectors or via fecal-oral contact (WUR(14)). The most relevant routes of transmission for this review are: indirect physical contact (e.g. contact with a contaminated surface), direct physical contact (touching an infected person or animal), transmission through skin lesions, fecal-oral transmission and airborne transmission (aerosols). In the following section an overview of significant zoonoses of companion animals will be given. This information will enable a comparison with urban wildlife zoonoses later in this review.

2.2 Zoonoses of cats and dogs

There are many animals living in European cities, both companion animals and urban wildlife. 55-59% of Dutch households have one or more companion animals (van Dam(3)), which amounts to approximately 2 million dogs and 3 million cats (RIVM(15)). In all of Europe there are approximately 61 million dogs and 66 million cats. Owning a pet has many advantages, but companion animals are also able to transmit diseases to humans (Day(11)). In the following section significant zoonoses of companion animals will be described.

A. Bartonellosis (cat scratch disease)

Bartonellosis is an infection with Bartonella henselae or B. clarridgeiae. Most infections in cats are thought to be subclinical. If disease does occur, the symptoms are mild and self-limiting, characterized by lethargy, fever, gingivitis, uveitis and nonspecific neurological signs (Weese JS(16)). The seroprevalence in cats is 81% (Barmettler(17)).

Humans get infected by scratches or bites and sometimes by infected fleas and ticks. In the vast majority of cases, the infection is also mild and self-limiting. The clinical signs in humans include development of a papule at the site of inoculation, followed by regional lymphadenopathy and mild fever, generalized myalgia and malaise. This usually resolves spontaneously over a period of weeks to months (Weese JS(16)).

Few cases of human bartonellosis occur in The Netherlands. Based on laboratory diagnoses done by the RIVM, the bacterium causes 2 cases per 100,000 humans each year. However, the true number could be ten times higher, since the disease is usually mild and self-limiting, so most people do not visit a health care professional (RIVM(18)).

B. Leptospirosis

This disease is caused by the bacterium Leptospira interrogans. According to Weese et al (2002), leptospirosis is the most widespread zoonotic disease in the world. The bacterium can infect a wide range of animals (Weese(16)).

In dogs and cats, leptospirosis is a relatively minor zoonosis. It is not known exactly how many dogs are infected subclinically or asymptomatically each year, but according to Houwers et al (2009), around 10 cases occur in The Netherlands annually (Houwers(19)). RIVM states that 0.2 cases per 100,000 humans occur each year (RIVM(20)).

Infection in dogs is called Weil’s disease. Clinical signs can be peracute, acute, subacute or chronic. A peracute infection usually results in sudden death with few clinical signs. Dogs with an acute infection are icteric, have diarrhea, vomit and may experience peripheral vascular collapse. The subacute form generally manifests as fever, vomiting, anorexia, polydipsia and dehydration, and in some cases severe renal disease can develop. Symptoms of a chronic infection are fever of unknown origin, unexplained renal failure, hepatic disease and anterior uveitis. The majority of infections in dogs are subclinical or chronic. In cats clinical disease is infrequent (Weese(16)).

According to Barmettler et al (2011), the risk of transmission of Leptospira from dogs to humans is just theoretical. All tested humans were exposed to infected dogs, but all were seronegative to the bacteria (Barmettler(17)).

The same bacterium that causes leptospirosis in dogs, Leptospira interrogans, is responsible for the disease in rats. This bacterium is considered the most widespread zoonotic pathogen in the world, and rats are the most common source of human infection, especially in urban areas (Himsworth(21)). According to the author, the bacterium asymptomatically colonizes the rat kidney and the rats shed the bacteria in their urine (Himsworth(9)). The bacteria can survive outside the rat for some time, especially in a warm and humid environment (RIVM(20)).

People become infected through contact with urine, or through contact with contaminated soil or water (Himsworth(21)). The Leptospira bacteria can enter the body via mucous membranes or open wounds (Oomen(22)). The symptoms and severity of disease are highly variable, ranging from asymptomatic infection to sepsis and death. Common complaints are headache, nausea, myalgia and vomiting. Moreover, neurologic, cardiac, respiratory, ocular and gastrointestinal manifestations can occur (Weese JS(16)).

The prevalence in rats differs between cities and even between locations in the same city. Himsworth (2013) states that in Vancouver 11% of the tested rats were positive for Leptospira (Himsworth(9)). Another study by Easterbrook (2007) found 65.3% of all tested rats in Baltimore to be positive for the bacteria (Easterbrook(23)). Krojgaard (2009) found a prevalence between 48% and 89% at different locations in Copenhagen (Krojgaard(24)).

C. Dermatophytosis (ringworm)

Dermatophytosis is a fungal dermatologic disease, caused by Microsporum spp. or Trichophyton spp. It causes disease in a variety of animals (Weese(16)). According to Kraemer (2012), the dermatophytes that occur in rabbits are Trichophyton mentagrophytes and Microsporum canis, although the former is more common (Kraemer(25)).

Dermatophytes live in the keratin layers of the skin and cause ringworm. They depend on human or animal infection for survival. Infection occurs through direct contact between dermatophyte arthrospores and keratinocytes or hairs. Transmission through indirect contact also occurs, for example via toiletries, furniture or clothes (Donnelly(26), RIVM(18)). Animals (especially cats) can transmit M. canis infection while remaining asymptomatic (Weese JS(16)).

The symptoms in both animals and humans can vary from mild or subclinical to severe lesions similar to pemphigus foliaceus (itching, alopecia and blistering). The skin lesions develop 1-3 weeks after infection (Weese JS(16)). Healthy, intact skin cannot be infected, but only mild damage is required to make the skin susceptible to infection. No living tissue is invaded; only the keratinized stratum corneum is colonized. However, the fungus does induce an allergic and inflammatory eczematous response in the host (Donnelly(26), RIVM(18)).

Dermatophytosis is not common in humans. RIVM states that each year 3000 per 100,000 humans get infected. Children between the ages of 4 and 7 are the most susceptible to the fungal infection. In cats and dogs, the prevalence of M. canis is much higher: 23.3% according to Seebacher(27). The prevalence in rabbits is 3.3% (d’Ovidio(28)).

D. Echinococcosis

Echinococcus granulosus can be transmitted from dogs to humans. Dogs are the definitive hosts, while herbivores or humans are the intermediate hosts. Dogs can become infected by eating infected organs, for example from sheep, pigs and cattle (RIVM(29)). The intermediate hosts develop a hydatid cyst with protoscoleces after ingesting eggs produced and excreted by definitive hosts. The protoscoleces evaginate in the small intestine and attach there (MacPherson(30)).

In most parts of Europe, Echinococcus granulosus occurs only occasionally. However, in Spain, Italy, Greece, Romania and Bulgaria the parasite is highly endemic.

Animals, either as definitive or as intermediate hosts, rarely show symptoms.

Humans, on the other hand, can show symptoms, depending on the size and site of the cyst and its growth rate. The disease can become life-threatening if a cyst in the lungs or liver bursts; a possible complication in that case is anaphylactic shock (RIVM(29)).

In the Netherlands, echinococcosis rarely occurs in humans. Between 1978 and 1991, 191 patients were diagnosed, but it is not known how many of these were new cases. The risk of infection is higher in the case of poor hygiene and living closely together with dogs (RIVM(29)). In a study by Fotiou et al (2012), the prevalence of Echinococcus granulosus in humans was 1.1% (Fotiou(31)). The prevalence in dogs is much higher: 10.6% according to Barmettler et al (17).

E. Toxocariasis

Toxocariasis is caused by Toxocara canis or Toxocara cati. Toxocara is present in the intestine of 32% of tested dogs, 39% of tested cats and 16-26% of tested red foxes (Luty(32), LETKOVÁ(33)). In dogs younger than 6 weeks the prevalence can be up to 80% (Kantere) and in kittens of 4-6 months old it can be 64% (Luty(32)). The host becomes infected by swallowing the parasite’s embryonated eggs (Kantere(34)).

Dogs and red foxes are the definitive hosts of T. canis, cats of T. cati (Luty(32)). Humans are paratenic hosts. After ingestion, the larvae hatch in the intestine and migrate throughout the body via the blood vessels (visceral larva migrans). In young animals the migration occurs via the lungs and trachea. After being swallowed, the larvae mature in the intestinal tract.

In paratenic hosts and adult dogs that have some degree of acquired immunity, the larvae undergo somatic migration. There they remain as somatic larvae in the tissues. If dogs eat a Toxocara-infected paratenic host, larvae will be released and develop to adult worms in the intestinal tract (MacPherson(30)).

Humans can be infected by oral ingestion of infective eggs from contaminated soil, from unwashed hands or consumption of raw vegetables (MacPherson(30)).

The clinical symptoms in animals depend on the age of the animal and on the number, location and developmental stage of the worms. After birth, puppies can suffer from pneumonia because of tracheal migration and die within 2-3 days. At 2-3 weeks of age, puppies can show emaciation and digestive disturbance because of mature worms in the intestine and stomach. Clinical signs are diarrhea, constipation, coughing, nasal discharge and vomiting.

Clinical symptoms in adult dogs are rare (MacPherson(30)).

In most human cases following infection with small numbers of larvae, the disease occurs without symptoms. It is mostly children who get infected; visceral larva migrans (VLM) is mainly diagnosed in children of 1-7 years old. The symptoms can be general malaise, fever, abdominal complaints, wheezing or coughing. Severe clinical symptoms are mainly found in children of 1-3 years old.

Most of the larvae seem to be distributed to the brain and can cause neurological disease. Larvae do not migrate continuously. They rest periodically, and during such periods they induce an immunologically mediated inflammatory response (MacPherson(30)).

The prevalence in children is much lower than in adults: 7% and 20%, respectively. The risk of infection with Toxocara spp. increases with poor hygiene (Overgaauw(36)). In the external environment the eggs survive for months, and consequently toxocariasis represents a significant public health risk (Kantere(34)). High rates of soil contamination with Toxocara eggs have been demonstrated in parks, playgrounds, sandpits and other public places. Direct contact with infected dogs is not considered a potential risk for human infection, because embryonation to the stage of infectivity requires a minimum of 3 weeks (MacPherson(30)).

F. Toxoplasmosis

Toxoplasmosis is caused by the protozoan Toxoplasma gondii. Cats are the definitive hosts, and other animals and humans act as intermediate hosts. Infected cats excrete oocysts in the feces. These oocysts end up in the environment, where they are ingested by intermediate hosts (directly, or indirectly via food or water). In the intermediate host the protozoan migrates until it gets stuck; it is then encapsulated and remains at that site. Cats become infected by eating infected intermediate hosts.

Animals rarely show symptoms, although some young cats get diarrhea, encephalitis, hepatitis and pneumonia.

In most humans, infection is asymptomatic. Pregnant women can transmit the protozoan through the placenta and infect the unborn child. The symptoms in the child depend on the stage of pregnancy: infection in the early stages leads to severe abnormalities and in many cases to abortion, whereas infection at a later stage leads to premature birth and symptoms of an infectious disease (fever, rash, icterus, anemia and an enlarged spleen or liver). In most cases, however, the symptoms start after birth. Most damage is done in the eyes (RIVM(37)).

Based on data from the RIVM and Overgaauw (1996), the disease most commonly transmitted to humans is toxoplasmosis. The prevalence was 40.5% in 1996. This number has fallen in recent decades: Jones (2009) states that in 2009 the prevalence was 24.6% (Jones(38)). The prevalence rises with age, being 17.5% in humans younger than 20 years and 70% in humans of 65 years and older. There is no increased risk of infection for humans who keep a cat as a pet (RIVM(37)). Birgisdottir et al (2006) studied the prevalence in cats in Sweden, Estonia and Iceland, and found a prevalence of 54.9%, 23% and 9.8% in Estonia, Sweden and Iceland, respectively (Birgisdottir(39)).

G. Q-fever

The aetiological agent of Q-fever is the bacterium Coxiella burnetii. The bacterium has a very wide host range, including ruminants, birds and mammals such as small rodents, dogs, cats and horses. Accordingly, there is a complex reservoir system (Meredith(8)).

The extracellular form of the bacterium is very resistant and can therefore persist in the environment for several weeks. It can also be spread by the wind, so direct contact with animals is not required for infection. Coxiella burnetii is found in both humans and animals in the blood, lungs, spleen and liver, and during pregnancy in large quantities in the placenta and mammary glands. It is shed in urine and feces, and during pregnancy in the milk (Meredith(8)).

Humans who live close to animals (as in the city) have a higher risk of infection, since the mode of transmission is aerogenic or via direct contact. The bacterium is excreted in urine, feces, placental material and amniotic fluid; after drying, it is spread aerogenically (RIVM(40)). Acute infection is characterized by atypical pneumonia and hepatitis and in some cases transient bacteraemia. The bacterium then spreads haematogenously, resulting in infection of the liver, spleen, bone marrow, reproductive tract and other organs. This is followed by the formation of granulomatous lesions in the liver and bone marrow and the development of endocarditis involving the aortic and mitral valves (Woldehiwet(41)).

On the other hand, there is little information about the clinical signs of Q fever in animals, but variable degrees of granulomatous hepatitis, pneumonia, or bronchopneumonia have been reported in mice (Woldehiwet(41)). In pregnant animals, abortion or low foetal birth weight can occur (Meredith(8), Woldehiwet(41)).

The prevalence in the overall human population in Europe is not high (2.7%), but in risk groups like veterinarians, the prevalence can be as high as 83% (RIVM(40)).

Meredith et al developed a modified indirect ELISA kit adapted for use in multiple species. They tested the prevalence of C. burnetii in wild rodents (bank vole, field vole and wood mouse), red foxes and domestic cats in the United Kingdom. The overall prevalence in the rodents was 17.3%; in cats it was 61.5% and in foxes 41.2% (Meredith(8)). In rabbits, the prevalence was 32.3% (González-Barrio(42)).

H. Pasteurellosis

Pasteurellosis is caused by Pasteurella multocida, a coccobacillus found in the oral, nasal and respiratory cavities of many species of animals (dogs, cats, rabbits, etc.). It is one of the most prevalent commensal and opportunistic pathogens in domestic and wild animals (Wilson(43), Giordano(44)). Human infections are associated with animal exposure, usually after animal bites or scratches (Giordano(44)). Kissing animals, or licking of skin abrasions or mucosal surfaces by animals, can also lead to infection. Transmission between animals is through direct contact with nasal secretions (Wilson(43)).

In both animals and humans Pasteurella multocida causes chronic or acute infections that can lead to significant morbidity, with symptoms of pneumonia, atrophic rhinitis, cellulitis, abscesses, dermonecrosis, meningitis and/or hemorrhagic septicaemia. In animals the mortality is significant, but not in humans; this is probably due to the immediate prophylactic treatment of animal bite wounds with antibiotics (Wilson(43)).

Disease in animals appears as a chronic infection of the nasal cavity, paranasal sinuses, middle ears, lacrimal and thoracic ducts of the lymph system, and lungs. Primary infection with respiratory viruses or Mycoplasma species predisposes to a Pasteurella infection (Wilson(43)).

The incidence in humans is 0.19 cases per 100,000 humans (Nseir(45)). The prevalence in dogs and cats is 25-42% (Mohan(46)). The only known prevalence in rabbits is 29.8%, reported in laboratory animal facilities (Kawamoto(47)).

2.3 Zoonoses of urban wildlife

The majority of the human population lives in cities. As a result, in some countries the urban landscape encompasses more than half of the land surface. This leaves little space for the wildlife species living in the country. Some species are nowadays found more in urban areas than in their native environment; they have adapted to urban ecosystems. This is a positive aspect for biodiversity in cities. On the other hand, just like companion animals, this urban wildlife can transmit diseases to humans (Dearborn(49)). In the following section, significant zoonoses of urban wildlife will be described.

A. Zoonoses of rats

The following zoonoses occur in urban rats: leptospirosis (see 2.2B) and rat bite fever.

Rat bite fever

Rat bite fever is caused by Streptobacillus moniliformis or Spirillum minus (Chafe(50)). These bacteria are part of the normal oropharyngeal flora of the rat, and they are thought to be present in rat populations worldwide.

Since the bacteria are part of the normal flora, rats themselves are not susceptible to them. In people, on the other hand, the bacteria can cause rat bite fever. Transmission occurs through the bite of an infected rat or through ingestion of contaminated food; the latter causes Haverhill fever.

The clinical symptoms are fever, chills, headache, vomiting, polyarthritis and skin rash. In Haverhill fever pharyngitis and vomiting may be more pronounced. If not treated, S. moniliformis infection can progress to septicemia with a mortality rate of 7-13% (Himsworth(21)).

The prevalence of Streptobacillus spp. in rats is 25% (Gaastra(51)). According to Trucksis et al (2016), rat bite fever is very rare in humans; only a few cases occur each year (Trucksis(52)).

B. Zoonoses of mice

The zoonotic diseases that occur in mice are: hantavirus infections, lymphocytic choriomeningitis, tularemia and Q-fever (see 2.2G).

Hantaviruses

There are different types of hantaviruses, each carried by a specific rodent host species. In Europe, three types occur: Puumala virus (PUUV), carried by the bank vole; Dobrava virus (DOBV), carried by the yellow-necked mouse; and Saaremaa virus (SAAV), carried by the striped field mouse (Heyman(7)). SAAV has been found in Estonia, Russia, South-Eastern Finland, Germany, Denmark, Slovenia and Slovakia. PUUV is very common in Finland, Northern Sweden, Estonia, the Ardennes Forest region, parts of Germany, Slovenia and parts of European Russia. DOBV has been found in the Balkans, Russia, Germany, Estonia and Slovakia (Heyman(7)).

Hantaviruses are transmitted via direct and indirect contact. Infective particles are secreted in feces, urine and saliva (Kallio(53)).

The disease is asymptomatic in mice (Himsworth(21)). Humans, on the other hand, do develop symptoms. All types of hantavirus cause hemorrhagic fever with renal syndrome (HFRS), but they differ in severity. HFRS is characterized by acute onset, fever, headache, abdominal pain, backache, temporary renal insufficiency and thrombocytopenia. In DOBV infection, the extent of hemorrhages, the need for dialysis treatment, hypotension and case-fatality rates are much higher than in PUUV or SAAV infection. Mortality is very low (approximately 0.1%) (Heyman(7)).

Hantaviruses are an endemic zoonosis in Europe; tens of thousands of people get infected each year (Heyman(7)). The prevalence in mice is 9.5% (Sadkowska(54)).

Lymphocytic choriomeningitis

Lymphocytic choriomeningitis is a viral disease caused by an arenavirus (Chafe(50)). The natural reservoirs of arenaviruses are rodent species, which are asymptomatically infected (Oldstone(55)).

In humans the disease is characterized by varying signs, from inapparent infection to acute, fatal meningoencephalitis. The disease is transmitted through mouse bites and through material contaminated with excretions and secretions of infected mice (Chafe(50)).

The virus causes little or no toxicity to the infected cells. The disease, and the associated cell and tissue injury, are caused mostly by the activity of the host’s immune system: the antiviral response produces factors that act against the infected cells and damage them. Another factor is the displacement, by viral proteins, of cellular molecules that are normally attached to cellular receptors. This can result in conformational changes, which cause the cell membrane to become fragile and interfere with normal signalling events (Oldstone(55)).

The prevalence of lymphocytic choriomeningitis in humans is 1.1% (Lledó(56)). In mice, the prevalence is 2.4% (Forbes(57)).

Tularemia

Tularemia is caused by the bacterium Francisella tularensis. Only a few animal outbreaks have been reported, and so far only one outbreak in wildlife has been closely monitored (Dobay(10)). The bacterium can infect a large number of animal species. Outbreaks among mammals and humans are rare. However, outbreaks can occur when the source of infection is widely spread and/or many people or animals are exposed. Outbreaks are difficult to monitor and trace, because mostly wild rodents and lagomorphs are affected (Dobay(10)).

People get infected in five ways: ingestion, direct contact with a contaminated source, inhalation, arthropod intermediates and animal bites. In animals the route of transmission is not yet known. The research of Dobay et al (2015) suggests that tularemia can cause severe outbreaks in small rodents such as house mice. Such an outbreak exhausts itself in approximately three months, so no treatment is needed (Dobay(10)).

Tularemia is a potentially lethal disease. There are different clinical manifestations, depending on the route of infection. The ulceroglandular form is the most common and occurs after handling contaminated sources. The oropharyngeal form can be caused by ingestion of contaminated food or water. The pulmonary, typhoidal, glandular and ocular forms occur less frequently (Dobay(10), Anda(58)).

In humans, the symptoms of the glandular and ulceroglandular forms are cervical, occipital, axillary or inguinal lymphadenopathy. The symptoms of pneumonic tularemia are fever, cough and shortness of breath (Weber(59)). Clinical manifestations of the oropharyngeal form include adenopathies at the elbow, the armpit or both, cutaneous lesions, fever, malaise, chills and shivering, a painful sore throat with swollen tonsils, and enlarged cervical lymph nodes (Sahn(60), Anda(58)).

The clinical features in animals are unspecific, and the pathological effects vary substantially between animal species and geographical locations. The disease can be very acute (for example in highly susceptible species such as mice), with development of sepsis, liver and spleen enlargement, and pinpoint white foci in the affected organs. The subacute form can be found in moderately susceptible species such as hares; the symptoms are granulomatous lesions in the lungs, pericardium and kidneys.

Infected animals are usually easy to catch, moribund or even dead (Maurin(61)).

Rossow et al (2015) state that the prevalence in humans is 2% (Rossow(62)). The highest prevalence found in small mammals during an outbreak in Central Europe is 3.9% (Gurycová(63)).

C. Zoonoses of foxes

The zoonoses that can be transmitted from foxes to humans are Q-fever (see 2.2G), toxocariasis (see 2.2E) and infection with Echinococcus multilocularis.

Echinococcus multilocularis

This is considered one of the most serious parasitic zoonoses in Europe. Red foxes are the main definitive hosts. The natural intermediate hosts are voles, but many animals can act as accidental hosts, for example monkeys, humans, pigs and dogs. The larval stage of Echinococcus multilocularis causes alveolar echinococcosis (AE). The infection is widely distributed in foxes, with a prevalence of 70% in some areas; RIVM states that the prevalence in The Netherlands is 10-13%. The prevalence in humans differs throughout Europe and is related to the prevalence in foxes: if the prevalence in foxes is high, the prevalence in humans increases. However, no prevalence higher than 0.81 per 100,000 inhabitants has been reported (RIVM(29)). Foxes living in urban areas pose a threat to public health, and there is concern that this risk may rise due to the suspected geographical spread of the parasite (Conraths(64)).

In foxes the helminth colonizes the intestines but does not cause disease. In intermediate and accidental hosts, cysts form after oral intake of eggs excreted by foxes, which causes AE. The size, site and growth rate of the larval stage determine the symptoms. Most of the time, infection starts in the liver, causing local abnormalities. The larvae grow invasively into other organs and blood vessels. It can take five to fifteen years before clear symptoms show (RIVM(29)). In humans AE is a very rare disease, but its incidence has increased in recent years.

D. Zoonoses of rabbits

The zoonoses that can be transmitted from rabbits to humans are: pasteurellosis (see 2.2H), tularemia (see 2.3B), Q-fever (see 2.2G), dermatophytosis (see 2.2C) and cryptosporidiosis.

Cryptosporidiosis

Cryptosporidium is a protozoan. It is considered the most important zoonotic pathogen causing diarrhea in humans and animals. In rabbits, Cryptosporidium cuniculus (the rabbit genotype) is the most common genotype (Zhang(65)). Two large studies have been done in rabbits; they showed a prevalence between 0.0% and 0.9% (Robinson(66)).

The public health risks of cryptosporidiosis from wildlife are poorly understood. No studies of the host range and biological features of the Cryptosporidium rabbit genotype were identified. However, human-infectious Cryptosporidium species (including Cryptosporidium parvum) have caused experimental infections in rabbits, and there is some evidence that this occurs naturally (Robinson(66)).

In humans and neonatal animals, the pathogen causes gastroenteritis, chronic diarrhea or even severe diarrhea (Zhang(65), Robinson(66)). In >98% of these cases the disease is caused by C. hominis or C. parvum, but recently the rabbit genotype has emerged as a human pathogen. Little is known yet about this genotype, because only a few cases in humans have been reported (Robinson(66)). Since few isolates have been found in humans and little is known about human infection with the Cryptosporidium rabbit genotype, Robinson et al (2008) assumed this genotype is of minor significance to public health, although further investigation is needed (Robinson(67)).

E. Zoonoses of hedgehogs

Hedgehogs pose a risk for a number of potential zoonotic diseases, for example microbial infections like Salmonella spp., Yersinia pseudotuberculosis and Mycobacterium marinum, and dermatophytosis.

Salmonellosis

Salmonellosis is the most important zoonotic disease in hedgehogs. The prevalence of Salmonella in hedgehogs is 18,9%. The infection can be either asymptomatic or symptomatic. Hedgehogs that do show symptoms can display anorexia, diarrhea and weight loss. Humans get infected through ingestion of the bacteria, after handling the hedgehog or contact with feces (Riley(68)).

The Salmonella serotypes that are associated with hedgehogs are S. tilene and S. typhimurium (Woodward(69), Riley(68)).

Clinical manifestations in humans (mainly adults) of both serotypes involve self-limiting gastroenteritis (including headache, malaise, nausea, fever, vomiting, abdominal pain and diarrhea (Woodward(69))), but bacteremia and localized and endovascular infections may also occur (Crum Cianflone(70)). Infection with S. typhimurium and S. tilene is rare in humans, approximately 0,057 per 100.000 inhabitants (CDC(71)).

Yersinia pseudotuberculosis

No clinical symptoms of Yersinia pseudotuberculosis infection in hedgehogs are described in the literature. However, this bacterium causes gastroenteritis in humans, characterized by a self-limiting mesenteric lymphadenitis, which mimics appendicitis. Complications can occur, including erythema nodosum and reactive arthritis (Riley(68)). Since only Riley et al (2005) reported a case concerning Y. pseudotuberculosis, no information is available yet about the prevalence in hedgehogs or humans, or about the route of transmission, although Riley et al (2005) claim that the zoonosis occurs commonly (Riley(68)).

Mycobacterium marinum

Mycobacterium marinum infection is not common in hedgehogs. The bacterium causes systemic mycobacteriosis. The porte d'entrée of the bacterium is a wound or abrasion in the skin, after which it spreads systemically through the lymphatic system. This is also the way in which hedgehogs transmit the bacterium to humans: the spines of the hedgehog can cause wounds through which the bacterium can enter. Symptoms in humans consist of clusters of papules or superficial nodules and can be painful (Riley(68)). No information is reported regarding the prevalence of the bacterium in hedgehogs or humans.

Dermatophytosis

Dermatophytosis has been seen in hedgehogs. The most frequently isolated dermatophyte is Trichophyton mentagrophytes var. erinacei. Microsporum spp. have also been reported. Lesions in the hedgehog are similar to those in other species: nonpruritic, dry, scaly skin with bald patches and spine loss. Hedgehogs can also be asymptomatic carriers, which is a risk for potential zoonotic transmission (Riley(68)).

In humans, Trichophyton mentagrophytes var. erinacei causes a local rash with pustules at the edges and an intensely irritating, thickened area in the centre of the lesion. This usually resolves spontaneously after 2-3 weeks (Riley(68)).

Few cases of Trichophyton mentagrophytes var. erinacei have been reported (Pierard-Franchimont(72), Schauder(73), Keymer(74)), but no prevalence is known for humans or hedgehogs.

F. Zoonoses of bats

According to Calisher et al (2009), bat viruses that are proven to cause highly pathogenic disease in humans are rabies virus and related lyssaviruses, Nipah and Hendra viruses, and SARS-CoV-like virus (Calisher(75)). Only the first of these is relevant for this review, since Nipah and Hendra do not occur in Europe (Munir(76)) and SARS is not directly transmitted to humans (Hu(77)).

Rabies virus and related lyssaviruses

The rabies virus is present in the saliva of infected animals. Accordingly, the virus is transmitted from mammals to humans through a bite (Calisher(75)).

Symptoms are similar in animals and humans. The disease starts with a prodromal stage. Symptoms are non-specific and consist of fever, itching and pain near the site of the bite wound.

The furious stage follows. Clinical features are hydrophobia (violent inspiratory muscle spasms, hyperextension and anxiety after attempts to drink), hallucinations, fear, aggression, cardiac tachyarrhythmias, paralysis and coma.

The final stage is the paralytic stage. It is characterized by ascending paralysis and loss of tendon reflexes, sphincter dysfunction, bulbar/respiratory paralysis, sensory symptoms, fever, sweating, gooseflesh and fasciculation.

Untreated, the disease is fatal within approximately five days after the first symptoms appear (Warrell(78)).

Lyssaviruses from bats are related to the rabies virus. There are seven lyssavirus genotypes. Some of these cause disease in humans, similar to rabies; others do not cause disease. Although it is still unclear, transmission is thought to occur through bites (Calisher(75)).

Since 1977, four cases of human rabies resulting from a bat bite have been reported in The Netherlands. In bats living there, the prevalence is 7% (RIVM).


Sickle-cell conditions

NORMAL HEMOGLOBIN STRUCTURE:

Hemoglobin is present in erythrocytes and is important for normal oxygen delivery to tissues. Hemoglobinopathies are disorders affecting the structure, function or production of hemoglobin.

Different hemoglobins are produced during embryonic, fetal and adult life. Each consists of a tetramer of globin polypeptide chains: a pair of α-like chains 141 amino acids long and a pair of β-like chains 146 amino acids long. The major adult hemoglobin, HbA, has the structure α2β2. HbF (α2γ2) predominates during most of gestation, and HbA2 (α2δ2) is the minor adult hemoglobin.

Each globin chain surrounds a single heme moiety, consisting of a protoporphyrin IX ring complexed with a single iron atom in the ferrous state (Fe2+). Each heme moiety can bind a single oxygen molecule; a molecule of hemoglobin can transport up to four oxygen molecules as each hemoglobin contains four heme moieties.

The amino acid sequences of various globins are highly homologous to one another, and each has a highly helical secondary structure. Their globular tertiary structures cause the exterior surfaces to be rich in polar (hydrophilic) amino acids that enhance solubility, and the interior to be lined with nonpolar groups, forming a hydrophobic pocket into which heme is inserted. Numerous tight interactions (i.e., α1β1 contacts) hold the α and β chains together. The complete tetramer is held together by interfaces (i.e., α1β2 contacts) between the α-like chain of one dimer and the non-α chain of the other dimer. The hemoglobin tetramer is highly soluble, but individual globin chains are insoluble. (Unpaired globin precipitates, forming inclusions that damage the cell and can trigger apoptosis. Normal globin chain synthesis is balanced so that each newly synthesized α or non-α globin chain will have an available partner with which to pair.)

FUNCTION OF HEMOGLOBIN:

Solubility and reversible oxygen binding are the two important functions that are deranged in hemoglobinopathies. Both depend mostly on the hydrophilic surface amino acids, the hydrophobic amino acids lining the heme pocket, a key histidine in the F helix, and the amino acids forming the α1β1 and α1β2 contact points. Mutations in these strategic regions alter oxygen affinity or solubility.

The principal function of Hb is oxygen transport and delivery to tissue, which is represented most appropriately by the oxygen dissociation curve (ODC).

Fig: The well-known sigmoid shape of the oxygen dissociation curve (ODC), which reflects the allosteric properties of haemoglobin.

Hemoglobin binds O2 efficiently at the partial pressure of oxygen (Po2) of the alveolus, retains it in the circulation and releases it to tissues at the Po2 of tissue capillary beds. The shape of the curve is due to co-operativity between the four haem groups: when one takes up oxygen, the affinity for oxygen of the remaining haems of the tetramer increases dramatically. This is because haemoglobin can exist in two configurations, deoxy (T) and oxy (R), and the T form has a lower affinity than the R form for ligands such as oxygen.

Oxygen affinity is controlled by several factors. The Bohr effect (e.g. oxygen affinity is decreased with increasing CO2 tension) is the ability of hemoglobin to deliver more oxygen to tissues at low pH. The major small molecule that alters oxygen affinity in humans is 2,3-bisphosphoglycerate (2,3-BPG; formerly 2,3-DPG), which lowers oxygen affinity when bound to hemoglobin. HbA has a reasonably high affinity for 2,3-BPG. HbF does not bind 2,3-BPG, so it tends to have a higher oxygen affinity in vivo. Increased levels of 2,3-BPG, with an associated increase in P50 (the partial pressure at which haemoglobin is 50 per cent saturated), occur in anaemia, alkalosis, hyperphosphataemia, hypoxic states and in association with a number of red cell enzyme deficiencies.
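For readers who want the relationship made explicit, the sigmoid ODC and the meaning of P50 can be summarised with the empirical Hill equation (a standard approximation, not part of the source text):

S_{\mathrm{O_2}} = \frac{(P_{\mathrm{O_2}})^{n}}{(P_{\mathrm{O_2}})^{n} + (P_{50})^{n}}, \qquad n \approx 2.7 \text{ for normal adult haemoglobin}

Any factor that raises P50 (lower pH, higher CO2 tension, more 2,3-BPG) lowers the saturation attained at a given tissue Po2, which is how these effectors promote oxygen unloading.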

Thus proper oxygen transport depends on the tetrameric structure of the proteins, the proper arrangement of hydrophilic and hydrophobic amino acids and interaction with protons or 2,3-BPG.

GENETICS OF HEMOGLOBIN:

The human hemoglobins are encoded in two tightly linked gene clusters; the α-like globin genes are clustered on chromosome 16, and the β-like genes on chromosome 11. The α-like cluster consists of two α-globin genes and a single copy of the ζ gene. The non-α gene cluster consists of a single ε gene, the Gγ and Aγ fetal globin genes, and the adult δ and β genes.

DEVELOPMENTAL BIOLOGY OF HUMAN HEMOGLOBINS:

Red cells first appearing at about 6 weeks after conception contain the embryonic hemoglobins Hb Portland (ζ2γ2), Hb Gower I (ζ2ε2) and Hb Gower II (α2ε2). At 10-11 weeks, fetal hemoglobin (HbF; α2γ2) becomes predominant, and the switch to adult hemoglobin (HbA; α2β2) occurs at about 38 weeks. Fetuses and newborns therefore require α-globin but not β-globin for normal gestation. Small amounts of HbF are produced during postnatal life. A few red cell clones called F cells are progeny of a small pool of immature committed erythroid precursors (BFU-e) that retain the ability to produce HbF. Profound erythroid stresses, such as severe hemolytic anemias, bone marrow transplantation, or cancer chemotherapy, cause more of the F-potent BFU-e to be recruited. HbF levels thus tend to rise in some patients with sickle cell anemia or thalassemia. This phenomenon probably explains the ability of hydroxyurea to increase levels of HbF in adults; agents such as butyrate and histone deacetylase inhibitors can also partially activate fetal globin genes after birth.

HEMOGLOBINOPATHIES:

Hemoglobinopathies are disorders affecting the structure, function or production of hemoglobin. These conditions are usually inherited and range in severity from asymptomatic laboratory abnormalities to death in utero. Different forms may present as hemolytic anemia, erythrocytosis, cyanosis or vaso-occlusive stigmata.

Structural hemoglobinopathies occur when mutations alter the amino acid sequence of a globin chain, altering the physiologic properties of the variant hemoglobins and producing the characteristic clinical abnormalities. The most clinically relevant variant hemoglobins polymerize abnormally as in sickle cell anemia or exhibit altered solubility or oxygen-binding affinity.

Thalassemia syndromes arise from mutations that impair production or translation of globin mRNA, leading to deficient globin chain biosynthesis. Clinical abnormalities are attributable to the inadequate supply of hemoglobin and to imbalances in the production of individual globin chains, leading to premature destruction of erythroblasts and RBCs. Thalassemic hemoglobin variants combine features of thalassemia (e.g., abnormal globin biosynthesis) and of structural hemoglobinopathies (e.g., an abnormal amino acid sequence).

Hereditary persistence of fetal hemoglobin (HPFH) is characterized by synthesis of high levels of fetal hemoglobin in adult life. Acquired hemoglobinopathies include modifications of the hemoglobin molecule by toxins (e.g., acquired methemoglobinemia) and clonal abnormalities of hemoglobin synthesis (e.g., high levels of HbF production in preleukemia and α thalassemia in myeloproliferative disorders).

There are five major classes of hemoglobinopathies.

Classification of hemoglobinopathies:

1. Structural hemoglobinopathies: hemoglobins with altered amino acid sequences that result in deranged function or altered physical or chemical properties
   A. Abnormal hemoglobin polymerization: HbS, hemoglobin sickling
   B. Altered O2 affinity
      1. High affinity: polycythemia
      2. Low affinity: cyanosis, pseudoanemia
   C. Hemoglobins that oxidize readily
      1. Unstable hemoglobins: hemolytic anemia, jaundice
      2. M hemoglobins: methemoglobinemia, cyanosis
2. Thalassemias: defective biosynthesis of globin chains
   A. α Thalassemias
   B. β Thalassemias
   C. δ, δβ and γδβ Thalassemias
3. Thalassemic hemoglobin variants: structurally abnormal Hb associated with a coinherited thalassemic phenotype
   A. HbE
   B. Hb Constant Spring
   C. Hb Lepore
4. Hereditary persistence of fetal hemoglobin: persistence of high levels of HbF into adult life
5. Acquired hemoglobinopathies
   A. Methemoglobin due to toxic exposures
   B. Sulfhemoglobin due to toxic exposures
   C. Carboxyhemoglobin
   D. HbH in erythroleukemia
   E. Elevated HbF in states of erythroid stress and bone marrow dysplasia

GENETICS OF SICKLE HEMOGLOBINOPATHY:

This genetic disorder is due to the mutation of a single nucleotide, from a GAG to a GTG codon on the coding strand, which is transcribed from the template strand into a GUG codon in the mRNA. Based on the genetic code, the GAG codon translates to glutamic acid, while the GUG codon translates to valine, at position 6 of the β chain. This is normally a benign mutation, causing no apparent effects on the secondary, tertiary, or quaternary structures of hemoglobin under conditions of normal oxygen concentration. Under conditions of low oxygen concentration, however, the deoxy form of hemoglobin exposes a hydrophobic patch on the protein between the E and F helices. The hydrophobic side chain of the valine residue at position 6 of the beta chain is able to associate with this hydrophobic patch, causing hemoglobin S molecules to aggregate and form fibrous precipitates. HbS also exhibits changes in solubility and molecular stability.

These properties are responsible for the profound clinical expressions of the sickling syndromes.
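As an illustrative aside (not drawn from the source text), the codon substitution described above can be traced to its protein-level consequence with a few lines of Python; the tiny codon table below covers only the two codons in question:

# Illustrative sketch: the beta-globin codon 6 substitution (GAG -> GTG on the
# coding strand, GAG -> GUG in the mRNA) swaps glutamic acid for valine.
CODON_TABLE = {"GAG": "Glu", "GTG": "Val"}  # only the two codons needed here

def translate_coding_codon(codon: str) -> str:
    """Translate a coding-strand DNA codon (read as mRNA with T replaced by U)."""
    return CODON_TABLE[codon.upper()]

print("HbA codon 6: GAG ->", translate_coding_codon("GAG"))  # Glu (glutamic acid)
print("HbS codon 6: GTG ->", translate_coding_codon("GTG"))  # Val (valine)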

HbSS disease or sickle cell anemia (the most common form) – Homozygote for the S globin, usually with a severe or moderately severe phenotype and the shortest survival
HbS/β0 thalassemia – Double heterozygote for HbS and β0 thalassemia; clinically indistinguishable from sickle cell anemia (SCA)
HbS/β+ thalassemia – Mild-to-moderate severity with variability in different ethnicities
HbSC disease – Double heterozygote for HbS and HbC, characterized by moderate clinical severity
HbS/hereditary persistence of fetal Hb (S/HPFH) – Very mild or asymptomatic phenotype
HbS/HbE syndrome – Very rare, with a phenotype usually similar to HbS/β+ thalassemia
Rare combinations of HbS with other abnormal hemoglobins such as HbD Los Angeles, G-Philadelphia and HbO Arab

Sickle-cell conditions have an autosomal recessive pattern of inheritance. The types of hemoglobin a person makes in the red blood cells depend on which hemoglobin genes are inherited from his or her parents. If one parent has sickle-cell anaemia and the other has sickle-cell trait, then each child has a 50% chance of having sickle-cell disease and a 50% chance of having sickle-cell trait. When both parents have sickle-cell trait, a child has a 25% chance of sickle-cell disease, a 25% chance of carrying no sickle-cell allele, and a 50% chance of the heterozygous condition.
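A minimal sketch (illustrative only) that enumerates the offspring genotypes for the two matings just described, writing 'A' for the normal β-globin allele and 'S' for the sickle allele:

from itertools import product
from collections import Counter

def offspring_distribution(parent1: str, parent2: str) -> Counter:
    # Each parent transmits one of its two alleles with equal probability,
    # so the four ordered allele combinations are equally likely.
    return Counter("".join(sorted(pair)) for pair in product(parent1, parent2))

print(offspring_distribution("AS", "AS"))  # Counter({'AS': 2, 'AA': 1, 'SS': 1}) -> 50% trait, 25% unaffected, 25% disease
print(offspring_distribution("SS", "AS"))  # Counter({'AS': 2, 'SS': 2})          -> 50% trait, 50% disease

The counts over the four equally likely allele combinations reproduce the 25/50/25 and 50/50 splits stated above.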

The allele responsible for sickle-cell anemia can be found on the short arm of chromosome 11, more specifically at 11p15.5. A person who receives the defective gene from both father and mother develops the disease; a person who receives one defective and one healthy allele remains healthy, but can pass on the disease and is known as a carrier or heterozygote. Several sickle syndromes occur as the result of inheritance of HbS from one parent and another hemoglobinopathy, such as β thalassemia or HbC (α2β2 6 Glu→Lys), from the other parent. The prototype disease, sickle cell anemia, is the homozygous state for HbS.

PATHOPHYSIOLOGY:

The sickle cell syndromes are caused by a mutation in the β-globin gene that changes the sixth amino acid from glutamic acid to valine. HbS (α2β2 6 Glu→Val) polymerizes reversibly when deoxygenated to form a gelatinous network of fibrous polymers that stiffen the RBC membrane, increase viscosity, and cause dehydration due to potassium leakage and calcium influx. These changes also produce the sickle shape. The loss of red blood cell elasticity is central to the pathophysiology of sickle-cell disease. Sickled cells lose the flexibility needed to traverse small capillaries. They possess altered ‘sticky’ membranes that are abnormally adherent to the endothelium of small venules.

Repeated episodes of sickling damage the cell membrane and decrease the cell’s elasticity. These cells fail to return to normal shape when normal oxygen tension is restored. As a consequence, these rigid blood cells are unable to deform as they pass through narrow capillaries, leading to vessel occlusion and ischaemia.

These abnormalities stimulate unpredictable episodes of microvascular vasoocclusion and premature RBC destruction (hemolytic anemia). The rigid adherent cells clog small capillaries and venules, causing tissue ischemia, acute pain, and gradual end-organ damage. This venoocclusive component usually influences the clinical course.

The actual anaemia of the illness is caused by hemolysis, which occurs because the spleen destroys the abnormal RBCs after detecting their altered shape. Although the bone marrow attempts to compensate by creating new red cells, it does not match the rate of destruction. Healthy red blood cells typically function for 90-120 days, but sickled cells last only 10-20 days.

Clinical Manifestations of Sickle Cell Anemia:

Patients with sickling syndromes suffer from hemolytic anemia, with hematocrits from 15 to 30%, and significant reticulocytosis. Anemia was once thought to exert protective effects against vasoocclusion by reducing blood viscosity. The role of adhesive reticulocytes in vasoocclusion might account for these paradoxical effects.

Granulocytosis is common. The white count can fluctuate substantially and unpredictably during and between painful crises, infectious episodes, and other intercurrent illnesses.

Vasoocclusion causes protean manifestations, including episodes of ischemic pain (i.e., painful crises) and ischemic malfunction or frank infarction in the spleen, central nervous system, bones, joints, liver, kidneys and lungs.

Syndromes caused by sickle hemoglobinopathy:

Painful crises: Intermittent episodes of vasoocclusion in connective and musculoskeletal structures produce ischemia manifested by acute pain and tenderness, fever, tachycardia and anxiety. These episodes are recurrent and are the most common clinical manifestation of sickle cell anemia. Their frequency and severity vary greatly. Pain can develop almost anywhere in the body and may last from a few hours to 2 weeks.

Repeated crises requiring hospitalization (>3 episodes per year) correlate with reduced survival in adult life, suggesting that these episodes are associated with accumulation of chronic end-organ damage. Provocative factors include infection, fever, excessive exercise, anxiety, abrupt changes in temperature, hypoxia, or hypertonic dyes.

Acute chest syndrome: A distinctive manifestation characterized by chest pain, tachypnea, fever, cough, and arterial oxygen desaturation. It can mimic pneumonia, pulmonary emboli, bone marrow infarction and embolism, myocardial ischemia, or lung infarction. Acute chest syndrome is thought to reflect in situ sickling within the lung, producing pain and temporary pulmonary dysfunction. Pulmonary infarction and pneumonia are the most common underlying or concomitant conditions in patients with this syndrome. Repeated episodes of acute chest pain correlate with reduced survival. Acutely, a reduction in arterial oxygen saturation is especially ominous because it promotes sickling on a massive scale. Chronic, acute or subacute pulmonary crises lead to pulmonary hypertension and cor pulmonale, an increasingly common cause of death in these patients.

Aplastic crisis: A serious complication is the aplastic crisis. This is caused by infection with Parvovirus B-19 (B19V). This virus causes fifth disease, a normally benign childhood disorder associated with fever, malaise, and a mild rash. This virus infects RBC progenitors in bone marrow, resulting in impaired cell division for a few days. Healthy people experience, at most, a slight drop in hematocrit, since the half-life of normal erythrocytes in the circulation is 40-60 days. In people with SCD however, the RBC lifespan is greatly shortened (usually 10-20 days), and a very rapid drop in Hb occurs. The condition is self-limited, with bone marrow recovery occurring in 7-10 days, followed by brisk reticulocytosis.
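A rough back-of-the-envelope illustration (a crude linear approximation assumed here, not a calculation taken from the cited sources) of why the same short pause in erythropoiesis is trivial in health but dangerous in SCD, using the survival times quoted above:

def fraction_lost(days_without_production: float, mean_rbc_survival_days: float) -> float:
    # Crude linear approximation: each day roughly 1/lifespan of the circulating
    # red cell mass reaches the end of its life and is cleared.
    return min(1.0, days_without_production / mean_rbc_survival_days)

print(fraction_lost(5, 120))  # healthy RBCs survive ~90-120 days -> roughly 4% of red cells lost
print(fraction_lost(5, 15))   # sickled RBCs survive ~10-20 days  -> roughly a third of red cells lost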

CNS sickle vasculopathy: Chronic, subacute central nervous system damage in the absence of an overt stroke is a distressingly common phenomenon beginning in early childhood. Stroke affects 30% of children and 11% of patients by 20 years of age; it is especially common in children, may recur, and is usually ischemic in children, whereas in adults it is less common and usually hemorrhagic.

Modern functional imaging techniques have demonstrated circulatory dysfunction of the CNS; these changes correlate with cognitive and behavioral abnormalities in children and young adults. It is important to be aware of these changes because they can complicate clinical management or be misinterpreted as ‘difficult patient’ behaviors.

Splenic sequestration crisis: The spleen enlarges in the latter part of the first year of life in children with SCD. Occasionally, the spleen undergoes a sudden, very painful enlargement due to pooling of large numbers of sickled cells; this phenomenon is known as splenic sequestration crisis. Over time, the spleen becomes fibrotic and shrinks, causing autosplenectomy. In HbSC disease, splenomegaly may persist up to adulthood due to ongoing hemolysis under the influence of persistent fetal hemoglobin.

Acute venous obstruction of the spleen, a rare occurrence in early childhood, may require emergency transfusion and/or splenectomy to prevent trapping of the entire arterial output in the obstructed spleen. Repeated microinfarction can destroy tissues with microvascular beds; thus, splenic function is frequently lost within the first 18-36 months of life, causing susceptibility to infection, particularly by pneumococci.

Infections: Life-threatening bacterial infections are a major cause of morbidity and mortality in patients with SCD. Recurrent vaso-occlusion induces splenic infarctions and consequent autosplenectomy, predisposing to severe infections with encapsulated organisms (eg, Haemophilus influenzae, Streptococcus pneumoniae).

Cholelithiasis: Cholelithiasis is common in children with SCD, as chronic hemolysis with hyperbilirubinemia is associated with the formation of bile stones. Cholelithiasis may be asymptomatic or result in acute cholecystitis, requiring surgical intervention. The liver may also become involved: cholecystitis or common bile duct obstruction can occur. A child with cholecystitis presents with right upper quadrant pain, especially if associated with fatty food. Common bile duct blockage is suspected when a child presents with right upper quadrant pain and dramatically elevated conjugated hyperbilirubinemia.

Leg ulcers: Leg ulcers are a chronic painful problem. They result from minor injury to the area around the malleoli. Because of relatively poor circulation, compounded by sickling and microinfarcts, healing is delayed and infection occurs frequently.

Eye manifestation: Occlusion of retinal vessels can produce hemorrhage, neovascularization, and eventual retinal detachment.

Renal manifestation: Renal manifestations include impaired urinary concentrating ability, defects of urinary acidification, defects of potassium excretion, and a progressive decrease in glomerular filtration rate with advancing age. Recurrent hematuria, proteinuria, renal papillary necrosis and end-stage renal disease (ESRD) are all well recognized.

Renal papillary necrosis invariably produces isosthenuria. More widespread renal necrosis leads to renal failure in adults, a common late cause of death.

Bone manifestation: Bone and joint ischemia can lead to aseptic necrosis, common in the femoral or humeral heads; chronic arthropathy; and unusual susceptibility to osteomyelitis, which may be caused by organisms, such as Salmonella, rarely encountered in other settings.

The hand-foot syndrome (dactylitis) is caused by painful infarcts of the digits.

Pregnancy in SCD: Pregnancy represents a special area of concern. There is a high rate of fetal loss due to spontaneous abortion. Placenta previa and abruption are common due to hypoxia and placental infarction. At birth, the infant often is premature or has low birth weight.

Other features: A particularly painful complication in males is priapism, due to infarction of the penile venous outflow tracts; permanent impotence may also occur. Chronic lower leg ulcers probably arise from ischemia and superinfection in the distal circulation.

Sickle cell syndromes are remarkable for their clinical heterogeneity. Some patients remain virtually asymptomatic into or even through adult life, while others suffer repeated crises requiring hospitalization from early childhood. Patients with sickle thalassemia and sickle-HbE tend to have similar, slightly milder symptoms, perhaps because of the ameliorating effects of production of other hemoglobins within the RBC.

Clinical Manifestations of Sickle Cell Trait:

Sickle cell trait is often asymptomatic. Anemia and painful crises are rare. An uncommon but highly distinctive symptom is painless hematuria, often occurring in adolescent males, probably due to papillary necrosis. Isosthenuria is a more common manifestation of the same process. Sloughing of papillae with ureteral obstruction has also been seen, as have massive sickling and sudden death after exposure to high altitudes or extremes of exercise and dehydration.

Pulmonary hypertension in sickle hemoglobinopathy:

In recent years, PAH, a proliferative vascular disease of the lung, has been recognized as a major complication of SCD and an independent correlate of death among adults with the disease. Pulmonary hypertension is defined as a mean pulmonary artery pressure >25 mmHg, and includes pulmonary arterial hypertension, pulmonary venous hypertension or a combination of both. The etiology is multifactorial, including hemolysis, hypoxemia, thromboembolism, chronic high cardiac output, and chronic liver disease. Clinical presentation is characterized by symptoms of dyspnea, chest pain, and syncope. It is important to note that a high cardiac output can also elevate pulmonary artery pressure, adding to the complex and multifactorial pathophysiology of pulmonary hypertension in sickle cell disease. If left untreated, the disease carries a high mortality rate, with the most common cause of death being decompensated right heart failure.

Prevalence and prognosis:

Echocardiographic screening studies have suggested that the prevalence of hemoglobinopathy-associated PAH is much higher than previously recognized. In SCD, approximately one-third of adult patients have an elevated tricuspid regurgitant jet velocity (TRV) of 2.5 m/s or higher, a threshold that corresponds in right heart catheterization studies to a pulmonary artery systolic pressure of at least 30 mmHg. Even though this threshold represents quite mild pulmonary hypertension, SCD patients with a TRV above it have a 9- to 10-fold higher risk of early mortality than those with a lower TRV. It appears that the baseline compromised oxygen delivery and co-morbid organ dysfunction of SCD diminish the physiological reserve to tolerate even modestly elevated pulmonary arterial pressures.

Pathogenesis:

Different hemolytic anemias seem to involve common mechanisms for development of PAH. These processes probably include hemolysis, causing endothelial dysfunction, oxidative and inflammatory stress, chronic hypoxemia, chronic thromboembolism, chronic liver disease, iron overload, and asplenia.

Hemolysis results in the release of hemoglobin into plasma, where it reacts and consumes nitric oxide (NO) causing a state of resistance to NO-dependent vasodilatory effects. Hemolysis also causes the release of arginase into plasma, which decreases the concentration of arginine, substrate for the synthesis of NO. Other effects associated with hemolysis that can contribute to the pathogenesis of pulmonary hypertension are increased cellular expression of endothelin, production of free radicals, platelet activation, and increased expression of endothelial adhesion mediating molecules.

Previous studies suggest that splenectomy (surgical or functional) is a risk factor for the development of pulmonary hypertension, especially in patients with hemolytic anemias. It is speculated that the loss of the spleen increases the circulation of platelet mediators and senescent erythrocytes that result in platelet activation (promoting endothelial adhesion and thrombosis in the pulmonary vascular bed), and possibly stimulates the increase in the intravascular hemolysis rate.

Vasoconstriction, vascular proliferation, thrombosis, and inflammation appear to underlie the development of PAH. In long-standing PH, intimal proliferation and fibrosis, medial hypertrophy, and in situ thrombosis characterize the pathologic findings in the pulmonary vasculature. Vascular remodeling at earlier stages may be confined to the small pulmonary arteries. As the disease advances, intimal proliferation and pathologic remodeling progress, resulting in decreased compliance and increased elastance of the pulmonary vasculature.

The outcome is a progressive increase in the right ventricular afterload or total pulmonary vascular resistance (PVR) and, thus, right ventricular work.

Chronic pulmonary involvement due to repeated episodes of acute thoracic syndrome can lead to pulmonary fibrosis and chronic hypoxemia, which can eventually lead to the development of pulmonary hypertension.

Coagulation disorders, such as low levels of protein C, low levels of protein S, high levels of D-dimers and increased tissue factor activity, occur in patients with sickle cell anemia. This hypercoagulable state can cause thrombosis in situ or pulmonary thromboembolism, which occurs in patients with sickle cell anemia and other hemolytic anemias.

Clinical manifestations:

On examination, there may be evidence of right ventricular failure with elevated jugular venous pressure, lower extremity edema, and ascites. The cardiovascular examination may reveal an accentuated P2 component of the second heart sound, a right-sided S3 or S4, and a holosystolic tricuspid regurgitant murmur. It is also important to seek signs of the diseases that are often concurrent with PH: clubbing may be seen in some chronic lung diseases, sclerodactyly and telangiectasia may signify scleroderma, and crackles and systemic hypertension may be clues to left-sided systolic or diastolic heart failure.

Diagnostic evaluation:

The diagnosis of pulmonary hypertension in patients with sickle cell anemia is typically difficult. Dyspnea on exertion, the symptom most typically associated with pulmonary hypertension, is also very common in anemic patients. Other disorders with similar symptomatology, such as left heart failure or pulmonary fibrosis, frequently occur in patients with sickle cell anemia. Patients with pulmonary hypertension are often older, have higher systemic blood pressure, more severe hemolytic anemia, lower peripheral oxygen saturation, worse renal function, impaired liver function and a higher number of red blood cell transfusions than do patients with sickle cell anemia and normal pulmonary pressure.

The diagnostic evaluation of patients with hemoglobinopathies who are suspected of having pulmonary hypertension should follow the same guidelines established for the investigation of patients with other causes of pulmonary hypertension.

Echocardiography: Echocardiography is important for the diagnosis of PAH and often essential for determining the cause. All forms of PAH may demonstrate a hypertrophied and dilated right ventricle with elevated estimated pulmonary artery systolic pressure. Important additional information can be obtained about specific etiologies such as valvular disease, left ventricular systolic and diastolic function, intracardiac shunts, and other cardiac diseases.

An echocardiogram is a screening test, whereas invasive hemodynamic monitoring is the gold standard for diagnosis and assessment of disease severity.

Pulmonary artery (PA) systolic pressure (PASP) can be estimated by Doppler echocardiography, utilizing the tricuspid regurgitant velocity (TRV). Increased TRV is estimated to be present in approximately one-third of adults with SCD and is associated with early mortality. In the more severe cases, increased TRV is associated with histopathologic changes similar to atherosclerosis such as plexogenic changes and hyperplasia of the pulmonary arterial intima and media.
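For clarity, the TRV-to-pressure conversion rests on the simplified Bernoulli relation used in routine echocardiography (the right atrial pressure added here is an assumed, typical value, not a figure from the source text):

\mathrm{PASP} \approx 4\,(\mathrm{TRV})^{2} + \mathrm{RAP}

With TRV = 2.5 m/s the tricuspid gradient is 4 x (2.5)^2 = 25 mmHg; adding an assumed right atrial pressure of about 5 mmHg gives the pulmonary artery systolic pressure of roughly 30 mmHg quoted above.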

The cardiopulmonary exercise test (CPET): This test may help to identify a true physiologic limitation as well as differentiate between cardiac and pulmonary causes of dyspnea, but the test can only be performed if the patient has reasonable functional capacity. If this test is normal, there is no indication for a right heart catheterization.

Right heart catheterization: If the patient has a cardiovascular limitation to exercise, a right heart catheterization should be performed. Right heart catheterization with pulmonary vasodilator testing remains the gold standard both to establish the diagnosis of PH and to enable selection of appropriate medical therapy. The definition of precapillary PH or PAH requires (1) an increased mean pulmonary artery pressure (mPAP ≥25 mmHg); (2) a pulmonary capillary wedge pressure (PCWP), left atrial pressure, or left ventricular end-diastolic pressure ≤15 mmHg; and (3) PVR >3 Wood units. Postcapillary PH is differentiated from precapillary PH by a PCWP >15 mmHg; it is further differentiated into passive, based on a transpulmonary gradient <12 mmHg, or reactive, based on a transpulmonary gradient >12 mmHg and an increased PVR. In either case, the CO may be normal or reduced. If the echocardiogram or cardiopulmonary exercise test (CPET) suggests PH, the diagnosis should be confirmed by catheterization.
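The hemodynamic definitions above can be collected into a short, purely illustrative Python sketch (the thresholds are those quoted in the preceding paragraph; PVR is computed with the standard Wood-unit formula, PVR = (mPAP - PCWP) / CO):

def classify_ph(mpap: float, pcwp: float, cardiac_output: float) -> str:
    """mpap and pcwp in mmHg, cardiac_output in L/min; PVR comes out in Wood units."""
    if mpap < 25:
        return "no pulmonary hypertension (mPAP < 25 mmHg)"
    pvr = (mpap - pcwp) / cardiac_output  # pulmonary vascular resistance, Wood units
    tpg = mpap - pcwp                     # transpulmonary gradient, mmHg
    if pcwp <= 15 and pvr > 3:
        return f"precapillary PH (PVR {pvr:.1f} Wood units)"
    if pcwp > 15:
        kind = "passive" if tpg < 12 else "reactive"
        return f"postcapillary PH ({kind}, transpulmonary gradient {tpg:.0f} mmHg)"
    return "PH not classifiable by these criteria alone"

print(classify_ph(mpap=40, pcwp=10, cardiac_output=5.0))  # precapillary PH (PVR 6.0 Wood units)
print(classify_ph(mpap=30, pcwp=20, cardiac_output=5.0))  # postcapillary PH (passive, transpulmonary gradient 10 mmHg)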

Chest imaging and lung function tests: These are essential because lung disease is an important cause of PH. Signs of PH that may be evident on chest x-ray include enlargement of the central pulmonary arteries associated with ‘vascular pruning,’ a relative paucity of peripheral vessels. Cardiomegaly, with specific evidence of right atrial and ventricular enlargement, may be present. The chest x-ray may also demonstrate significant interstitial lung disease or suggest hyperinflation from obstructive lung disease, which may be the underlying cause of or a contributor to the development of PH.

High-resolution computed tomography (CT): Classic findings of PH on CT include those found on chest x-ray: enlarged pulmonary arteries, peripheral pruning of the small vessels, and enlarged right ventricle and atrium. High-resolution CT may also show signs of venous congestion including centrilobular ground-glass infiltrate and thickened septal lines. In the absence of left heart disease, these findings suggest pulmonary veno-occlusive disease, a rare cause of PAH that can be quite challenging to diagnose.

CT angiograms: These are commonly used to evaluate acute thromboembolic disease and have demonstrated excellent sensitivity and specificity for that purpose.

Ventilation-perfusion scanning: This is done for screening because of its high sensitivity and its role in qualifying patients for surgical intervention. A negative scan virtually rules out CTEPH; some cases may be missed if only CT angiography is used.

Pulmonary function tests: An isolated reduction in DLco is the classic finding in PAH; results of pulmonary function tests may also suggest restrictive or obstructive lung diseases as the cause of dyspnea or PH.

Evaluation of symptoms and functional capacity (6 Min walk test): Although the 6-minute walk test has not been validated in patients with hemoglobinopathies, preliminary data suggest that this test correlates well with maximal oxygen uptake and with the severity of pulmonary hypertension in patients with sickle cell anemia. In addition, in these patients, the distance covered on the 6-minute walk test significantly improves with the treatment of pulmonary hypertension, which suggests that it can be used in this population.

DYSLIPIDEMIA IN SICKLE HEMOGLOBINOPATHY:

Disorders of lipoprotein metabolism are known as ‘dyslipidemias.’ Dyslipidemias are generally characterized clinically by increased plasma levels of cholesterol, triglycerides, or both, accompanied by reduced levels of HDL cholesterol. Most patients with dyslipidemia are at increased risk for ASCVD, which is the primary reason for making the diagnosis, as intervention may reduce this risk. Patients with elevated levels of triglycerides may be at risk for acute pancreatitis and require intervention to reduce this risk.

Although hundreds of proteins affect lipoprotein metabolism and may interact to produce dyslipidemia in an individual patient, there are a limited number of discrete ‘nodes’ that regulate lipoprotein metabolism. These include:

(1) assembly and secretion of triglyceride-rich VLDLs by the liver;

(2) lipolysis of triglyceride-rich lipoproteins by LPL;

(3) receptor-mediated uptake of apoB-containing lipoproteins by the liver;

(4) cellular cholesterol metabolism in the hepatocyte and the enterocyte; and

(5) neutral lipid transfer and phospholipid hydrolysis in the plasma.

Hypocholesterolemia and, to a lesser extent, hypertriglyceridemia have been documented in SCD cohorts worldwide for over 40 years, yet the mechanistic basis and physiological ramifications of these altered lipid levels have yet to be fully elucidated. Cholesterol (TC, HDL-C and LDL-C) levels decrease and triglyceride levels increase in relation to the severity of anemia. While this is not true for cholesterol levels, triglyceride levels show a strong correlation with markers of severity of hemolysis, endothelial activation, and pulmonary hypertension.

Decreased TC and LDL-C in SCD have been documented in virtually every study that examined lipids in SCD adults (el-Hazmi, et al 1987, el-Hazmi, et al 1995, Marzouki and Khoja 2003, Sasaki, et al 1983, Shores, et al 2003, Stone, et al 1990, Westerman 1975), with slightly more variable results in SCD children. Although it might be hypothesized that SCD hypocholesterolemia results from increased cholesterol utilization during the increased erythropoiesis of SCD, cholesterol is largely conserved through the enterohepatic circulation, at least in healthy individuals, and biogenesis of new RBC membranes would likely use recycled cholesterol from the hemolyzed RBCs. Westerman demonstrated that hypocholesterolemia is not due merely to increased RBC synthesis by showing that it is present in both hemolytic and non-hemolytic anemia (Westerman 1975). He also reported that serum cholesterol was proportional to the hematocrit, suggesting serum cholesterol may be in equilibrium with the cholesterol reservoir of the total red cell mass (Westerman 1975). Consistent with such equilibration, tritiated cholesterol incorporated into sickled erythrocytes is rapidly exchanged with plasma lipoproteins (Ngogang, et al 1989). Thus, low plasma cholesterol appears to be a consequence of anemia itself rather than of increased RBC production (Westerman 1975).

Total cholesterol, and in particular LDL-C, has a well-established role in atherosclerosis. The low levels of LDL-C in SCD are consistent with the low levels of total cholesterol and the virtual absence of atherosclerosis among SCD patients. Decreased HDL-C in SCD has also been documented in some previous studies (Sasaki, et al 1983, Stone, et al 1990). As in lipid studies for other disorders in which HDL-C is variably low, potential reasons for inconsistencies between studies include differences in age, diet, weight, smoking, gender, small sample sizes, different ranges of disease severity, and other diseases and treatments (Choy and Sattar 2009, Gotto A 2003). Decreased HDL-C and apoA-I are known risk factors for endothelial dysfunction in the general population and in SCD, and a potential contributor in SCD to PH, although the latter effect size might be small (Yuditskaya, et al 2009).

In addition, triglyceride levels have been reported to increase during crisis. Why is increased serum triglyceride, but not cholesterol, associated with vascular dysfunction and pulmonary hypertension? Studies in atherosclerosis have firmly established that lipolysis of oxidized LDL in particular results in vascular dysfunction. Lipolysis of triglycerides present in triglyceride-rich lipoproteins releases neutral and oxidized free fatty acids that induce endothelial cell inflammation (Wang, et al 2009). Many oxidized fatty acids are more damaging to the endothelium than their non-oxidized precursors; for example, 13-hydroxyoctadecadienoic acid (13-HODE) is a more potent inducer of ROS activity in HAECs than linoleate, the non-oxidized precursor of 13-HODE (Wang, et al 2009). Lipolytic generation of arachidonic acid, eicosanoids, and inflammatory molecules leading to vascular dysfunction is a well-established phenomenon (Boyanovsky and Webb 2009). Although LDL-C levels are decreased in SCD patients, LDL from SCD patients is more susceptible to oxidation and cytotoxicity to endothelium (Belcher, et al 1999), and an unfavorable plasma fatty acid composition has been associated with clinical severity of SCD (Ren, et al 2006). Lipolysis of phospholipids in lipoproteins or cell membranes by secretory phospholipase A2 (sPLA2) family members releases similarly harmful fatty acids, particularly in an oxidative environment (Boyanovsky and Webb 2009), and in fact selective PLA2 inhibitors are currently under development as potential therapeutic agents for atherosclerotic cardiovascular disease (Rosenson 2009). Finally, sPLA2 activity has been linked to lung disease in SCD. sPLA2 is elevated in acute chest syndrome of SCD and, in conjunction with fever, preliminarily appears to be a good biomarker for diagnosis, prediction and prevention of acute chest syndrome (Styles, et al 2000). The deleterious effects of phospholipid hydrolysis on lung vasculature predict similar deleterious effects of triglyceride hydrolysis, particularly in the oxidatively stressed environment of SCD.

Elevated triglycerides have been documented in autoimmune inflammatory diseases with increased risk of vascular dysfunction and pulmonary hypertension, including systemic lupus erythematosus, scleroderma, rheumatoid arthritis, and mixed connective tissue diseases (Choy and Sattar 2009, Galie, et al 2005). In fact, triglyceride concentration is a stronger predictor of stroke than LDL-C or TC (Amarenco and Labreuche 2009). Even in healthy control subjects, a high-fat meal induces oxidative stress and inflammation, resulting in endothelial dysfunction and vasoconstriction (O’Keefe, et al 2008). Perhaps high levels of plasma triglycerides promote vascular dysfunction, with the clinical outcome of vasculopathy mainly in the coronary and cerebral arteries in the general population, and with more targeting of the pulmonary vascular bed in SCD and autoimmune diseases.

The mechanisms leading to hypocholesterolemia and hypertriglyceridemia in the plasma or serum of SCD patients are not completely understood. In normal individuals, triglyceride levels are determined to a significant degree by body weight, diet and physical exercise, as well as concurrent diabetes. Diet and physical exercise very likely influence body weight and triglyceride levels in SCD patients as well; these findings indicate that standard risk factors for high triglycerides are also relevant to SCD patients. The mechanisms of SCD-specific risk factors for elevated plasma triglycerides are less clear. RBCs do not carry out de novo lipid synthesis (Kuypers 2008). In SCD the rate of triglyceride synthesis from glycerol is elevated up to 4-fold in sickled reticulocytes (Lane, et al 1976), but SCD patients have defects in postabsorptive plasma homeostasis of fatty acids (Buchowski, et al 2007). Lipoproteins and albumin in plasma can contribute fatty acids to red blood cells for incorporation into membrane phospholipids (Kuypers 2008), but RBC membranes are not triglyceride-rich, and contributions of RBCs to plasma triglyceride levels have not been described. Interestingly, chronic intermittent or stable hypoxia from exposure to high altitudes alone, with no underlying disease, is sufficient to increase triglyceride levels in healthy subjects (Siques, et al 2007). Thus, it has also been suggested that hypoxia in SCD may contribute at least partially to the observed increase in serum triglyceride. Finally, there is a known link between low cholesterol and increased triglycerides in any primate acute phase response, such as infection and inflammation (Khovidhunkit, et al 2004). Perhaps, because of their chronic hemolysis, SCD patients have a low-grade acute phase response, which would also be consistent with the other inflammatory markers. Further studies are required to elucidate the mechanisms leading to hypocholesterolemia and hypertriglyceridemia in SCD.

Pulmonary hypertension is a disease of the vasculature that shows many similarities with the vascular dysfunction that occurs in coronary atherosclerosis (Kato and Gladwin 2008). Both involve proliferative vascular smooth muscle cells, just in different vascular beds. Both show an impaired nitric oxide axis, increased oxidant stress, and vascular dysfunction. Most importantly, serum triglyceride levels, previously linked to vascular dysfunction, have been shown to correlate with NT-proBNP and TRV and thus with pulmonary hypertension. Moreover, triglyceride levels are predictive of TRV independent of systolic blood pressure, low transferrin or increased lactate dehydrogenase.

PAH in SCD is also characterized by oxidant stress, but in SCD patients plasma total cholesterol (TC) and low-density lipoprotein cholesterol (LDL-C) are low. There have been some reports of low HDL cholesterol (HDL-C)(17,18) and increased triglyceride in SCD patients, features widely recognized as important contributory factors in cardiovascular disease. These findings, and the therapeutic potential to modulate serum lipids with several commonly used drugs, prompted us to investigate in greater detail the serum lipid profile in patients with sickle hemoglobinopathy (SH) presenting to our hospital and its possible relationship to vasculopathic complications such as PAH.


Gender and Caste – The Cry for Identity of Women

INTRODUCTION

‘Bodies are not just biological phenomena but a complex social creation onto which meanings have been variously composed and imposed according to time and space’. These social creations differentiate the two biological personalities into Man and Woman, and meanings are imposed on their qualities on the basis of gender, which defines them as He and She.

The question then arises: a woman, who is she? In my view, a woman is one who is empowered, enlightened, enthusiastic and energetic. A woman is all about sharing. She is an exceptional personality who encourages and embraces. If a woman is considered a mark of patience and courage, then why, even today, is there a lack of identity in her personality? She is subordinated to man and often discriminated against on the basis of gender.

The entire life of a woman revolves around patriarchal existence: she is dominated by her father in childhood, by her husband in the next phase of her life, and by her son in the later phase, which leaves no space for her own independence.

The psychological and physical identity of a woman is defined through the role and control of men: the terrible triad of father, husband and son. The boundary of women is always restrained by male dominance. Gender discrimination is not only a historical concept; it still exists in contemporary Indian society.

Indian society, in every part of its existence, experiences ferocious gender conflict, which is projected every day in the newspapers, on news channels, or even while walking on the streets. The horror of patriarchal domination exists in every corner of Indian society. The role of Indian women has been declining over the centuries.

Turning the pages of history, in pre-Aryan India God was female and life was represented in the form of mother Earth. People worshipped the mother Goddess through fertility symbols. The Shakti cult of Hinduism regards women as the source and embodiment of cosmic power and energy. Woman power can also be seen in Goddess Durga, who lured her husband Shiva from asceticism.

The religious and social conditions changed abruptly when the Aryan Brahmins eliminated the Shakti cult and power was placed in the hands of men. They considered the male deities to be the husbands of the female goddesses, handing dominance to the male. Marriage involved male control over female sexuality. Even the identity of the mother goddess was dominated by the male gods. As Mrinal Pande writes, ‘to control women, it becomes necessary to control the womb and so Hinduism, Judaism, Islam and Christianity have all stipulated, at one time or another, that the whole area of reproductive activity must be firmly monitored by law and lawmakers’.

The issue of identity crisis for a woman

The identity of a woman is erased as she becomes a mere reproductive machine ruled and dominated by male laws. From the time she is born she is taught that one day she has to get married and go to her husband’s house. She thus belongs neither to her own house nor to her husband’s house, leaving a mark on her identity. The Vedic period, however, proved to be a boon in the lives of women, as they enjoyed freedom of choice with respect to husbands and could marry at a mature age. Widows could remarry and women could divorce.

The segregation of women continued to raise the same question of identity. The Chandogya Upanishad, a religious text of the pre-Buddhist era, contains a prayer of spiritual aspirants which says, ‘May I never, ever, enter that reddish, white, toothless, slippery and slimy yoni of the woman’. During this time, control over women included seclusion and exclusion, and they were even denied education. Women and shudras were treated as the minority class in society. Rights and privileges given to women were withdrawn and girls were married at a very early age. Caste structure also played a great role, as women were now discriminated against within their own caste on the basis of gender.

According to Liddle, women were controlled in two ways: firstly, they were disinherited from ancestral property and the economy and were expected to remain within the domestic sphere, known as purdah. The second aspect was the control of men over female sexuality. The death rituals of family members were performed by the sons, and no daughter had the right to light her parents’ funeral pyre.

A stifling patriarchal shadow hangs over the lives of women throughout India. Across all regions, castes and classes of society, women are victims of its oppressive, controlling effects. Those subjected to the heaviest burden of discrimination are from the Dalit or “Scheduled Castes”, referred to in less liberal democratic times as the “Untouchables”. The name may have been banned, but pervasive negative attitudes of mind remain, as do the appalling levels of abuse and subjugation experienced by Dalit women. They encounter multiple levels of discrimination and exploitation, much of which is primitive, degrading, horrifyingly violent and utterly callous. The divisive caste system, in operation throughout India, “old” and “new”, together with prejudiced gender attitudes, sits at the heart of the colossal human rights abuses experienced by Dalit or “outcaste” women.

The lower castes are segregated from other members of the community: they are prohibited from eating with “higher” castes, from using village wells and ponds, from entering village temples and higher-caste houses, and from wearing shoes or even holding umbrellas in front of higher castes; they are forced to sit alone and use separate crockery in restaurants, banned from riding a bicycle within their village, and made to bury their dead in a separate cemetery. They frequently face eviction from their land by higher “dominant” castes, forcing them to live on the outskirts of villages, often on barren land.

This plethora of prejudice amounts to apartheid, and it is time, indeed long past due, that the “democratic” government of India enforced existing legislation and purged the country of the criminality of caste- and gender-based discrimination and abuse.

The power play of patriarchy soaks every area of Indian society and gives rise to an assortment of discriminatory practices, for example female infanticide, discrimination against girls and dowry-related deaths. It is a major cause of the exploitation and abuse of women, with much sexual violence being perpetrated by men in positions of power. These range from higher-caste men abusing lower-caste women, particularly Dalits; to policemen abusing women from poor households; to military men abusing Dalit and Adivasi women in insurgency states such as Kashmir, Chhattisgarh, Jharkhand, Orissa and Manipur. Security personnel are protected by the widely condemned Armed Forces Special Powers Act, which grants impunity to police and members of the military carrying out criminal acts of rape and indeed murder; it was proclaimed by the British in 1942 as an emergency measure to suppress the Quit India Movement. It is an unjust law, which needs repealing.

In December 2012, the horrific gang rape and mutilation of a 23-year-old paramedical student in New Delhi, who subsequently died from her wounds, garnered worldwide media attention, putting a transient spotlight on the dangers, oppression and shocking treatment women in India face every day. Rape is endemic in the country. With most cases of rape going unreported and many being dismissed by the police, the true figure could be ten times higher. The women most at risk of abuse are Dalits: the NCRB estimates that more than four Dalit women are raped every day in India. A UN study reveals that “the majority of Dalit women report having faced one or more incidents of verbal abuse (62.4 per cent), physical assault (54.8 per cent), sexual harassment and assault (46.8 per cent), domestic violence (43.0 per cent) and rape (23.2 per cent)”. They are subjected to “rape, assault, kidnapping, abduction, homicide, physical and mental torture, immoral traffic and sexual abuse.”

The UN found that large numbers were deterred from seeking justice: in 17 per cent of instances of violence (including rape), victims were prevented from reporting the crime by the police; in more than 25 per cent of cases the community stopped women from filing complaints; and in more than 40 per cent, women “did not attempt to obtain legal or community remedies for the violence primarily out of fear of the perpetrators or social dishonour if (sexual) violence was revealed”. In only 1 per cent of recorded cases were the perpetrators convicted. What “follows incidents of violence”, the UN found, is “a resounding silence”. The effect with regard to Dalit women in particular, though not exclusively, “is the creation and maintenance of a culture of violence, silence and impunity”.

Caste discrimination faced by women today

The Indian constitution sets out the “principle of non-discrimination on the basis of caste or gender”. It guarantees the “right to life and to security of life”. Article 46 specifically “protects Dalits from social injustice and all forms of exploitation”. Add to this the important Scheduled Castes and Tribes (Prevention of Atrocities) Act of 1989, and a well-armed legislative arsenal is in place. However, because of “low levels of implementation”, the UN states, “the provisions that protect women’s rights must be regarded as empty of meaning”. It is a familiar Indian story: judicial indifference (along with cost, lack of access to legal representation, endless red tape and obstructive staff), police corruption and government complicity, plus media indifference, are the principal barriers to justice and to the observance and enforcement of the law.

Unlike middle-class girls, Dalit rape victims (whose numbers are growing) rarely receive the attention of the caste/class-conscious, urban-centred media, whose primary concern is to promote a glossy Bollywood, open-for-business image of the country.

A 20-year-old Dalit woman from the Santali tribal group in West Bengal was gang raped, reportedly “on the orders of village elders who objected to her relationship (which had been going on in secret for a long time) with a man from a nearby village in the Birbhum district”. According to a BBC report, the violent incident took place after the man visited the woman’s home with a proposal of marriage; villagers spotted him and organised a kangaroo court. During the “proceedings” the headman of the woman’s village fined the couple 25,000 rupees (400 US dollars; GBP 240) for “the crime of falling in love”. The man paid, but the woman’s family were unable to. Thereupon the “headman” and 12 of his companions repeatedly raped her. Violence, abuse and exclusion are used to keep Dalit women in a position of subordination and to maintain the patriarchal grip on power throughout Indian society.

The cities are unsafe places for women, yet it is in the countryside, where most people live (70 per cent), that the greatest levels of abuse occur. Many in rural areas live in extreme poverty (800 million people in India live on under 2.50 dollars a day), with little or no access to healthcare, poor education and appalling or non-existent sanitation. It is a world apart from democratic Delhi or Westernised Mumbai: water, electricity, democracy and the rule of law have yet to reach the lives of the women in India’s villages, which are home, Mahatma Gandhi famously declared, to the soul of the nation.

No surprise, then, that after two decades of economic growth, India finds itself languishing 136th (of 186 countries) in the (gender-equality-adjusted) United Nations Human Development Index.

Entrenched ideas of gender inequality

Indian society is divided in numerous ways: caste/class, gender, wealth and poverty, and religion. Entrenched patriarchy and gender divisions, which value boys over girls and keep men and women, and boys and girls, apart, combine with child marriage to contribute to the creation of a society in which the sexual abuse and exploitation of women, particularly Dalit women, is an accepted part of everyday life.

Sociologically and psychologically conditioned into separation, schoolchildren divide themselves along gender lines; in many areas women sit on one side of buses, men on the other; special women-only carriages have been introduced on the Delhi and Mumbai metros to protect women from sexual harassment, or “eve teasing” as it is colloquially known. Such safety measures, while welcomed by women and women’s groups, do not deal with the underlying causes of abuse and may, in a sense, further inflame them.

Rape, sexual violence, assault and harassment are rife, and yet, with the possible exception of the Bollywood Mumbai set, sex is a taboo subject. A survey conducted by India Today in 2011 found that 25 per cent of people had no objection to sex before marriage, provided it was not in their own family.

Sociological separation fuels gender divisions, reinforces prejudiced stereotypes and feeds sexual repression, which many women’s organisations believe accounts for the high rate of sexual violence. A recent study of Indian men’s attitudes towards women, carried out by the International Center for Research on Women, produced some startling statistics: one in four admitted having “used sexual violence (against a partner or against any woman)”, and one in five reported using “sexual violence against a stable [female] partner”. Half of the men did not want to see gender equality, 80 per cent regarded changing nappies and feeding and bathing children as “women’s work”, and a mere 16 per cent played any part in household duties. Added to these inhibiting attitudes of mind, homophobia is the norm, with 92 per cent admitting they would be ashamed to have a gay friend, or even to be in the vicinity of a gay man.

All in all, India is cursed by a catalogue of Victorian sexual stereotypes, fuelled by a caste system designed to oppress, which trap both men and women in conditioned cells of separation where destructive ideas of sex are allowed to ferment, resulting in explosions of sexual violence, exploitation and abuse. Studies of caste have begun to engage with questions of rights, resources, and recognition/representation, showing the extent to which caste must be recognised as central to the story of India’s political development. For instance, scholars are becoming increasingly aware of the extent to which radical thinkers Ambedkar, Periyar, and Phule demanded the acknowledgment of histories of exploitation, ritual humiliation, and political disenfranchisement as constituting the lives of the lower castes, even as those histories also formed the fraught past from which escape was sought.

Scholars have pointed to Mandal as the formative moment in the “new” national politics of caste, especially for having radicalised dalitbahujans in the politically critical states of the Hindi belt. Hence Mandal may be a convenient, though overdetermined, vantage point from which to analyse the state’s contradictory and ineffective investment in the discourse of lower-caste entitlement, throwing open to scrutiny the political practices and ideologies that animate parliamentary democracy in India as a historical formation.

Tharu and Niranjana (1996) have noted the visibility of caste and gender issues in the post-Mandal context and describe it as a contradictory formation. For instance, there were struggles by upper-caste women to challenge reservations by understanding them as concessions, and the large-scale participation of college-going women in the anti-Mandal agitation in order to claim equal treatment rather than reservations in struggles for gender equality. On the other hand, lower-caste male assertion often targeted upper-caste women, creating an unresolved dilemma for upper-caste feminists who had been pro-Mandal. The relationship between caste and gender never seemed more awkward. The demand for reservations for women (and for further reservations for dalit women and women from the Backward Classes and Other Backward Communities) can also be seen as an outgrowth of a renewed attempt to address caste and gender issues from within the terrain of politics. It may also demonstrate the inadequacy of focusing exclusively on gender in assembling a quantifiable “solution” to the political problem of visibility and representation.

Emerging out of the 33 per cent reservations for women in local Panchayats, and plainly at odds with the Mandal protests that equated reservations with notions of inferiority, the recent demands for reservations mark a decisive move away from the historical suspicion of reservations for women. As Mary John has argued, women’s vulnerability must be seen in the context of the political displacements that mark the emergence of minorities before the state.

The question of political representation and the idea of gendered vulnerability are connected issues. As I have argued in my essay included in this volume, such vulnerability is the mark of the gendered subject’s singularity. It is that form of injured existence that brings her within the frame of political legibility as different, yet eligible, for general forms of redress. As such, it is central to political discourses of rights and recognition.

Political demands for reservations for women, and for lower-caste women, complement scholarly efforts to understand the deep cleavages between women of different castes that contemporary events such as Mandal or the Hindutva movement have exposed. In exploring the challenges posed by Mandal to dominant conceptions of secular selfhood, Vivek Dhareshwar pointed to convergences between reading for and recovering the presence of caste as a silenced public discourse in contemporary India, and similar practices by feminists who had explored the unacknowledged weight of gendered identity.

Dhareshwar suggested that theorists of caste and theorists of gender might consider elective affinities in their methods of analysis, and deliberately embrace their stigmatised identities (caste, gender) in order to draw public attention to them as political identities. Dhareshwar argued this would demonstrate the extent to which secularism had been maintained as another form of upper-caste privilege, the luxury of ignoring caste, in contrast to the demands for social justice by dalitbahujans who were calling for an open acknowledgment of such privilege.

Women and Dalits considered alike

Malik writes, in “Untouchability and Dalit Women’s Oppression,” that “It remains a matter of reflection that those who have been actively involved with organising women experience difficulties that are nowhere addressed in a theoretical literature whose foundational principles are derived from a sprinkling of normative theories of rights, liberal political theory, an ill-informed left politics and more recently, occasionally, even a well-meaning rhetoric of ‘entitlements.’” Malik in effect asks how we are to understand dalit women’s vulnerability.

Caste relations are embedded in dalit women’s profoundly unequal access to resources of basic survival, such as water and sanitation facilities, as well as to educational institutions, public places, and sites of religious worship. At the same time, the material impoverishment of dalits and their political disenfranchisement perpetuate the symbolic structures of untouchability, which legitimate upper-caste sexual access to dalit women. Caste relations are also changing, and new forms of violence in independent India that target symbols of dalit emancipation, such as the desecration of statues of dalit leaders, attempt to counteract dalits’ socio-political advancement by dispossessing them of land or denying them their political rights, and are aimed at dalits’ perceived social mobility. These newer forms of violence are often accompanied by the sexual harassment and assault of dalit women, pointing to the caste-based and gendered forms of vulnerability that dalit women experience.

As Gabriele Dietrich notes in her essay “Dalit Movements and Women’s Movements,” dalit women have been targets of upper-caste violence. At the same time, dalit women have also functioned as the “property” of dalit men. Lower-caste men are also engaged in a complex set of fantasies of retribution that involve the sexual violation of upper-caste women in retaliation for their emasculation by caste society. The troubling positioning of dalit women as sexual property in both instances overdetermines dalit women’s identity in terms solely of their sexual availability.

Girls: Household Servants

When a boy is born in most developing countries, friends and relatives shout congratulations. A son means insurance. He will inherit his father’s property and get a job to support the family. When a girl is born, the reaction is very different. Some women weep when they find out their baby is a girl because, to them, a daughter is just another expense. Her place is in the home, not in the world of men. In some parts of India, it is traditional to greet a family with a newborn girl by saying, “The servant of your household has been born.”

A girl cannot help but feel inferior when everything around her tells her that she is worth less than a boy. Her identity is forged as her family and society limit her opportunities and declare her to be second-class.

A combination of extreme poverty and deep biases against women creates a ruthless cycle of discrimination that keeps girls in developing countries from fulfilling their full potential. It also leaves them vulnerable to severe physical and emotional abuse. These “servants of the household” come to accept that life will never be any different.

The Greatest Obstacles Affecting Girls

Discrimination against girls and women in the developing world is a devastating reality. It results in millions of individual tragedies, which add up to lost potential for entire countries. Studies show there is a direct link between a country’s attitude toward women and its social and economic progress. The status of women is central to the health of a society. If one part suffers, so does the whole.

Tragically, female children are the most exposed to the trauma of gender discrimination. The following obstacles are stark examples of what girls worldwide face. The good news, however, is that new generations of girls represent the most promising source of change for women, and men, in the developing world today.

Dowry

In developing countries, the birth of a girl causes great upheaval for poor families. When there is barely enough food to survive, any child puts a strain on a family’s resources. But the economic drain of a daughter feels even more acute, especially in regions where dowry is practised.

A dowry is the goods and money a bride’s family pays to the groom’s family. Originally intended to help with wedding expenses, the dowry came to be seen as payment to the groom’s family for taking on the burden of another woman. In some countries, dowries are lavish, costing years of wages and often throwing a woman’s family into debt. The dowry practice makes the prospect of having a girl even more unwelcome to poor families. It also puts girls in danger: a new bride is at the mercy of her in-laws if they decide her dowry is too small. UNICEF estimates that around 5,000 Indian women are killed in dowry-related incidents every year.

Neglect

The developing world is full of poverty-stricken families who see their daughters as an economic burden. That attitude has resulted in the widespread neglect of baby girls in Africa, Asia, and South America. In many communities, it is standard practice to breastfeed girls for a shorter time than boys so that women can try to get pregnant again with a boy as soon as possible. As a result, girls miss out on life-giving nutrition during a crucial window of their development, which stunts their growth and weakens their resistance to disease.

Statistics show that the neglect continues as they grow up. Girls receive less food, less healthcare and fewer vaccinations overall than boys. Not much changes as they become women. Tradition calls for women to eat last, often reduced to picking over the leftovers from the men and boys.

Infanticide and Sex-Selective Abortion

In extreme cases, parents make the terrible decision to end their baby girl’s life. One woman named Lakshmi from Tamil Nadu, an impoverished region of India, fed her baby sap from an oleander bush mixed with castor oil until the girl bled from the nose and died. “A daughter is always a liability. How can I raise a second?” said Lakshmi, explaining why she ended her baby’s life. “Instead of her suffering the way I do, I thought it was better to get rid of her.”

Sex-selective abortions are even more common than infanticides in India. They are becoming ever more frequent as technology makes it simple and cheap to determine a foetus’s sex. In Jaipur, a western Indian city of 2 million people, 3,500 sex-determined abortions are carried out each year. The sex ratio across India has dropped to an unnatural low of 927 females to 1,000 males as a result of infanticide and sex-selective abortion.

China has its own long legacy of female infanticide. In the last two decades, the government’s notorious one-child policy has damaged the country’s record even further. By restricting household size to limit the population, the policy gives parents just one chance to produce a coveted son before being forced to pay heavy fines for additional children. In 1997, the World Health Organization declared that “more than 50 million women were estimated to be missing in China because of the institutionalized killing and neglect of girls due to Beijing’s population control program.” The Chinese government says that sex-selective abortion is one major explanation for the staggering number of Chinese girls who have simply vanished from the population in the last 20 years.

Abuse

Even after infancy, the threat of physical harm follows girls throughout their lives. Women in every society are vulnerable to abuse. But the threat is more severe for girls and women who live in societies where women’s rights mean practically nothing. Mothers who lack rights of their own have little protection to offer their daughters, much less themselves, from male relatives and other authority figures. The frequency of rape and violent attacks against women in the developing world is alarming. Forty-five percent of Ethiopian women say that they have been assaulted in their lifetimes. In 1998, 48 percent of Palestinian women admitted to being abused by an intimate partner within the previous year.

In some societies, the physical and mental injury of rape is compounded by an additional stigma. In cultures that maintain strict sexual codes for women, if a woman steps out of line, by choosing her own husband, flirting in public, or seeking divorce from an abusive partner, she has brought dishonour to her family and must be disciplined. Often, discipline means execution. Families commit “honour killings” to salvage their reputation, tainted by disobedient women.

Shockingly, this “disobedience” includes being raped. In 1999, a 16-year-old mentally disabled girl in Pakistan who had been raped was brought before her tribe’s judicial council. Even though she was the victim and her attacker had been arrested, the council decided she had brought shame to the tribe and ordered her public execution. This case, which received a great deal of publicity at the time, is not unusual. Three women fall victim to honour killings in Pakistan every day, including victims of rape. In parts of Asia, the Middle East, and even Europe, all responsibility for sexual crime falls, by default, to women.

Work

For the girls who escape these pitfalls and grow up relatively safely, daily life is still incredibly hard. School may be an option for a few years, but most girls are pulled out at age 9 or 10, when they are useful enough to work all day at home. Nine million more girls than boys miss out on school every year, according to UNICEF. While their brothers continue to attend classes or pursue their hobbies and play, the girls join the women in doing the bulk of the housework.

Housework in developing countries consists of constant, difficult physical labour. A girl is likely to work from before sunrise until the light drains away. She walks barefoot long distances several times a day carrying heavy buckets of water, most likely contaminated, just to keep her family alive. She cleans, grinds corn, gathers fuel, tends the fields, bathes her younger siblings, and prepares meals until she sits down to her own after all the men in the family have eaten. Most families cannot afford modern appliances, so her tasks must be done by hand: crushing corn into meal with heavy rocks, scrubbing laundry against rough stones, kneading bread and cooking gruel over a scorching open fire. There is no time left in the day to learn to read and write or to play with friends. She collapses exhausted each night, ready to get up the next morning to start another long workday.

Most of this work is performed without recognition or reward. UN statistics show that although women produce half of the world’s food, they own just 1 percent of its farmland. In most African and Asian countries, women’s work is not considered real work. Should a woman take a job, she is expected to keep up all of her duties at home in addition to her new ones, with no extra help. Women’s work goes unacknowledged, even though it is crucial to the survival of each family.

Sex Trafficking

Some families decide it is more lucrative to send their daughters to a nearby town or city to take jobs that usually involve hard labour and little pay. That desperate need for cash leaves girls easy prey to sex traffickers, particularly in Southeast Asia, where international tourism feeds the illegal trade. In Thailand, the sex trade has swelled unchecked into a major part of the national economy. Families in small villages along the Chinese border are regularly approached by recruiters called “aunties” who ask for their daughters in exchange for a year’s wages. Most Thai farmers earn just $150 a year. The offer can be too tempting to refuse.

Would it be moral to legalise Euthanasia in the UK?

The word ‘morality’ is used with both descriptive and normative meanings. More particularly, the term “morality” can be used either (Stanford Encyclopaedia of Philosophy, https://plato.stanford.edu/entries/morality-definition):

1. descriptively: referring to codes of conduct advocated by a society or a sub-group (e.g. a religion or social group), or adopted by an individual to justify their own beliefs,

or

2. normatively: describing codes of conduct that in specified conditions, should be accepted by all rational members of the group being considered.

Examination of ethical theories applied to Euthanasia

Thomas Aquinas’ natural law holds that morally good actions, and the goodness of those actions, are assessed against eternal law as a reference point. Eternal law, in his view, is a higher authority, and the process of reasoning defines the difference between right and wrong. Natural law thinking is not concerned only with narrow questions, but considers the whole person and their eternal future. Aquinas linked this to God’s predetermined plan for the individual and to heaven. The morality of Catholic belief is heavily influenced by natural law. The primary precepts should be considered when examining issues involving euthanasia, particularly the key precepts to do good and oppose evil, and to preserve life, upholding the sanctity of life. Divine law set out in the Bible states that we are created in God’s image and held together by God from our time in the womb. The Catholic Church’s teachings maintain that euthanasia is wrong (Pastoral Constitution, Gaudium et Spes no. 27, 1965) because life is sacred and God-given (Declaration on Euthanasia, 1980). This view can be seen to be just as strongly held and applied today in the very recent case of Alfie Evans, where papal intervention was significant and public. Terminating life through euthanasia goes against divine law. Ending a life, and with it the possibility of that life bringing love into the world or of love coming into the world in response to the person euthanised, is wrong. To take a life by euthanasia, according to Catholic belief, is to reject God’s plan for that individual to live their life. Suicide, or intentionally ending life, is a wrong equal to murder and as such is to be considered a rejection of God’s loving plan (Declaration on Euthanasia, 1.3, 1980).

The Catholic Church interprets natural law to mean that euthanasia is wrong and that those involved in it are committing a wrongful and sinful act. Whilst the objectives of euthanasia may appear good, in that they seek to ease suffering and pain, they in fact fail to recognise the greater good of the sanctity of life within God’s greater plan, which encompasses people other than the person suffering, and eternal life in heaven.

The conclusions of natural law consider the position of life in general and not just the ending of a single life. An example would be that, if euthanasia were lawful, older people could become fearful of admission to hospital in case they were drawn into euthanasia. It could also lead to people being attracted to euthanasia at times when they were depressed. This can be seen to attack the principle of living well together in society, as good people could be hurt. It also relies on slippery slope and floodgates-type predictions about hypothetical situations. Euthanasia therefore clearly undermines some primary precepts.

Catholicism accepts that disproportionately onerous treatment is not appropriate towards the end of a person’s life, and that there is no moral obligation to strenuously keep a person alive at all costs. An example would be a terminally ill cancer patient deciding not to accept further chemotherapy or radiotherapy which could extend their life, but at great cost to the quality of that remaining life. Natural law does not seem to prevent them from making these kinds of choices.

There is also the doctrine of double effect: palliative care, with the relief of pain and distress as its objective, might have the secondary effect of ending life earlier than if more active treatment options had been pursued. The motivation is not to kill, but to ease pain and distress. An example is a doctor’s decision to increase an opiate dose to the point where respiratory arrest becomes almost inevitable, while at all times the intended motivation remains the easing of pain and distress. This has on various occasions been upheld as legally and morally acceptable by the courts and by medical watchdogs such as the GMC (General Medical Council).

The Catechism of the Catholic Church accepts this and views such decisions as best made by the patient, if competent and able, and if not, by those legally and professionally entitled to act for the individual concerned.

There are other circumstances in which the person involved might not be the kind of person assumed by natural law: for example, someone with severe brain damage who is in a persistent coma or “brain-dead”. In these situations, they may not possess the defining characteristics of a person, which could form a justification for euthanasia. The doctors or relatives caring for such a patient may face conflicts of conscience, feeling that by being unable to show compassion they prolong the suffering not only of the patient but of those around them.

In his book Morals and Medicine, published in 1954, Fletcher, the president of the Euthanasia Society of America, argued that there are no absolute standards of morality in medical treatment and that good ethics demand consideration of the patient’s condition and the situation surrounding it.

Fletcher’s Situation Ethics avoids legalistic consideration of moral decisions. It is anchored only in actual situations and specifically in unconditional love for the care of others. When euthanasia is considered with this approach, the answer will always “depend upon the situation”.

From the viewpoint of an absolutist, morality is innate from birth. It can be argued that natural law does not change as a result of personal opinions; it remains constant. Natural law offers a positive view of morality, as it can be seen to allow people from a range of backgrounds, classes and situations to have durable moral laws to follow.

Religious believers also follow the principles of natural law, since its underlying theology holds that morality remains the same and does not change with an individual’s personal opinions or decisions. Christianity, as a religion, shows strong support among its believers for the existence of a natural law of morality. Christian understanding of this concept derives largely from Thomas Aquinas, following his teaching that faith and reason are closely connected arguments for a natural law of morality.

Natural law has been shown over time to have compelling arguments in its favour, one of which is its all-inclusiveness and fixed character, in contrast to relative approaches to morality. Natural law is objective and is consequently abiding and eternal. It is considered to be innate within us and arises from a mixture of faith and reason, which together form an intelligent and rational being who is faithful in belief in God. Natural law is a part of human nature, commencing from the beginning of our lives when we gain our sense of right and wrong.

However, there are also many disadvantages of natural law with regard to resolving moral problems. These include the fact that its precepts are not always self-evident. We are unable to confirm whether there is only one universal purpose for humanity; it can be argued that even if humanity had a purpose for its existence, this purpose cannot be seen as self-evident. The perception of natural beings and things changes over generations, with the forms of different eras fitting better with the culture of their time. It can therefore be argued that supposedly absolute morality is changed and altered by cultural beliefs about right and wrong, with some things later coming to be perceived as wrong, which suggests that defining what is natural is almost impossible because moral judgments are ever changing. The notion that actuality is better than potentiality also does not transfer easily to practical ethics: the future holds many potential outcomes, but some of these potential outcomes are ‘wrong’. (Hodder Education, 2016)

There is a strong argument that natural law is the best way to resolve moral problems, but its rigid structure means there is some confusion as to what is right and wrong in certain situations. These judgments are instead formed by society, which does not always follow the natural law of morality. Darwin’s theory of evolution, put forward in On the Origin of Species in 1859, challenged natural law with the notion that living things strive for survival (survival of the fittest), supporting his theory of evolution by natural selection. It can be argued that solving moral problems by natural law may be possible, but is not necessarily the best solution.

For many years, euthanasia has been a controversial debate across the globe, with people taking opposing sides and arguing in support of their views. In essence, it is the act of allowing an individual to die in a painless manner, for example by withholding treatment, and it is commonly classified into different forms such as voluntary, involuntary and non-voluntary. The legal system has been actively involved in this debate. A major concern put forward is that legalising any form of euthanasia may lead to the slippery slope principle, which holds that permitting something comparatively harmless today may begin a trend that results in unacceptable practices. Although one popular position argues that voluntary euthanasia is morally acceptable while non-voluntary euthanasia is always wrong, the courts have been split in their decisions in various instances. (Oxford for OCR Religious Studies, 2016)

Voluntary euthanasia is defined as the killing of an individual with their consent, by various means. The arguments that voluntary euthanasia is morally acceptable are drawn from the expressed desires of the patient. So long as respecting an individual’s decision does not harm other people, it is held to be morally correct. Since individuals have the right to make personal choices about their lives, their decisions about how they should die should also be respected. Most importantly, at times it remains the only option for assuring the well-being of the patient, especially if they are suffering incessant and severe pain. Despite these claims, several cases have come before the courts, which have continued to refuse to uphold the morality of euthanasia irrespective of the individual’s consent. One of these is the case of Diane Pretty, who suffered from motor neurone disease. Because she was afraid of dying by choking or aspiration, a common end-of-life event experienced by many motor neurone disease sufferers, she sought legal assurance that her husband would be free from the threat of prosecution if he assisted her to end her life. Her case went through the Court of Appeal, the House of Lords (the Supreme Court in today’s system) and the European Court of Human Rights. However, owing to concerns raised under the slippery slope principle, the judges denied her request and she lost the case.

There have been many legal and legislative battles attempting to change the law to support voluntary euthanasia in varying circumstances. Between 2002 and 2006 Lord Joel Joffe (a patron of the Dignity in Dying organisation) fought to change the law in the UK to support assisted dying. His first Assisted Dying (Patient) Bill reached a second reading (June 2003) but ran out of time before it could progress to the committee stage. Joffe persisted, however, and in 2004 renewed his efforts with the Assisted Dying for the Terminally Ill Bill, which progressed further than the earlier bill, reaching the committee stage in 2006. The committee stated: “In the event that another bill of this nature should be introduced into Parliament, it should, following a formal Second Reading, be sent to a committee of the whole House for examination”. Unfortunately, in May 2006 an amendment at the second reading led to the collapse of the bill. This was a surprise to Joffe, since the majority of the select committee had been on board with the bill. In addition, calls for a statute supporting voluntary euthanasia have increased, as evidenced by the significant numbers of people in recent years travelling to Switzerland, where physician-assisted suicide is legal under permitted circumstances. Lord Joffe expressed these thoughts in an article written for the Dignity in Dying campaign in 2014, before his death in 2017, in support of Lord Falconer’s Assisted Dying Bill, which proposed to permit “terminally ill, mentally competent adults to have an assisted death after being approved by doctors” (Falconer’s Assisted Dying Bill, Dignity in Dying, 2014). The journey of this bill was followed in the documentary referenced below.

The BBC documentary ‘How to Die: Simon’s Choice’ followed the decline of Simon Binner from motor neurone disease and his subsequent fight for an assisted death. The documentary followed his journey to Switzerland for a legal assisted death and recorded the reactions of his family. During filming, a bill was being debated in parliament proposing to legalise assisted dying in the United Kingdom. The bill proposed a new law (Lord Falconer’s Assisted Dying Bill) which would allow a person to request a lethal injection if they had less than six months left to live; this raised a myriad of issues, including how precisely to define a prognosis of six months or less to live. The Archbishop of Canterbury, Justin Welby, urged MPs to reject the bill, stating that Britain would be crossing a ‘legal and ethical Rubicon’ if parliament were to vote to allow the terminally ill to be actively assisted to die at home in the UK under medical supervision. The leaders of the British Jewish, Muslim, Sikh and Christian religious communities wrote a joint open letter to all members of the British parliament urging them to oppose the bill to legalise assisted dying (The Guardian, 2015). Having announced his impending death on LinkedIn, Simon Binner died at an assisted dying clinic in Switzerland. The passing of this bill might have been the only way of helping Simon Binner in his home country, but assisted dying remained unlawful. (Deacon, 2016)

The private member’s bill, originally proposed by Rob Marris (a Labour MP from Wolverhampton), ended in defeat, with 330 MPs against and 118 in favour. (The Financial Times, 2015)

The Suicide Act 1961 (Legislation, 1961) decriminalised suicide, but it did not make it morally licit. It provides that a person who aids, abets, counsels or procures the suicide of another, or an attempt by another to commit suicide, is liable to a prison term of up to 14 years. It also provides that where a defendant is on trial on indictment for murder or manslaughter and it is proved that the accused aided, abetted, counselled or procured the suicide of the person in question, the jury may find them guilty of that offence as an alternative verdict.

Many took the view that the law supports the principle of autonomy, but the Act was used to reinforce the sanctity of life principle by criminalising any form of assisted suicide. Although the Act does not hold the position that all life is equally valuable, there have been cases in which allowing a person to die would have been the better solution.

In the case of non-voluntary euthanasia, patients are incapable of giving their consent for death to be induced. It mostly occurs when a patient is very young, severely mentally impaired, has extreme brain damage, or is in a coma. Opponents argue that human life should be respected, and that this case is even worse because the person’s wishes are not taken into account when decisions are made to end their life. As a result, they hold that it is morally wrong irrespective of the conditions the patient faces, and that all parties involved should wait for a natural death while according the patient the best palliative care possible. The case of Terri Schiavo, who had suffered from bulimia and was left with extensive brain damage, falls under this argument. The court’s ruling allowing her husband’s request to have her life terminated triggered heated debate, with some arguing that it was wrong while others saw it as a relief, since she had spent more than half of her life unresponsive.

I carried out primary research to support my findings on whether it would be moral to legalise euthanasia in the UK. With regard to understanding the correct definition of euthanasia, nine out of ten people who took part in the questionnaire selected the correct definition of physician-assisted suicide: “The voluntary termination of one’s life by administration of a lethal substance with the direct or indirect assistance of a physician” (Medicanet, 2017). The one person who selected a wrong definition believed it to be “The involuntary termination of one’s own life by administration of a lethal substance with the direct or indirect assistance of a physician”. The third definition on the questionnaire stated that physician-assisted suicide was “The voluntary termination of one’s own life by committing suicide without the help of others”; this definition was the ‘obvious’ incorrect answer and no participant selected it.

The morality of the young should also be considered. In my primary research, completed by a selected youth audience, seventy percent agreed that people should have the right to choose when they die. However, only twenty percent of this audience agreed that they would assist a friend or family member in helping them die. This drop in support can be explained by the fear of prosecution and of a possible fourteen-year prison sentence for assisting in a person’s death.

The effect of the Debbie Purdy case (2009) was that guidelines were established by the Director of Public Prosecutions in England and Wales (assisted dying is not illegal in Scotland, but there is no legal way to access it medically). These guidelines were established, according to the Director of Public Prosecutions, to “clarify what his position is as to the factors that he regards as relevant for and against prosecution” (DID Prosecution Policy, 2010). The policy outlines factors that make prosecution of an assister ‘more likely’: a history of violent behaviour, not knowing the person, receiving financial gain from the act, or acting in the capacity of a medical professional. Despite these factors, the policy states that police and prosecutors should examine any financial gain with a ‘common sense’ approach, since many people benefit financially from the loss of a loved one; the fact that the assister was, for example, a close relative motivated by relieving the person’s pain should weigh more heavily when prosecution is considered.

The argument that voluntary euthanasia is morally right while involuntary euthanasia is wrong remains one of the most controversial issues in modern society. It is all the more significant because the legal system itself is split in its rulings in cases such as those cited. On the slippery slope argument, care should be taken when determining what is morally right and wrong because of the sanctity of human life. Many consider that the law has led to considerable confusion and that one way of improving the present situation is to create a new Act permitting physician-assisted dying, with the proposal stating that there should be a bill to “enable a competent adult who is suffering unbearably as a result of a terminal illness to receive medical assistance to die at his own considered/persistent request… to make provision for a person suffering from a terminal illness to receive pain relief medication” (Assisted Dying for the Terminally Ill Bill, 2004).

There is a major moral objection to voluntary euthanasia based on the reasoning of the “slippery slope” argument: the fear that what begins with legitimate reasons to assist in a person’s death will also come to permit death in other circumstances that would currently be unlawful.

In a letter to The Times newspaper (24/8/04), John Haldane and Alasdair MacIntyre, along with other academics, lawyers and philosophers, suggested that supporters of the Bill had shifted the qualifying condition from actual unbearable suffering caused by terminal illness to merely the fear, discomfort and loss of dignity which terminal illness might bring. In addition, there is the issue that if quality of life is grounds for euthanasia for those who request it, then it must arguably be open to those who do not request it or are unable to request it, again presenting a slippery slope. In the same letter the academics referred to euthanasia in the Netherlands, where it is legal, to suggest that many people have died against their wishes because of failures of safeguarding. (Hodder Education, 2016)

The slippery slope argument does not help those in particular individual situations, and it must surely be wrong to shy away from making difficult decisions on the grounds that an individual should endure prolonged suffering in order to protect society from the possible over-use of any legalisation. In practice, over the past half century a form of euthanasia has been going on in the UK when doctors give obvious over-dosage of opiates in terminal cases, but they have been shielded from legal consequences by the almost fictional notion that, as long as the motivation was to ease and control pain, the action was lawful even though respiratory arrest was the inevitable consequence (respiratory suppression is a side effect of morphine-type drugs).

The discredited and now defunct Liverpool Care Pathway for the Dying Patient (LCP) was an administrative tool intended to help UK healthcare professionals manage the care pathway and decide palliative care options for patients at the very end of life. As with many such tick-box exercises, individual discretion was restricted in an attempt to standardise practice nationally (Wales was excluded from the LCP). The biggest problem with the LCP (which attracted much adverse media attention and public concern in 2012) was that most patients or their families were not consulted when patients were placed on the pathway. It had options for withdrawing active treatment whilst actively managing distressing symptoms. However, removing intravenous hydration and feeding, by regarding them as active treatment, would inevitably lead to death in a relatively short period of time, making the decision to place a patient on the LCP because they were at the end of life a self-fulfilling prophecy. (Liverpool Care Pathway)

That the cost of providing “just in case” boxes, at approximately £25, is weighed in the last part of this lengthy document as part of deciding what to advise professionals may seem chilling to some. However, there is a moral dimension to the financial implications of unnecessarily prolonging human life. Should the greater good be considered when deciding whether to actively permit formal pathways to euthanasia or to take steps to prohibit it (through the crimes of murder or assisting suicide)? In the recent highly publicised case of Alfie Evans, enormous financial resources were used to keep a child with a terminal degenerative neurological disease alive on a paediatric intensive care unit at Alder Hey hospital in Liverpool for around a year. In deciding to do this, those resources were inevitably unavailable to treat others who might have gone on to survive and live a life. Huge sums of money were spent both on medical resources and on lawyers. The case became a media circus, resulting in ugly threats against medical staff at the hospital concerned. There was international intervention in the case by the Vatican and by Italy (which granted Italian nationality to the child). Whilst the emotional turmoil of the parents was tragic and the case very sad, was it moral that their own beliefs and lack of understanding of the medical issues involved should lead to such a diversion of resources and such terrible effects on those caring for the boy?

(NICE (National Institute for Health and Care Excellence) guidelines, 2015)

The General Medical Council (GMC) governs the licensing and professional conduct of doctors in the UK. It has produced guidance for doctors on the medical role at the end of life, Treatment and care towards the end of life: good practice in decision making. This gives comprehensive advice on some of the fundamental issues in end-of-life treatment and covers matters such as living wills (in which requests for the withdrawal of treatment can be set out in writing and in advance). These are professionally binding, but as ever there are caveats regarding the withdrawal of life-prolonging treatment.

It also sets out presumptions of a duty to prolong life and of a patient’s capacity to make decisions, in line with established legal and ethical viewpoints. In particular, it states that “decisions concerning life prolonging treatments must not be motivated by a desire to bring about a patient’s death” (Good Medical Practice, GMC Guidance to Doctors, 2014).

Formerly the Hippocratic Oath was sworn by all doctors and set out a sound basis for moral decision making and professional conduct. In modern translation from the original ancient Greek, it states with regard to medical treatment that a doctor should never treat “….. with a view to injury and wrong-doing. Neither will [a doctor] administer a poison to anybody when asked to do so, nor will [a doctor] suggest such a course.” Doctors in the UK do not swear the oath today, but most of its principles are internationally accepted, except perhaps in the controversial areas surrounding abortion and end-of-life care.

(Hippocratic Oath, Medicanet)

In conclusion, having considered the moral arguments on both sides of the debate, I found that the two ethical frameworks examined (Natural Law and Situation Ethics) give opposing responses to the question.

From the viewpoint of a deontologist guided by natural law, duty and obligation, arguably grounded in religion, would lead a society to decide that it would be wrong to legalise euthanasia. However, a situational ethicist, whose judgment changes with each individual situation, could support the campaign to legalise voluntary euthanasia in the UK under guidelines designed to account for differing situations.

After completing my primary and secondary research, and considering the many unsuccessful bills put through parliament to legalise euthanasia as well as case studies including the moving account of Simon Binner’s fight to die, my own view rests with the situational ethicist: depending on the individual situation, people should have the right to die in their own country through the legalisation of voluntary euthanasia, rather than being forced to travel abroad to access a legal form of it and risk their loved ones being prosecuted on their return to the UK for assisting them.

At the end of the day, much of the management of patients at the end of life is determined not by the stipulations laid out by committees in lengthy documents, but by the individual treatment decisions made by individual doctors and nurses, who are almost always acting in the best interests of patients and their families. The practice of accelerating the inevitable event by medication or by withdrawal of treatment is almost impossible to standardise across a hospital or local community care setup, let alone a country. It may be better to continue the practice of centuries and let the morality and conscience of the treating professions determine what happens, keeping the formal moral, religious and legal factors involved in such areas in the shadows.

Has the cost of R & D impacted vaccine development for Covid-19?

Introduction

This report investigates the question: ‘To what extent have the cost requirements of R&D, the structure of the industry and government subsidy affected firms in the pharmaceutical industry in developing vaccines for Covid-19?’. The past two years have been very unpredictable for the pharmaceutical industry because of the outbreak of the COVID-19 pandemic. Although the pharmaceutical industry has made major contributions to human wellbeing, reducing suffering and ill health for over a century, it remains one of the least trusted industries in public opinion, often compared to the nuclear industry in terms of trustworthiness. Despite pharmaceuticals being one of the riskiest industries to invest in, governments have subsidised the production of the COVID-19 vaccines with billions. Regardless of the associated risks, a large part of the public still thinks pharmaceuticals should continue to be produced and developed in order to provide the correct treatment to those with existing health issues (Taylor, 2015). These aspects, along with further factors affecting the cost requirements of R&D, the structure of the industry and government subsidy, and how these have affected firms in the pharmaceutical industry with regard to the development of the COVID-19 vaccines, will be discussed further in this report.

The Costs of R&D

In 2019, $83 billion was spent on R&D, roughly 10 times what the industry spent on R&D in the 1980s. Most of this amount was dedicated to discovering and testing new drugs and to clinical trials assessing drug safety. In 2019 drug companies dedicated about a quarter of their annual income to R&D, almost double the share in the early 2000s.

(Pharmaceutical R&D Expenditure Shows Significant Growth, 2019)

Usually, the amount drug companies spend on the R&D of a new drug depends on the financial return they expect to make, on any policies influencing the supply of and demand for drugs, and on the cost of developing those drugs.

Most drugs that have been approved recently have been specialty drugs. These typically treat complex, chronic or rare conditions and can require patient monitoring. However, specialty drugs are very expensive to develop, costly for the customer and difficult to replicate (Research and Development in the Pharmaceutical Industry, 2021).

Government subsidies for the COVID-19 vaccines

There are two main ways in which a federal government can directly support vaccine development: it can promise in advance to purchase a successful vaccine once the firm has achieved its specified goal, or it can cover the costs associated with the vaccine’s R&D.

(Which Companies Received The Most Covid-19 Vaccine R&D Funding?, 2021)

In May 2020 the Department of Health and Human Services launched ‘Operation Warp Speed’, a collaborative project in which the FDA, the Department of Defense, the National Institutes of Health and the Centers for Disease Control and Prevention worked together to fund COVID-19 vaccine development. Through ‘Operation Warp Speed’, the federal government provided more than $19 billion in funding to help seven private pharmaceutical manufacturers research and develop COVID-19 vaccines. Five of those seven went on to accept further funding to boost their production capabilities, and a sixth company later accepted funding to boost production of another company's vaccine once it received authorization for emergency use. Six of the seven also made advance purchase agreements, and two of these companies received additional funding, having sold more doses than expected under those agreements, so that they could produce even more vaccines to distribute. By running in parallel numerous stages of development that would normally be carried out consecutively, manufacturers were able to reach their end goal and produce vaccines far faster than is usual for vaccines. This was done because of the urgency of finding a solution to the COVID-19 pandemic, which was causing public uproar and panic across nations. Within a year of the first COVID-19 diagnosis in the US, two vaccines had already reached Phase III clinical trials; this is immensely quick, as it usually takes several years of research to reach Phase III trials for a vaccine. The World Health Organisation reported that there were already over 200 COVID-19 vaccine development candidates as of February 2021 (Research and Development in the Pharmaceutical Industry, 2021).

(Research and Development in the Pharmaceutical Industry, 2021)

The image above shows which vaccines were at each stage of development over time. It illustrates the urgency with which these vaccines were developed and produced to fight the outbreak of the coronavirus. Without government subsidies, firms would have been nowhere near completing the research and development needed to produce numerous COVID-19 vaccines. This shows how important government subsidies are to the pharmaceutical industry and to the development of new drugs and vaccines.

Impact of the structure of the pharmaceutical industry on vaccine development

When it came to the development of the COVID-19 vaccines, many different names in the pharmaceutical industry took part. As far as the majority of society is concerned, however, the pharmaceutical industry is just a small group of large multinational corporations such as GlaxoSmithKline, Novartis, AstraZeneca, Pfizer and Roche. These are frowned upon by the public, stereotyped as ‘Big Pharma’, and that stereotype can be misleading. Many people have doubts about these big multinational corporations, especially given the influence they have on people's health and on the drugs they develop. It is hard for the public to rely on and trust these companies because, at the end of the day, it is their health that they are entrusting to them. It is therefore understandable that many people had, and still have, suspicions about the COVID-19 vaccines developed by a handful of these companies. If you asked someone whether they had ever heard of companies like Mylan or Teva, they would probably have no idea, even though Teva is the world's 11th biggest pharmaceutical company and probably produces medicines these people take regularly. The fact that over 90% of pharmaceutical companies are almost invisible to the general public means that when it does become known who has manufactured a medicine someone is considering taking, for example the Pfizer vaccine, people are going to be careful and suspicious about taking it if they have never heard of the manufacturer before. All this, despite these companies being responsible for producing the majority of the medicines that everyone takes.

Most new drugs never even make it onto the market, because they are found not to work or to have serious side effects, making them unethical to use on patients. However, the small percentage of drugs that do make it onto the market are patented, meaning the original manufacturer holds exclusive rights to sell the product for a limited time. Once the patent expires, anyone is free to manufacture and sell the pharmaceutical, which then becomes a generic pharmaceutical (Taylor, 2015).

This again does not help research pharmaceutical companies: their developments, once out of patent, are simply sold by the generic pharmaceutical companies from which everyone buys their pharmaceuticals. Generic pharmaceutical companies therefore almost never have a failed product, while research companies struggle to get a successful product onto the market at all. It also means the public does not realise that the majority of drugs they buy originate with these research companies rather than with the generic pharmaceutical company they buy them from.

As seen with the COVID-19 vaccines, this caused a lot of uncertainty and distress amongst the public, as most people had never even heard of companies like ‘Pfizer’ or ‘AstraZeneca’. This in turn made it more difficult for pharmaceutical companies to successfully manufacture and sell their vaccines, prolonging the whole vaccination process.

This structure of the pharmaceutical industry has therefore greatly affected firms' ability to successfully and reliably manufacture vaccines against COVID-19.

Conclusion

Looking at the three factors combined: the cost requirements of R&D, the structure of the industry and government subsidy, it is clear that all three have had a great impact on the development of the COVID-19 vaccines. The costs associated with R&D essentially determined how successful the vaccines would be and whether firms would have enough, first to do the needed research and then to produce and sell them. Without the large sums that go into the development of vaccines and other drugs, the COVID-19 vaccines could never have been manufactured and sold. This would have left the world in even more panic and uproar than it was, and could easily have had a ripple effect on economies, on social factors and potentially even on environmental factors.

One of the biggest impacts on the successful manufacturing and sale of the vaccines came from the structure of the industry. With big research pharmaceutical companies putting in all the work and effort to develop these COVID-19 vaccines, yet most of the general public never having heard of them, it was very hard for pharmaceutical companies to come across as reliable. People did not trust the vaccines because they had never heard of the company that developed them, such as Pfizer. This caused debate and protest against the vaccines, making it harder for companies to produce and sell them to the public who needed and demanded them. This stems from a major flaw in the pharmaceutical industry: companies such as Pfizer and AstraZeneca stay largely hidden from the public because their products are taken up and sold on by the generic pharmaceutical companies people buy from. It is also because research pharmaceutical companies specialise in advanced drugs rather than in more generic drugs, which are more likely to succeed because they are easier to develop. Naturally, the lack of successful products reflects negatively on these companies, and even the one product they do produce successfully can be frowned upon because of their previously non-viable products.

Finally, probably the second or joint most important factor is government subsidies. It is quite clear that without the right government funding and without ‘Operation Warp Speed’ we would still be trying to develop even the first COVID-19 vaccine, as there would have been nowhere near enough funding for the R&D of the vaccines. This would have caused the death rate from coronavirus infections to spike and would probably have brought the economy to a complete standstill, putting a large number of people out of work. All of this has numerous ripple effects: the single issue of job losses could raise the poverty rate immensely, leaving economies broken. So overall, these three factors have had a huge impact on firms in the pharmaceutical industry in developing the COVID-19 vaccines.

Gender in Design

Gender has always had a dominant place in design. Kirkham and Attfield, in their 1996 book The Gendered Object, set out their view that certain genders seem to be unconsciously attached to some objects as the norm. How gender is viewed in modern-day design is radically different from twenty-plus years ago, in that this normalisation is now recognised. Seeing international companies recognise this change and adapt their brands to a modern-day approach influences designers like myself to keep up to date, and it affects my own work.

When designing, there is a gender system that some people tend to follow very strictly; the system is a guide built on values that reveal how gender is formed in society. In the gender system you have binary oppositions which play out in colour, size, feeling and shape, for example pink/blue, small/large, smooth/rough and organic/geometric. Without even being put in context, these words carry connotations of male or female. Gender is traditionally defined as male or female, but modern-day brands are challenging and pushing these established boundaries; they do not think the categories should be as restrictive or prescriptive as they have been in the past. Kirkham and Attfield challenge this by comparing perceptions in the early twentieth century, illustrating that societal norms were then the opposite of what gender norms now lead us to believe. A good example is that the crude binary opposition implicit in ‘pink for a little girl and blue for a boy’ was only established in the 1930s; babies and parents managed perfectly well without such colour coding before then. Today, through marketing and product targeting, these ‘definitions’ are used even more widely in the design and marketing of children's clothes and objects than they were a few years ago. Importantly, such binary oppositions also influence those who purchase objects and, in this case, facilitate the pleasure many adults take in seeing small humans visibly marked as gendered beings. This is now being further challenged by demands for non-binary identification.

This initial point made by Kirkham and Attfield in 1996 is still valid. Even though designers and brands are in essence guilty of forms of discrimination by falling in line with established gender norms, they do so because it is what their consumers want and how they see business developing and profit being created, because these stereotypical ‘norms’ are seen as normal, acceptable and subconsciously recognisable. “Thus we sometimes fail to appreciate the effects that particular notions of femininity and masculinity have on the conception, design, advertising, purchase, giving and uses of objects, as well as on their critical and popular reception”. (Kirkham and Attfield. 1996. The Gendered Object, p. 1).

With the help of product language, gendered toys and clothes appear from an early age; products are sorted as being ‘for girls’ and ‘for boys’ in the store, as identified by Ehrnberger, Rasanen and Ilstedt in their 2012 article ‘Visualising Gender Norms in Design’ in the International Journal of Design. Product language is mostly used in the branding aspect of design, in how a product or object is portrayed; it is not only what the written language says. Product language relates to how the object is showcased and portrayed through colours, shapes and patterns. A modern example of this is the branding of the Yorkie chocolate bar. Its slogan, ‘Not for girls’, was publicly known for being gender-biased towards men; there is no hiding the fact that the language the company uses is targeted at men, because it promotes a brand that is strong, chunky and ‘hard’ in an unsophisticated way, all of which have connotations of being ‘male’, arguably even ‘alpha male’, to make it more attractive to men. Their chosen colours also suggest this: navy blue, dark purple, yellow and red are bold and form a typically ‘male’ palette. Another example is the advertisement of tissues. Tissues, no matter where you buy them, do exactly the same thing irrespective of gender, so why are some tissues targeted at women and some at men? Could it be that this gender targeting, by avoiding neutrality, helps sell more tissues?

Product language is very gender-specific when it comes to clothing brands and toys for kids. “Girls should wear princess dresses, play with dolls and toy housework products, while boys should wear dark clothes with prints of skulls or dinosaurs, and should play with war toys and construction kits”. (Ehrnberger, Rasanen, Ilstedt, 2012. Visualising Gender Norms in Design. International Journal of Design). When branding things for children, the separation between girl and boy is extremely common; using language like ‘action’, which has male connotations, or ‘princess’, which has female connotations, appeals to consumers because these words are relatable to them and to their children. In modern society most people find it difficult not to associate blue with boys and pink with girls, especially for newborns. If you walk into any department store, toy store or any store that caters to children, you will see the separation between genders, whether in clothes, toys or anything in between. The separation is made obvious by the colour branding used: on the girls' side, pink, yellow and lilac, soft, bright, happy colours used on everything from toy babies and dolls to hats and scarves; conversely, on the boys' side, blue, green and black, bold, dark, more primary colours used on everything from trucks to a pair of trousers.

Some companies have begun to notice how detrimental this separation is becoming and how it could hold back the advancement and opening up of our society, one example being the John Lewis Partnership.

John Lewis is a major British department store with a long history. In 2017 it decided to scrap the separate girls' and boys' sections for the clothing range in its stores and name the range ‘Childs wear’, a gender-neutral name, allowing it to design clothing that lets children wear whatever they want without being told ‘no, that is a boys' top, you can't wear that because you're a girl’, or vice versa. Caroline Bettis, head of children's wear at John Lewis, said: “We do not want to reinforce gender stereotypes within our John Lewis collections and instead want to provide greater choice and variety to our customers, so that the parent or child can choose what they would like to wear”. Possibly the only issue with this stance is the price point: John Lewis is typically known as a higher-priced high-street store, which means it is not accessible for everyone to shop there. The campaign group Let Clothes Be Clothes commented: “Higher-end, independent clothing retailers have been more pro-active at creating gender-neutral collections, but we hope unisex ranges will filter down to all price points. We still see many of the supermarkets, for example, using stereotypical slogans on their clothing,” (http://www.telegraph.co.uk/news/2017/09/02/john-lewis-removes-boys-girls-labels-childrens-clothes/).

Having a very well-known brand make this move should encourage and inspire others to join in with the development. This change is a bold use of product language: it is not for just one specific product but covers advertising and marketing as well, meaning it is a rebrand of the whole company. By not using gender-specific words, it takes away the automatic stereotypes attached to buying anything for children.

Equality is the state of being equal, be it in status, rights or opportunities, so when it comes to design, why does this attribute get forgotten? This is not a feminist rant: gender equality affects both males and females in the design world, and when designing, everything should be equal and fair to both sexes. “Gender equality and equity in design is often highlighted, but it often results in producing designs that highlight the differences between men and women, although both the needs and characteristics vary more between individuals than between genders” (Hyde 2005). Hyde's point is still contemporary and relevant: having gender equality in design is very important, but gender is not the sole issue; things can be designed for a specific gender, yet even if you are female you might not relate to the gender-specific clothes for your sex. Design is about making and creating something for someone, not just for a gender. “Post-feminism argues that in an increasingly fragmented and diverse world, defining one’s identity as male or female is irrelevant, and can be detrimental”. (https://www.cl.cam.ac.uk/events/experiencingcriticaltheory/Satchell-WomenArePeople.pdf).

Many up-and-coming independent brands and companies have been launching unisex clothing ranges for a number of years, most of them pushing the movement well before gender equality in design became a mainstream media issue. One company pushing against gender norms is Toogood London; another is GFW, Gender Free World. Gender Free World was created by a group of people who think on the same wavelength when it comes to gender equality. In fact their ‘Mission Statement’ sets this out as a core ethos (which, incidentally, is obviously an influence on John Lewis when you look at the transferability of the phraseology): “GFW Clothing was founded in 2015 (part of Gender Free World Ltd) by a consortium of like-minded individuals who passionately believe that what we have in our pants has disproportionately restricted the access to choice of clothing on the high street and online.” https://www.genderfreeworld.com/pages/about-g. Lisa Honan is the co-founder of GFW; her main reason for starting a company like this was ‘sheer frustration’ at the lack of options for her taste and style on the market. She had shopped in both male and female departments but never found anything that fitted, especially if she was going for a male piece of clothing. During an interview with Saner, Honan commented that the men's shirts didn't fit her because she had a woman's body, and it got her thinking, ‘why is there a man's aisle and a woman's aisle, and why do you have to make that choice?’. She saw that you cannot make many purchases without being forced to define your own gender, and this reinforces the separation between genders in fashion. If she feels this way, many others must too, and they do, or there would not be such a big potential business opportunity.

In my own practice of Communication Design, gender plays a huge role, from colour choices to the typefaces used. Most of the work communication designers create and produce will either represent a brand or actually brand a company, so when choosing options, potential gender stereotyping should be taken into consideration. The points mentioned above, showing how the gender system, product language, gender norms and equality and equity operate in design, serve as a caution to graphic designers not to fall into any pitfalls when designing.

Designing does not mean simply male or female; designing means creating and producing ‘something’ for ‘someone’, no matter their identified or chosen gender. If a company were producing products targeted specifically at men and, after a robust examination of the design concept, I felt that using blue would enhance their brand and its awareness among the target demographic, then blue would be used; in just the same way, if pink works for the customer, then, put simply, it works.

To conclude, exploring the key points of gender in the design world only showcases how many issues there still are.

The stigma surrounding mental illness

Mental illness is defined as a health problem resulting from complex interactions between an individual’s mind, body and environment which can significantly affect their behavior, actions and thought processes. A variety of mental illnesses exist, impacting the body and mind differently whilst affecting the individual’s mental, social and physical wellbeing to varying degrees. A range of psychological treatments have been developed to assist people living with mental illness; however, social stigma can prevent individuals from successfully engaging with these treatments. Social or public stigma is characterized by discriminatory behavior and prejudicial attitudes towards people with mental health problems resulting from the psychiatric label they possess (Link, Cullen, Struening & Shrout, 1989). The stigma surrounding labelling oneself with a mental illness causes individuals to hesitate to seek help and to resist treatment options. Stigma and its effects can vary depending on demographic factors including age, gender, occupation and community. There are many strategies in place to attempt to reduce stigma levels, which focus on educating people and changing their attitudes towards mental health.

Prejudice, discrimination and ignorance surrounding mental illnesses result in a public stigma which has a variety of negative social effects on individuals with mental health problems (Thornicroft et al 2007). An understanding of how stigma forms can be gained through the Attribution Model, which identifies four steps involved in the formation of a stigma (Link & Phelan, 2001). The first step in the formation of a stigma is ‘labelling’, whereby key traits are recognized as portraying a significant difference. The next step is ‘stereotyping’, whereby these differences are defined as undesirable characteristics, followed by ‘separating’, which makes a distinction between ‘normal’ people and the stereotyped group. Stereotypes surrounding mental illnesses have been developing for centuries, with early beliefs being that individuals suffering from mental health problems were possessed by demons or spirits. ‘Explanations’ such as these promoted discrimination within the community, preventing individuals from admitting any mental health problems for fear of retribution (Swanson, Holzer, Ganju & Jono, 1990). The final step in the Attribution Model described by Link and Phelan is ‘status loss’, which leads to the devaluing and rejection of individuals in the labelled group (Link & Phelan, 2001). An individual’s desire to avoid the implications of public stigma causes them to avoid or drop out of treatment for fear of being associated with negative stereotypes (Corrigan, Druss and Perlick, 2001). One of the main stereotypes surrounding mental illness, especially depression and Post Traumatic Stress Disorder, is that people with these illnesses are dangerous and unpredictable (Wang & Lai, 2008). Wang and Lai carried out a survey in which 45% of participants considered people with depression dangerous; these results may be subject to some reporting bias, yet a general inference can be made. Another survey found that a large proportion of people also confirmed that they were less likely to employ someone with mental health problems (Reavley & Jorm, 2011). This study highlights how public stigma can affect employment opportunities, consequently creating a greater barrier for anyone who would benefit from seeking treatment.

Certain types of stigma are unique and consequently more severe for certain groups within society. Approximately 22 soldiers or veterans commit suicide every day in the United States due to Post Traumatic Stress Disorder (PTSD) and depression. A study surveying soldiers found that, of all the people who met the criteria for a mental illness, only 38% would be interested in receiving help and only 23-30% actually ended up receiving professional help (Hoge et al, 2004). There is an enormous stigma surrounding mental illness within the military, due to the high value it places on mental fortitude, strength, endurance and self-sufficiency (Staff, 2004). A soldier who admits to having mental health problems is deemed not to adhere to these values and thus appears weak or dependent, placing greater pressure on the individual to deny or hide any mental illness. Another contributor to soldiers avoiding treatment is a fear of social exclusion, as it is common in military culture for some personnel to socially distance themselves from soldiers with mental health problems (Britt et al, 2007). This exclusion is due to the stereotype that mental health problems make a soldier unreliable, dangerous and unstable. Surprisingly, individuals with mental health problems who seek treatment are deemed more emotionally unstable than those who do not, and thus the stigma surrounding therapy creates a barrier for individuals to start or continue their treatment (Porath, 2002). Furthermore, soldiers also face the fear that seeking treatment will negatively affect their career, both in and out of the military, with 46 percent of employers considering PTSD an obstacle when hiring veterans in a 2010 survey (Ousley, 2012). The stigma associated with mental illness in the military is extremely detrimental to soldiers’ wellbeing, as it prevents them from seeking or successfully engaging in treatment for mental illnesses, which can have tragic consequences.

Adolescents and young adults with mental illness have the lowest rate of seeking professional help and treatment, despite the high occurrence of mental health problems (Rickwood, Deane & Wilson, 2007). Adolescents’ lack of willingness to seek help and treatment for mental health problems is catalyzed by the anticipation of negative responses from family, friends and school staff (Chandra & Minkovitz, 2006). A Queensland study of people aged 15–24 years showed that 39% of the males and 22% of the females reported that they would not request help for emotional or distressing problems (Donald, Dower, Lucke & Raphael, 2000). A 2010 survey of adolescents with mental health problems found that 46% described experiencing feelings of distrust, avoidance, pity and prejudice from family members, portraying how negative family responses and attitudes create a significant barrier to seeking help (Moses, 2010). Similarly, a study on adolescent depression noted that teenagers who felt more stigmatized, particularly within the family, were less likely to seek treatment (Meredith et al., 2009). Furthermore, adolescents with unsupportive parents would struggle to pay for treatment and transportation, further preventing successful treatment of the illness. Unfortunately, the generation of stigma is not unique to family members: adolescents also report having felt discriminated against by peers and even school staff (Moses, 2010). The first step toward seeking help and engaging in treatment for mental illness is to acknowledge that there is a problem and to be comfortable enough to disclose this information to another person (Rickwood et al, 2005). However, in another 2010 study of adolescents, many expressed fear of being bullied by peers, subsequently leading to secrecy and shame (Kranke et al., 2010). The role of public stigma in generating this shame and denial is significant and can thus be defined as a factor preventing adolescents from seeking support for their mental health problems. A 2001 study testing the relationship between adherence to medication (in this case, antidepressants) and perceived stigma levels determined that individuals who accepted the antidepressants had lower perceived stigma levels (Sirey et al, 2001). This empirical data illustrates the correlation between public stigma levels and an individual’s engagement in treatment, implying that stigma remains a barrier to treatment. Public stigma can therefore be defined as a causative factor in the majority of adolescents not seeking support or treatment for their mental health problems.

One of the main strategies used by society to help reduce the public stigma surrounding mental illness is education. Educating people about the common misconceptions of mental health challenges the inaccurate stereotypes and substitutes them with factual information (Corrigan et al., 2012). There is substantial evidence that people who have more information about mental health problems are less stigmatizing than people who are misinformed about them (Corrigan & Penn, 1999). The low cost and far-reaching nature are beneficial aspects of the educational approach. Educational approaches are often aimed at adolescents, as it is believed that by educating children about mental illness, stigma can be prevented from emerging in adulthood (Corrigan et al., 2012). A 2001 study testing the effect of education on 152 students found that levels of stigmatization were lessened following the implementation of the strategy (Corrigan et al, 2001). However, it was also determined that combining a contact-based approach with the educational strategy would yield the highest levels of stigma reduction. Studies have also shown that a short educational program can be effective at reducing individuals’ negative attitudes toward mental illness and increasing their knowledge of the issue (Corrigan & O’Shaughnessy, 2007). The effect of an educational strategy varies depending on what type of information is communicated. The information provided should deliver realistic descriptions of mental health problems and their causes as well as emphasizing the benefits of treatment. By delivering accurate information to people, the negative stereotypes surrounding mental illness can be decreased and the public’s views on the controllability and treatment of psychological problems can be altered (Britt et al, 2007). Educational approaches mainly focus on improving knowledge and attitudes surrounding mental illness and do not focus directly on changing behavior; therefore, a link cannot be clearly made as to whether educating people actually reduces discrimination. Although this remains a major limitation in today’s society, educating people at an early age can help ensure that discrimination and stigmatization will decrease in the future. Reducing the negative attitudes surrounding mental illness can encourage those suffering from mental health problems to seek help. Providing individuals with correct information regarding the mechanisms and benefits of treatment, such as psychotherapy or drugs like antidepressants, increases their own mental health literacy and therefore increases the likelihood of their seeking treatment (Jorm and Korten, 1997). People who are educated about mental health problems are less likely to believe or generate stigma surrounding mental illnesses and therefore contribute to reducing stigma, which in turn will increase levels of successful treatment for themselves or other individuals.

The public stigma surrounding mental health problems is defined by negative attitudes, prejudice and discrimination. This negativity in society is very debilitating for any individual suffering from mental illness and creates a barrier to seeking help and engaging in successful treatment. The negative consequences of public stigma for individuals include being excluded, not being considered for jobs, and having friends and family become socially distant. By educating people about the causes, symptoms and treatment of mental illnesses, stigma can be reduced, as misinformation is usually a key factor in the promotion of harmful stereotypes. An individual is more likely to engage in successful treatment if they are accepting of their illness and if stigma is reduced.

Frederick Douglass, Malcolm X and Ida Wells

Civil Rights are “the rights to full legal, social, and economic equality”. Following the American Civil War, slavery was officially abolished in the United States of America (US) on December 6th, 1865. The Fourteenth and Fifteenth Amendments established a legal framework for political equality for African Americans; many thought that this would lead to equality between whites and blacks, however this was not the case. Despite slavery’s abolition, Jim Crow racial segregation in the South meant that blacks were denied political rights and freedoms and continued to live in poverty and inequality. It took nearly 100 years of campaigning until the Civil Rights and Voting Rights Acts were passed, making it illegal to discriminate based on race, colour, religion, sex or national origin and ensuring minority voting rights. Martin Luther King was prominent in the Modern Civil Rights Movement (CRM), playing a key role in legislative and social change. His assassination in 1968 marked the end of a distinguished life spent helping millions of African Americans across the US. The contributions of black activists including the politician Frederick Douglass, the militant Malcolm X and the journalist Ida Wells throughout the period will be examined from political, social and economic perspectives. When comparing their significance to that of King, consideration must be given to the time in which each activist was operating and to prevailing social attitudes. Although King was undeniably significant, it was the combined efforts of all the black activists and the mass protest movement in the mid-20th century that eventually led to African Americans gaining civil rights.

The significance of King’s role is explored in Clayborne Carson’s ‘The Papers of Martin Luther King’ (Appendix 1). Carson, a historian at Stanford University, suggests that “the black movement would probably have achieved its major legislative victory without King’s leadership”. Carson does not believe King was pivotal in gaining civil rights, but rather that he quickened the process. The mass public support shown in the March on Washington in 1963 suggests that Carson is correct in arguing that the movement would have continued its course without King. However, it was King’s oratory skill in his ‘I Have a Dream’ speech that was most significant. Carson suggests key events would still have taken place without King: “King did not initiate…” the Montgomery bus boycott; rather, Rosa Parks did. His analysis of the idea of a ‘mass movement’ furthers his argument that King’s role was less significant. Carson suggests that ‘mass activism’ in the South resulted from socio-political forces rather than ‘the actions of a single leader’. King’s leadership was not vital to the movement gaining support, and legislative change would have occurred regardless. The source’s tone is critical of King’s significance but passive in its dismissal of his role; phrases such as “without King” are used to diminish him in a less aggressive manner. Carson, a civil rights historian with a PhD from UCLA, has written books and documentaries including ‘Eyes on the Prize’ and so is qualified to judge. The source was published in 1992 in conjunction with King’s wife, Coretta, who took over as head of the CRM after King’s assassination and extended its role to include women’s rights and LGBT rights. Although this may make him subjective, he attacks King’s role, suggesting he presents a balanced view. Carson produced his work two decades after the movement and three decades before the ‘Black Lives Matter’ marches of the 21st century, and so was less politically motivated in his interpretation. The purpose of his work was to edit and publish the papers of King on behalf of The King Institute, to show King’s life and the CRM he inspired. Overall, Carson argues that King had significance in quickening the process of gaining civil rights; however, he believes that without King’s leadership the campaigning would have taken a similar course and that US mass activism was the main driving force.

In his book ‘Martin Luther King Jr.’ (Appendix 2), historian Peter Ling argues, like Carson, that King was not indispensable to the movement, but differs in suggesting that it was other activists, rather than mass activism, who brought success. Ling believes that ‘without the activities of the movement’ King might just have been another ‘Baptist preacher who spoke well.’ It can be inferred that Ling believes King was not vital to the CRM and was simply a good orator.

Ling’s reference to activist Ella Baker (1903-86), who ‘complained that “the movement made Martin, not Martin the Movement”’, suggests that King’s political career was of more importance to him than the goal of civil rights. Baker told King she disapproved of his being hero-worshipped, and others argued that he was ‘taking too many bows and enjoying them’. Baker promoted activists working together, as seen through her influence in the Student Nonviolent Coordinating Committee (SNCC). Clearly many believed King was not the only individual to have an impact on the movement, and so Ling’s argument that multiple activists were significant is further highlighted.

Finally, Ling argues that ‘others besides King set the pace for the Civil Rights Movement’, which explicitly shows how other activists working for the movement were the true heroes: they orchestrated events and activities, yet it was King who benefitted. However, King himself suggested that he was willing to use successful tactics suggested by others. The work of activists such as Philip Randolph, who organised the 1963 March, highlights how individuals played a greater role in moving the CRM forward than King. The tone attacks King, using words such as ‘criticisms’ to diminish King’s role, while Ling says that he has ‘sympathy’ for Miss Baker, showing his positive tone towards other activists.

Ling was born in the UK and studied History at Royal Holloway College before taking an MA in American Studies at the Institute of United States Studies, London. This gives Ling an international perspective, making him less subjective as he has no political motivations; nevertheless, it also limits his interpretation in that he has no first-hand knowledge of civil rights in the US. The book was published in 2002; consequently Ling has hindsight, making his judgment more accurate and less subjective as he is no longer affected by King’s influence. Similarly, his knowledge of American history and the CRM makes his work accurate. Unlike Carson, who was a black activist and attended the 1963 March, Ling, who is white, was born in 1956 and was not involved with the CRM, and so will have a less accurate interpretation. A further limitation is his selectivity; he gives no attention to the successes of King, including his inspiring ‘I Have a Dream’ speech. As a result, it is not a balanced interpretation and thus its value is limited.

Overall, although weaker than Carson’s interpretation, Ling does give an argument that is of value when understanding King’s significance. Both revisionists, the two historians agree that King was not the most significant reason for gaining civil rights, but differ on who or what they see as more important: Carson argues that mass activism was vital to success, whereas Ling believes it was other activists.

A popular pastor in the Baptist Church, King was the leader of the CRM when it gained black rights successes in the 1960s. He demonstrated the power of the church and the NAACP in the pursuit of civil rights. His oratory skills ensured many blacks and whites attended the protests and increased support. He understood the power of the media in getting his message to a wide audience and in putting pressure on the US government. The Birmingham campaign of 1963, where peaceful protestors including children were violently attacked by police, and the inspirational ‘Letter from Birmingham Jail’ that King wrote were heavily publicised. US society gradually sympathised with the black ‘victims’. Winning the Nobel Peace Prize gained the movement further international recognition. King’s leadership was instrumental in the political achievements of the CRM, inspiring the grassroots activism needed to apply enough pressure on government, which behind-the-scenes activists like Baker had worked tirelessly to build. Nevertheless, there had been a generation of activists who played their parts, often through the church, publicising the movement, achieving early legislative victories and helping to kick-start the modern CRM and the idea of nonviolent civil disobedience. King’s significance is that he was the figurehead of the movement at the time when civil rights were eventually won.

Pioneering activist Frederick Douglass (1818-95) had political significance to the CRM, holding federal positions which enabled him to influence government and Presidents throughout the Reconstruction era. He is often called the ‘father of the civil rights movement’. Douglass held several prominent roles, including US Marshal for DC. He was the first black American to hold high office in government and, in 1872, the first African American nominated for US Vice President, particularly significant as blacks’ involvement in politics was severely restricted at the time. Like King he was a brilliant orator, lecturing on civil rights in the US and abroad. Compared to King, Douglass was significant in the CRM: he promoted equality for blacks and whites, although unlike King he did not ultimately achieve black civil rights, because he was confined by the era in which he lived.

The contribution of W.E.B. Du Bois (1868-1963) was significant, as he laid the foundations for future black activists, including King, to build on. In 1909 he helped establish the National Association for the Advancement of Colored People (NAACP), the most important 20th-century black organisation other than the church. King became a member of the NAACP and used it to organise the bus boycott and other mass protests. As a result, the importance of Du Bois to the CRM is that King’s success depended on the NAACP; therefore Du Bois is of similar significance, if not more, than King in pursuing black civil rights.

Ray Stannard Baker’s 1908 article for The American Magazine speaks of Du Bois’ enthusiastic attitude to the CRM, his intelligence and his knowledge of African Americans (Appendix 3). The quotation of Du Bois at the end of the extract reads “Do not submit! agitate, object, fight,” showing he was not passive but preaching messages of rebellion. The article describes him with vocabulary such as “critical” and “impatient”, showing his radical, passionate side. Baker also states Du Bois’ contrasting opinions compared with Booker T. Washington, one of his contemporary black activists; this is evident when it says “his answer was the exact reverse of Washington’s”, demonstrating how he differed from the passive, ‘education for all’ Washington. Du Bois valued education, but believed in educating an elite few, the ‘talented tenth’, who could strive for rapid political change. The tone is positive towards Du Bois, praising him as a ferocious character dedicated to achieving civil rights; through phrases such as “his struggles and his aspirations” this dedicated, praising tone is developed. The American Magazine, founded in 1906, was an investigative US paper. Many contributors to the magazine were ‘muckraking’ journalists, meaning they were reformists who attacked societal views and traditions. As a result, the magazine would be subjective, favouring the radical Du Bois, challenging the Jim Crow South and appealing to its radical target audience. The purpose of the source was to confront racism in the US and so it would be politically motivated, making it subjective regarding civil rights. However, some evidence suggests that Du Bois was not radical; his Paris Exposition in 1900 showed the world real African Americans. Socially he made a major contribution to black pride, contributing to the black unity felt during the Harlem Renaissance. The Renaissance popularised black culture and so was a turning point in the movement; in the years after, the CRM grew in popularity and became a national issue. Finally, the source refers to his intelligence and educational prowess; he carried out economic studies for the US Government and was educated at Harvard and abroad. As a result, it can be inferred that Du Bois rose to prominence and made a significant contribution to the movement because of his intelligence and his understanding of US society and African American culture. One of the founders of the NAACP, his significance in attracting grassroots activists and uniting black people was vital. The NAACP leader Roy Wilkins, speaking at the March on Washington following Du Bois’ death the day before, highlighted his contribution: “his was the voice that was calling you to gather here today in this cause.” Wilkins was suggesting that Du Bois had started the process which led to the March.

Rosa Parks (1913-2005) and Charles Houston (1895-1950) were NAACP activists who benefitted from the work of Du Bois and achieved significant political success in the CRM. Parks, the “Mother of the Freedom Movement”, was the spark that ignited the modern CRM by protesting on a segregated bus. Following her refusal to move to the black area she was arrested. Parks, King and NAACP members staged a year-long bus boycott in Montgomery. Had it not been for Parks, King may never have had the opportunity to rise to prominence or to gain mass support for the movement, and so her activism was key in shaping King. The lawyer Houston helped defend black Americans, breaking down the deep-rooted discriminatory and segregationist laws in the South. It was his ground-breaking use of sociological theories that formed the basis of Brown v. Board of Education (1954), which ended segregation in schools. Although Houston is less prominent than King, his work was significant in reducing black discrimination, gaining him the nickname ‘The man who killed Jim Crow’. Nonetheless, had Du Bois’ NAACP not existed, Parks and Houston would never have had an organisation to support them in their fight; likewise King would never have gained mass support for civil rights.

Trade unionist Philip Randolph (1890-1979) brought about important political changes. His pioneering use of nonviolent confrontation had a significant impact on the CRM and was widely used throughout the 1950s and 60s. Randolph had become a prominent civil rights spokesman after organising the Brotherhood of Sleeping Car Porters in 1925, the first black-majority union. Mass unemployment after the US Depression led to civil rights becoming a political issue; US trade unions supported equal rights and black membership grew. Randolph was striving for political change that would bring equality. Aware of his influence, in 1941 he threatened a protest march which pressured President Roosevelt into issuing Executive Order 8802, an important early employment civil rights victory. There was then a shift in the direction of the movement towards the military, because after the Second World War black soldiers felt disenfranchised and became the ‘foot soldiers of the CRM’, fighting for equality in these mass protests. Randolph led peaceful protests which resulted in President Truman issuing Executive Order 9981, desegregating the Armed Forces, showing his key political significance. Significantly, this legislation was a catalyst leading to further desegregation laws. His contribution to the CRM, support of King’s leadership and masterminding of the 1963 March made his significance equal to King’s.

King realised that US society needed to change and, inspired by Gandhi, he too used non-violent mass protest to bring about change, including the Greensboro sit-ins to desegregate lunch counters. Similarly, activist Booker T. Washington (1856-1915) significantly improved the lives of thousands of southern blacks who were poorly educated and trapped in poverty following Reconstruction, through his pioneering work in black education. He founded the Tuskegee Institute. In his book ‘Up from Slavery: An Autobiography’ (Appendix 4) he suggests that gaining civil rights would be difficult and slow, but that all blacks should work on improving themselves through education and hard work to peacefully push the movement forward. He says that “the according of the full exercise of political rights” will not be an “overnight gourdvine affair” and that a black man should “deport himself modestly in regard of political claim”. This suggests that Washington wanted peaceful protest and acknowledged the time it would take to gain equality, making his philosophy like King’s. Washington’s belief in using education to gain the skills to improve lives and fight for equality is evident through the Tuskegee Institute, which educated 2,000 blacks a year.

The tone of the source is peaceful, calling for justice in the South. Washington uses words such as “modestly” in an appeal for peace and “exact justice” to show how he believes in equal political rights for all. The reliability of the source is mixed: Washington is subjective, as he wants his autobiography to be read, understood and supported. The intended audience would have been anyone in the US, particularly blacks, whom Washington wanted to inspire to protest, and white politicians who could advance civil rights. The source is accurate in its context: it was written in 1901, during the era of the Jim Crow South. Washington would have been politically motivated in his autobiography, demanding legislative change to give blacks civil rights. There would also have been an educational factor contributing to his writing, his Tuskegee Institute and educational philosophy having a deep impact on his autobiography.

The source shows how and why the unequal South should no longer be segregated. Undoubtedly significant, as his reputation grew Washington became an important public speaker and, like King, is considered to have been a leading spokesman for black people and their issues. An excellent role model, a former slave who influenced statesmen, he was the first black American to dine with the President (Roosevelt) at the White House, showing blacks they could achieve anything. The activist Du Bois described him as “the one recognised spokesman of his 10 million fellows … the most striking thing in the history of the American Negro”. Although not as decisive in gaining civil rights as King, Washington was important in preparing blacks for urban and working life and in empowering the next generation of activists.

Inspired by Washington, the charismatic Jamaican radical activist Marcus Garvey (1880-1940) arrived in the US in 1916. Garvey had a social significance to the movement, striving to better the lives of US blacks. He rose to prominence during the ‘Great Migration’, when poor southern blacks were moving to the industrial North, turning Southern race problems into national ones. He founded the Universal Negro Improvement Association (UNIA), which had over 2,000,000 members in 1920. He appealed to discontented First World War black soldiers who had returned home to violent racial discrimination; the First World War was paramount in enabling Garvey to gain the vast support he did in the 1920s. Garvey published a newspaper, the Negro World, which spread his ideas about education and Pan-Africanism, the political union of all people of African descent. Like King, Garvey gained a greater audience for the CRM: in 1920 he led an international convention in Liberty Hall and a parade of 50,000 through Harlem. Garvey inspired later activists such as King.

Reflective essay on use of learning theories in the classroom

Over recent years teaching theories have become more common in the classroom, all in the hope of supporting students and being able to further their knowledge by understanding their abilities and what they need to develop. As a teacher it is important to embed teaching and learning theories in the classroom, so that as teachers we can teach students according to their individual needs.

Throughout my research I will be looking into the key differences between two theories used in classrooms today. I will also be critically analysing the role of the teacher in the lifelong learning sector by analysing the professional and legislative frameworks, as well as seeking a deeper understanding of classroom management: why it is used and how to manage different classroom environments, such as managing inclusion and how it is supported through different methods.

Overall, I will be linking this to my own teaching at A Mind Apart (A Mind Apart, 2019). Furthermore, I will gain the ability to understand interaction within the classroom and why communication between fellow teachers and students is important.

The teacher is traditionally seen as being at the forefront of knowledge. This suggests that the role of the teacher is to pass their knowledge on to their students, known as a ‘chalk and talk’ approach, although this approach is outdated and there are various other ways we now teach in the classroom. Walker believes that ‘the modern teacher is facilitator: a person who assists students to learn for themselves’ (Reece & Walker 2002). I cannot say I fully believe in this approach, as all students have individual learning needs, and some may need more help than others. As the teacher, it is important to know the full capability of your learners, so that lessons can be structured to the learners’ needs. It is important for lessons to involve active learning and discussion; these help keep students engaged and motivated during class. Furthermore, it is important not only to know what you want the students to be learning, but also, as the teacher, to know what you are teaching; it is important to be prepared and fully involved in your own lesson before you go into any class. As a teacher I make my students my priority, so I leave any personal issues outside the door in order to give my students the best learning environment they could possibly have. It is also important to keep updated on your subject specialism; I double-check my knowledge of my subject regularly, and I find that by following this structure my lessons normally run at a smooth pace.

Taking into consideration that the students I teach are vulnerable, there may be minor interruptions. It is not only important that you as the teacher leave your issues at the door, but also that you make sure the room is free from distractions; many young adults have situations in their lives which they find hard to deal with, which means you as the teacher are there not only to educate but also to make the environment safe and relaxing for your students to enjoy learning. As teachers we not only have the responsibility of making sure the teaching takes place, but we also have responsibilities around exams, qualifications and Ofsted; and as a teacher in the lifelong learning sector it is also vital that you evaluate not only your learners’ knowledge but also yourself as a teacher, so that you are able to improve your teaching strategies and keep up to date.

When assessing yourself and your students it is important not to wait until the end of a term, but to evaluate throughout the whole term. Small assessments are a good way of doing this; it does not always have to be a paper examination. You can equally do a quiz, ask questions, use various fun games, or even use online games such as Kahoot to help your students consolidate their knowledge. This will not only help you as a teacher understand your students’ abilities, but will also help your students know what they need to work on for next term.

Alongside the roles and responsibilities already listed for a teacher in the lifelong learning sector, Ann Gravells explains that,

‘Your main role as a teacher should be to teach your students in a way that actively involves and engages your students during every session’ (Gravells, 2011, p.9.)

Gravells' passion is based on helping new teachers gain the knowledge and information they need to become successful in the lifelong learning sector. She has achieved this by writing various textbooks on the sector. In her book Preparing to Teach in the Lifelong Learning Sector (Gravells, 2011), she sets out the importance of 13 pieces of legislation. Although I find each of them equally important, I am going to mention the ones I am most likely to draw on during my teacher training with A Mind Apart.

Safeguarding Vulnerable Groups Act (2006) – Working with vulnerable young adults, I find this is the act I am most likely to draw on during my time with A Mind Apart. In summary, the Act explains the following: 'The ISA will make all decisions about who should be barred from working with children and vulnerable adults.' (Southglos.gov.uk, 2019)
The Equality Act (2010) – As I will be working with students of different sexes, races and disabilities in any teaching job I encounter, I believe the Equality Act (2010) is fundamental to mention. The Equality Act 2010 brings discrimination law together under one piece of legislation.
Code of Professional Practice (2008) – This code covers all aspects of the activities we as teachers in the lifelong learning sector may encounter. It is based around seven behaviours, including professional practice, professional integrity, respect, reasonable care, criminal offence disclosure, and responsibility during institute investigations.

(Gravells, 2011)

Although all the acts are equally important, those are the few I would find myself using regularly. I have listed the others below:

Children Act (2004)
Copyright, Designs and Patents Act (1988)
Data Protection Act (1998)
Education and Skills Act (2008)
Freedom of Information Act (2000)
Health and Safety at Work Act (1974)
Human Rights Act (1998)
Protection of Children Act (POCA) (1999)
The Further Education Teachers' Qualifications Regulations (2007)

(Gravells, 2011)

Teaching theories are now widely used in classrooms; there are three main theories which we as teachers are known for using daily and which are generally found to work best: behaviourism, cognitive constructivism and social constructivism. Taking these into consideration, I will compare Skinner's behaviourist theory with Maslow's 'Hierarchy of Needs' (Maslow, 1987), first introduced in 1954, and consider how I could use these theories in my teaching as a drama teacher in the lifelong learning sector.

Firstly, behaviourism is mostly described as the teacher questioning and the student responding in the way the teacher wants. Behaviourism is a theory which, used to its full advantage, can shape how the student acts and behaves. Keith Pritchard (Language and Learning, 2019) describes behaviourism as 'A theory of learning focusing on observable behaviours and discounting any mental activity. Learning is defined simply as the acquisition of a new behaviour.' (E-Learning and the Science of Instruction, 2019)

An example of how behaviourism works is best demonstrated through the work of Ivan Pavlov (Encyclopaedia Britannica, 2019). Pavlov was a physiologist at the start of the twentieth century who used a method called 'conditioning' (Encyclopaedia Britannica, 2019), which is closely related to behaviourist theory. During his experiment, Pavlov 'conditioned' dogs to salivate when they heard a bell ring: as soon as the dogs heard the bell, they associated it with being fed. As a result the dogs were behaving exactly as Pavlov wanted them to behave, and so they had successfully been 'conditioned'. (Encyclopaedia Britannica, 2019)

During Pavlov’s conditioning experiment there are four main stages in the process of classical conditioning, these include,

Acquisition, which is the initial learning;
Extinction, where the conditioned response fades; the dogs in Pavlov's experiment may stop responding if no food is presented to them;
Generalisation, where, after learning a response, the subject may respond to other, similar stimuli with no further training. For example, if a child falls off a bike and injures themselves, they may be frightened to get back on any bike again. And lastly,
Discrimination, which is the opposite of generalisation: the dog will not respond in the same way to another stimulus as it did to the first one.

Pritchard states that conditioning 'involves reinforcing a behaviour by rewarding it', which is what Pavlov's dog experiment does. Although rewarding behaviour can encourage it, reinforcement can also work the other way: bad behaviour can be discouraged by punishment. The key aspects of conditioning are as follows: reinforcement, positive reinforcement, negative reinforcement and shaping. (Encyclopaedia Britannica, 2019)

Behaviourism is one of the learning theories I use in my teaching today. I work with challenging young people at A Mind Apart (A Mind Apart, 2019), a performing arts organisation aimed specifically at vulnerable and challenging young people, to help better their lives; used well, the behaviourist approach can genuinely inspire these students to do better. Behaviourism rests on the principle of stimulus and response: it is driven by the teacher, who is responsible for how the student behaves and how the learning is completed. The theory emerged in the early twentieth century and concentrated on how individuals behave. In my work as a trainee performing arts teacher at A Mind Apart I can relate to behaviourism a great deal: every Thursday, when my two-hour class has finished, I take five minutes out of the lesson to award a 'Star of the Week'. It is an excellent way to encourage students to carry on behaving as they have been, and to give them something to strive towards in the future. Furthermore, I have found that this theory can work well in any subject specialism, not just performing arts. Behaviourist theory is straightforward, as it depends only on observable behaviour and describes several general laws of behaviour, and its positive and negative reinforcement strategies can be extremely effective. The students we teach at A Mind Apart often come to us with mental health issues, which is why many of them find it hard to focus, or even to learn, in a school environment; we are there to provide an inclusive learning environment and to use the time we have with them so that they can move forward at their own pace and build on their academic and social skills for the future, whether they move on to college or into jobs. Our work with them also helps them meet new people and gain useful knowledge through behaviourist teaching. Although it is not always easy to shape how someone thinks or behaves, with time and persistence I have found that this theory can work. It is known that…

‘Positive reinforcement or rewards can include verbal feedback such as ‘That’s great, you’ve produced that document without any errors’ or ‘You’re certainly getting on well with that task’ through to more tangible rewards such as a certificate at the end’… (Information List of topics Assessment Becoming a teacher Continuing Professional Development (CPD) Embedding maths et al., 2019)

Gagné (Mindtools.com, 2019) was an American educational psychologist best known for his nine levels of learning (Mindtools.com, 2019). I have researched a couple of these nine levels in more depth, so that I can understand them and how his theory links to behaviourism. The nine levels are:

Create an attention-grabbing introduction.
Inform learner about the objectives.
Stimulate recall of prior knowledge.
Create goal-centred eLearning content.
Provide online guidance.
Practice makes perfect.
Offer timely feedback.
Assess early and often.
Enhance transfer of knowledge by tying it into real world situations and applications.

(Mindtools.com, 2019)

Informing the learner of the objectives is the level I relate to most in my lessons. I find it important, in many ways, that you as the teacher let your students know what they are going to be learning during that specific lesson. This helps them gain a better understanding throughout the lesson and engages them from the very start. Linking this to behaviourism, during my lessons I tell my students what I want from them in that lesson and what I expect them, with their individual needs, to be learning or to have learnt by the end of it. If I believe learning has taken place during my lesson, I reward the class with a game of their choice at the end. In their minds they understand that they must do as the teacher asks, or the reward of playing a game at the end of the lesson will be forfeited. As Pavlov's dog experiment shows (E-Learning and the Science of Instruction, 2019), this approach does work, although it can take a lot of work. I have built a great relationship with my students, and most of the time they are willing to work to the best of their ability.

Although Skinners’ (E-Learning and the Science of Instruction, 2019) behaviourist theory is based around manipulation, Maslow’s ‘Hierarchy Of Needs’ (Very well Mind, 2019) believes that behaviour and the way people act is based upon childhood events, therefore it is not always easy to manipulate in to the way you think, as they may have had a completely different upbringing, which will determine how they act. Maslow (Very well Mind, 2019) feels, if you remove the obstacles that stop the person from achieving, then they will have a better chance to achieve their goals; Maslow argues that there are five different needs which must be met in order to achieve this. The highest level of needs is self-actualisation which means the person must take full reasonability for their self, Maslow believes that people can go through to the highest levels, if they are in an education which can produce growth. Below is the table of Maslow’s ‘Hierarchy of needs’ (Very well Mind, 2019)

(Information List of topics Assessment Becoming a teacher Continuing Professional Development (CPD) Embedding maths et al., 2019)

In short, the table sets out your learners' needs at different levels during their time in your learning environment. Learners may be at different levels, but they should be able to progress to the next one when they feel comfortable doing so. There may be knockbacks which your learners face as individuals, but it is these needs that motivate learning. You may also find that not all learners want to progress through the levels at that moment in time; for example, if a learner is happy with the progress they have achieved so far and is content with life, they may want to stay at a certain level.

It is important to use the levels to encourage your learners to work their way up the table.

Stage 1 of the table is physiological needs – are your learners comfortable in the environment you are providing? Are they hungry or thirsty? They may even be tired. Any of these factors may stop learning from taking place, so it is important to meet all your learners' physiological needs.

Moving up the table is safety and security – make your learners feel safe, in an environment where they can relax and feel at ease. Are your learners worried about anything in particular? If so, can you help them overcome their worries?

Recognition – do your learners feel they are part of the group? It is important to help those who do not feel part of the group to bond with others; help your learners belong and make them feel welcome. Once recognition is in place your learners will start to build their self-esteem: are they learning something useful? Although your subject knowledge may be second to none, it is important that your passion and drive shine through your teaching. This leads to the highest level, self-actualisation: are your learners achieving what they want to achieve? Make the sessions interesting and your learners will remember more about the subject in question. (Verywell Mind, 2019)

Furthermore, classroom management comes into force with any learning theory you use whilst teaching. Classroom management is made up of various techniques and skills that we as teachers utilise. Most of today's classroom management systems are highly effective, as they increase student success. As a trainee teacher, I understand that classroom management can be difficult at times, so I am always researching different methods for managing my class. I do not believe this comes entirely from methods, though: if your pupils respect you as a teacher, and they understand what you expect of them whilst in your class, you should be able to manage the class well. Relating this to my placement at A Mind Apart, my students know what I expect of them, and as a result my classroom management is normally good. Following this, there are a few classroom management techniques I tend to follow:

Demonstrating the behaviour you want to see – eye contact whilst talking, phones away in bags or coats, listening when being spoken to and being respectful of each other; these are all good codes of conduct to follow, and they are my main rules whilst in the classroom.
Celebrating hard work or achievements – when I think a student has done well, we as a group will celebrate their achievement, whether it be in education or outside it; a celebration always helps with classroom management.
Making your sessions engaging and motivating – this is something all of us trainee teachers find difficult in our first year. As I have found personally over the first couple of months, you have to get to know your learners, understand what they like to do, and learn which activities keep them engaged.
Building strong relationships – I believe having a good relationship with your students is one of the key factors in managing a classroom. It is important to build trust with your students, make them feel safe and let them know they are in a friendly environment.

When it comes to being in a classroom environment, not all students will adapt to it in the same way, so some may require a different kind of structure to feel included. A key example is students with physical disabilities: you may need to adjust the tables or move them out of the way, or adjust the seating so a student is able to see more clearly. If a student has hearing problems, you could write more down on the board, or give them a sheet at the start of the lesson which tells them what you will be discussing and any further information they may need to know. It is not only physical disabilities that need to be taken into consideration; it is also important to cater for those who have behavioural problems, and to adjust the space so that your students feel safe whilst in your lesson.

Managing your class also means that you may sometimes have to adjust your teaching methods to suit everyone in your class, and understand that it is important to incorporate cultural values. Whilst in the classroom, or even when setting homework, you may need to take into consideration that some students, especially those with learning difficulties, may take longer to do the work or may need additional help.

Conclusion

Research has given me a new insight into how many learning theories, teaching strategies and classroom management strategies there are; there are books and websites to help you achieve all the things you need to be able to do in your classroom. Looking back over this essay, I have looked into the two learning theories that I am most likely to use.


Synchronous and asynchronous remote learning during the Covid-19 pandemic

Student’s Motivation and Engagement

Motivation plays an important role in student engagement. Saeed and Zyngier (2012) contend that in order to assess student motivation, researchers should also examine engagement in, and as part of, learning. This shows that there is a relationship between student motivation and engagement. In support of this relationship, Hufton, Elliot, and Illushin (2002) believe that high levels of engagement indicate high levels of motivation: in other words, when students' levels of motivation are high, their levels of engagement are also high.

Moreover, Dörnyei (2020) suggests that the concept of motivation is closely associated with engagement, and with this he asserted that motivation must be ensured in order to achieve student engagement. He further offered that any instructional design should aim to keep students engaged, regardless of the learning context, may it be traditional or e-learning. In addition, Lewis et al (2014) reveal that within the online educational environment, students can be motivated by delivering an engaging student-centered experience consistently.

In the context of the Student-Teacher Dialectical Framework embedded in Self-Determination Theory, Reeve (2012) identifies three newly recognised functions of student engagement. First, engagement bridges students' motivation to highly valued outcomes. Second, student engagement affects the future quality of the learning environment, especially the flow of instruction, the external events within it, and the teacher's motivating style. Third, student engagement changes motivation, which means that engagement causes changes in motivation in the future. This highlights that student motivation is both a cause and a consequence. The assertion that engagement can cause changes in motivation rests on the idea that students can take action to meet their own psychological needs and enhance the quality of their motivation. Further, Reeve (2012) asserts that students can be, and are, architects of their own motivation, at least to the extent that they can be architects of their own course-related behavioral, emotional, cognitive, and agentic engagement.

Synchronous and Asynchronous Learning

The COVID-19 pandemic brought great disruption to education systems around the world. Schools struggled with a situation that led to the cessation of classes for an extended period of time and to other restrictive measures that impeded the continuation of face-to-face classes. In consequence, there has been a massive change in educational systems around the world as institutions strive and put their best efforts into resolving the situation. Many schools addressed the risks and challenges of continuing education amid the crisis by shifting from conventional or traditional learning to distance learning. Distance learning is a form of education, supported by technology, that is conducted beyond physical space and time (Papadopulou, 2020). It is an online mode of education that provides opportunities for educational advancement and learning development to learners worldwide. In order to sustain the educational goals of our country, distance learning is a new way of providing quality education, as far as possible, in public and private institutions, especially for those pursuing higher education. Instructional delivery in distance education can be through a synchronous or an asynchronous mode of learning, through which students can engage and continue to attain quality education despite the pandemic.

Based on the definition given by the Easy LMS Company (2020), synchronous learning refers to a learning event in which a group of participants is engaged in learning at the same time (e.g., a Zoom meeting, a web conference, a real-time class), while asynchronous learning refers to the opposite, in which the instructor, the learner, and the other participants are not engaged in the learning process at the same time, so there is no real-time interaction (e.g., pre-recorded discussions, self-paced learning, discussion boards). According to an article issued by the University of Waterloo (2020), synchronous learning is a form of learning delivered as a live presentation which allows students to ask questions, while asynchronous learning can be a recorded presentation that gives students time to reflect before asking questions. Synchronous learning is a typical meeting of students in a virtual setting with a class discussion in which everybody can participate actively. Asynchronous learning uses a learning platform or portal where teachers or instructors can post and update lessons or activities and students can work at their own pace. These types of class instruction are commonly observed at present, and students have their own preferences as to what works best for them.

In comparing the two types of learning, it is valuable to know their advantages and disadvantages in order to see how each will really affect students. Wintemute (2021) notes that synchronous learning offers greater engagement and direct communication, but it requires a strong internet connection. Asynchronous learning, on the other hand, has the advantages of schedule flexibility and greater accessibility, yet it is less immersive and brings challenges of procrastination, socialization and distraction. Students in synchronous learning tend to adapt to learning alongside classmates in a virtual setting, while asynchronous learning introduces a new setting in which students can choose when to study.

In the middle of the crisis, asynchronous learning can be more favorable than synchronous learning, because most of us are struggling in this pandemic. One of the principal advantages of asynchronous online learning is that it offers more flexibility, allowing learners to set their own schedule and work at their own pace (Anthony and Thomas, 2020). In contrast, synchronous learning allows students to feel connected in a virtual world and can reassure them that they are not isolated while studying, because they have live interaction and can exchange ideas and other valuable input with the class, with the teacher's help, to understand the lessons well. The main advantages of synchronous learning are that instructors can explain specific concepts when students are struggling and that students can get immediate answers to their concerns during the learning process (Hughes, 2014). As Delgado (2020) argues, the advantages and disadvantages will not matter without a pedagogical methodology that considers the technology and its optimization; furthermore, the quality of learning depends on good planning and design, and on reviewing and evaluating each type of learning modality.

Synthesis

Motivating students has been a key challenge facing instructors in the context of online learning (Zhao et al., 2016). Motivation is one of the bases on which students do well in their studies: when students are motivated, the outcome is good marks. In short, motivation is a way to push them to study more and earn high grades. According to Zhao (2016), research on motivation in online learning environments reveals that there are differences in learning motivation among students from different cultural backgrounds. Motivation is described as "the degree of people's choices and the degree of effort they will put forth" (Keller, 1983). Learning is closely linked to motivation because it is an active process that necessitates intentional and deliberate effort. Educators must build a learning atmosphere in which students are highly encouraged to participate both actively and productively in learning activities if they want students to get the most out of school (Stipek, 2002). John Keller (1987) revealed in his study that attention and motivation will not be maintained unless the learner believes the teaching and learning are relevant. According to Zhao (2016), a strong interest in a topic will lead to mastery goals and intrinsic motivation.

Engagement can be perceived in the interaction between students and teachers in online classes. Student engagement, according to Fredricks et al. (2004), is a meta-construct that includes behavioral, affective, and cognitive involvement. While there is substantial research on behavioral (i.e., time on task), emotional (i.e., interest and value), and cognitive engagement (i.e., self-regulation and learning strategies), what distinguishes engagement is its capacity as a multidimensional or "meta"-construct that encompasses all three dimensions.

In synthesis, motivation and engagement are closely linked: Saeed and Zyngier (2012) argue that assessing student motivation requires examining engagement in and as part of learning, and Lewis et al. (2014) show that, within the online educational environment, students can be motivated by consistently delivering an engaging, student-centered experience. Reeve (2012), working within the Student-Teacher Dialectical Framework and Self-Determination Theory, identifies three functions of engagement: it bridges students' motivation to highly valued outcomes, it shapes the future quality of the learning environment, and it feeds back into motivation itself.

Distance learning, delivered synchronously or asynchronously, has been the means of sustaining quality education during the pandemic. Synchronous learning is live and interactive, letting students ask questions and participate in class discussion in a virtual setting (University of Waterloo, 2020), with greater engagement and direct communication but a dependence on a strong internet connection (Wintemute, 2021). Asynchronous learning, by contrast, uses recorded presentations and learning platforms where students work at their own pace; it offers schedule flexibility and accessibility (Anthony and Thomas, 2020), but it is less immersive and brings risks of procrastination, socialization problems and distraction. In the middle of the crisis, asynchronous learning can be the more favorable mode for many struggling students, while synchronous learning helps students feel connected and not isolated as they study.


‘Peak Oil’ – what are the solutions?

The ability to harness energy sources and put them towards a productive use has played a crucial role in economic development worldwide. Easily accessible oil helped to fuel continued expansion in the 20th century. Agricultural production was transformed by motorised farm equipment and petroleum-based fertilisers and pesticides. Cars, trucks and airplanes powered by oil products revolutionised the transportation of people and goods. Oil provides fuel for home heating, electricity production, and to power industrial and agricultural equipment. It also provides the source material for the construction of plastics, many fertilisers and pesticides and many industrial chemicals and materials. It is now difficult to find any product that does not require the use of oil at some point in the production process.

Oil has several advantages over other fossil fuels: it is easily transportable and energy-dense, and when refined it is suitable for a wide variety of uses. Considering the important role that oil plays in our economy, if persistent shortages were to emerge, the economic implications could be enormous. However, there is no consensus as to how seriously the threat of oil resource depletion should be taken. Some warn of a colossal societal collapse in the not-too-distant future, while others argue that technological progress will allow us to shift away from oil before resource depletion becomes an issue.

How much of a problem oil depletion poses depends on the amount of oil that remains accessible at reasonable cost, and on how quickly the development of alternatives allows the demand for oil to be reduced. This is what the term 'peak oil' refers to: the point at which the demand for oil outstrips the available supply. Demand and supply each evolve over time following patterns based on historical data, while supply is also constrained by resource availability. There is no mechanism for the market on its own to address concerns about climate change. However, if policies are put in place to build the costs of climate change into the price of fossil fuel consumption, this should trigger market incentives that lead efficiently to the desired emission reductions.

A while ago the media was filled with stories about peak oil, and it even featured in an episode of The Simpsons. Peak oil, in basic terms, means the point at which we have used all of the easy-to-extract oil and are left only with oil that is hard to reach and therefore expensive to extract and refine. There is still a huge amount of debate amongst geologists and petroleum-industry experts about how much oil is left in the ground. Since then, however, the idea of a near-term peak in world oil supplies has been discredited. The term now used is peak oil demand: the idea that, because of the proliferation of electric cars and other sources of energy, demand for oil will reach a maximum and start to decline, and indeed consumption levels in some parts of the world have already begun to stagnate.

The other theory that has been put forward is that, with supply beginning to exceed demand, not enough investment is going into future oil exploration and development. Without this investment, production will decline; but on this view production is not declining because of supply problems, rather we are moving into an age of oil abundance and any decline in production is caused by other factors. There has been an explosion of popular literature recently predicting that oil production will peak soon and that oil shortages will force us into major lifestyle changes in the near future; a good example of this is Heinberg (2003). The point at which oil production reaches a peak and begins to decline permanently has been referred to as 'Peak Oil'. Predictions for when this will occur range from 2007 to 2025 (Hirsch, 2005).
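To make the idea of a production peak concrete, the sketch below plots the logistic (Hubbert-style) curve often used in peak-oil analysis; the recoverable total, peak year and curve width are purely illustrative assumptions, not figures from this essay or from Hirsch (2005).

    # Minimal sketch of a Hubbert-style (logistic) oil production curve.
    # All parameter values are illustrative assumptions.
    import math

    URR = 2000.0       # ultimately recoverable resource, billion barrels (assumed)
    PEAK_YEAR = 2010   # year of maximum production (assumed)
    WIDTH = 15.0       # controls how broad the peak is, in years (assumed)

    def annual_production(year):
        """Production implied by the logistic curve, in billion barrels per year."""
        x = (year - PEAK_YEAR) / (2.0 * WIDTH)
        return URR / (4.0 * WIDTH) / math.cosh(x) ** 2

    # Production rises to a maximum at PEAK_YEAR and then declines symmetrically.
    for year in range(1980, 2041, 10):
        print(year, round(annual_production(year), 1), "billion barrels per year")

On this curve the peak rate equals URR / (4 x WIDTH), so the debate over when peak oil arrives is really a debate over the size of the recoverable total and how quickly it is produced.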

The Hirsch Report of 2005 concluded that it would take a modern industrial nation such as the UK or the United States at least a full decade to prepare for peak oil. Since 2005 there has been some movement towards solar and wind power together with more electric cars but nothing that deals with the scale of the problem. This has been compounded by Trump coming to power in the United States and deciding to throw the energy transition into reverse, discouraging alternative energy and expanding subsidies for fossil fuels.

What is happening now

Many factors are reported in the news as causing changes in oil prices: supply disruptions from wars and other political events, from hurricanes or from other random events; changes in demand expectations based on economic reports, financial market events or even weather in areas where heating oil is used; changes in the value of the dollar; reports of inventory levels, and so on. These are all factors that will affect the supply of and demand for oil, but they often influence the price of oil before they have any direct impact on the current supply or demand for crude. Last year, the main forces pushing the oil market higher were the agreement by OPEC and its partners to lower production and the growth of global demand. This year, an array of factors is pressuring the oil markets: US sanctions that threaten to cut Iranian oil exports, and falling production from Venezuela. Moreover, there are supply disruptions in Libya, the Canadian tar sands, Norway and Nigeria that add to the uncertainties, as does erratic policymaking in Washington, complete with threats to sell off part of the US strategic reserve, and a weaker dollar. Goldman Sachs continues to expect that Brent crude prices could retest $80 a barrel this year, but probably only late in 2018: "Production disruptions and large supply shifts driven by US political decisions are the drivers of this new volatility, with demand remaining robust so far." Brent crude is expected to trade in the $70-$80 a barrel range in the immediate future.

OPEC

Saudi Arabia and Russia had started to raise production even before the 22 June 2018 meeting with OPEC that sought to address the shrinking global oil supply and rising prices. OPEC had been over-complying with the cuts agreed at the November 2016 meeting, thanks to additional cuts from Saudi Arabia and Venezuela. The 22 June 2018 meeting decided to increase production to reflect the production-cut agreement more closely. After the meeting, Saudi Arabia pledged a "measurable" supply boost but gave no specific numbers. Tehran's oil minister warned his Saudi Arabian counterpart that the 22 June revision to the OPEC supply pact does not give member countries the right to raise oil production above their targets. The Saudis, Russia and several of the Gulf Arab states increased production in June but seem reluctant to expand much further. During the summer months, the Saudis always need to burn more crude in their power stations to combat the very high temperatures of their summer.

US Shale oil production

According to the EIA’s latest drilling productivity Report, US unconventional oil production is projected to rise by 143,000 b/d in August to 7.470 billion b/d. The Permian Basin is seen as far outdistancing other shale basins in monthly growth in August, at 73,000 b/d to 3,406 million b/d. However, drilled but uncompleted (DUC) wells in the Permian rose 164 in June to 3,368, one of the largest builds in recent months. Total US DUCs rose by 193 to 7,943 in June. US energy companies last week cut oil rigs the most in a week since March as the rate of growth had slowed over the past month or so with recent declines in crude prices. Included with other optimistic forecast for US shale oil was the caveat that the DUC production figures are sketchy as current information is difficult for the EIA to obtain with little specific data being provided to Washington by E&Ps or midstream operators. Given all the publicity surrounding constraints on moving oil from the Permian to market, the EIA admits that it “may overestimate production due to constraints.”

The Middle East and North Africa

Iran

Iran’s supreme leader, Ayatollah Ali Khamenei, called on state bodies to support the government of president Hassan Rouhani in fighting US economic sanctions. The likely return of US economic sanctions has triggered a rapid fall of Iran’s currency and protests by bazaar traders usually loyal Islamist rulers, and a public outcry over alleged price gouging and profiteering. The speech to member of Rouhani’s cabinet is clearly aimed at the conservative elements in the government who have been critical of the President and his policies of cooperation with the West and a call for unity in a time that seems likely to be one of great economic hardship spread to more than 80 Iranian cities and towns. At least 25 people died in the unrest, the most significant expression of public corruption, but the protest took on a rare political dimension, with growing number of people calling on supreme leader Khamenei to step down. Although there is much debate over the effectiveness of the impending US sanctions, some analysts are saying that Iran’s oil exports could fall by as much as two-thirds by the end of the year putting oil markets under massive strain amid supply outages elsewhere in the world. Some of the worst-case scenarios are forecasting a drop to only 700,000 b/d with most of Tehran’s exports going to China, and smaller chares going to India, Turkey and other buyers with waivers. China, the biggest importer of Iranian oil at 650,000 b/d according to Reuters trade flow data, is likely to ignore US sanctions.

Iraq

Iraq’s future is again in trouble as protests erupt across the country. These protests began in southern Iraq after the government was accused of doing nothing to alleviate a deepening unemployment crisis, water and electricity shortages and rampant corruption. The demonstrations are spreading to major population centers including Najaf and Amirah, and now discontent is stirring in Baghdad. The government has been quick to promise more funding and investment in the development of chronically underdeveloped cities, but this has done little to quell public anger. Iraqis have heard these promises countless times before, and with a water and energy crisis striking in the middle of scorching summer heat, people are less inclined to believe what their government says. The civil unrest had begun to diminish in southern Iraq, leaving the country’s oil sector shaken but secure-though protesters have vowed to return. Operations at several oil fields have been affected as international oil companies and service companies have temporality withdrawn staff from some areas that saw protests. The government claims that the production and exporting oil has remained steady during the protests. With Iran refusing to provide for Iraq’s electricity needs, Baghdad has now also turned to Saudi Arabia to see if its southern Arab neighbor can help alleviate the crises it faces.

Saudi Arabia

The IPO of Saudi Aramco has been touted for the past two years as the centerpiece of an ambitious economic reform program driven by Crown Prince Mohammed bin Salman to diversify the Saudi economy beyond oil. Saudi Arabia expects its crude exports to drop by roughly 100,000 b/d in August as the kingdom tries to ensure it does not push oil into the market beyond its customers' needs.

Libya

Libya reopened its eastern oil ports and started to ramp up production from 650,000 b/d to 700,000 b/d; output is expected to rise further after shipments resume at the eastern ports that reopened following a political standoff.

China

China’s economy expanded by 6.7 percent its slowest pace since 2016. The pace of annual expansion announced is still above the government’s target of “about 6.5 percent” growth for the year, but the slowdown comes as Beijing’s trade war with the US adds to headwinds from slowing domestic demand. The gross domestic product had grown at 6.8 percent in the previous three quarters. Higher oil prices play a role in the slowing of demand, but the main factor is higher taxes on independent Chinese refiners, which is already cutting into the refining margins and profits of the ‘teapots’ who have grown over the past three years to account fir around fifth of China’s total crude imports. Under the stricter tax regulations and reporting mechanisms effective 1 March, however, the teapots now can’t avoid paying a consumption tax on refined oil products sales- as they did in the past three years- and their refining operations are becoming less profitable.

Russia

Russian oil production rose by around 100,000 b/d from May. From 1-15 July the country's average oil output was 11.215 million b/d, an increase of 245,000 b/d over May's production. Amid growing speculation that President Trump will attempt to weaken US sanctions on Russia's oil sector, US congressional leaders are pushing legislation to strengthen sanctions on Russian export pipelines and on joint ventures with Russian oil and natural gas companies. Ukraine and Russia said they would hold further European Union-mediated talks on supplying Europe with Russian gas, a key first step towards renewing Ukraine's gas transit contract, which expires at the end of next year.

Venezuela

Venezuela’s Oil Minister Manuel Quevedo has been talking about plans to raise the country’s crude oil production in the second half of the year. However, no one else thinks or claims that Venezuela could soon reverse its steep production decline which has seen it losing more than 40,000 b/d of oil production every month for several months now. According to OPEC’s secondary sources in the latest Monthly Oil Market Report, Venezuela’s crude oil production dropped in June by 47,500 b/d from May, to average 1.340 million b/d in June. During a collapsing regime, widespread hunger, and medical shortages, President Nicolas Maduro continues to grant generous oil subsidies to Cuba. It is believed that Venezuela continues to supply Cuba with around 55,000 barrels of oil per day, costing the nation around $1.2 billion per year.

Alternatives to Oil

In its search for secure, sustainable and affordable supplies of energy, the world is turning its attention to unconventional energy resources. Shale gas is one of them. It has turned upside down the North-American gas markets and is making significant strides in other regions. The emergence of shale gas as a potentially major energy source can have serious strategic implications for geopolitics and the energy industry.

Uranium and Nuclear

The nuclear industry has a relatively short history: the first nuclear reactor was commissioned in 1945. Uranium is the main source of fuel for nuclear reactors. Worldwide output of uranium has recently been on the rise after a long period of declining production. Uranium resources have grown by 12.5% since 2008, and they are sufficient for over 100 years of supply based on current requirements.

Total nuclear electricity production grew during the previous two decades and reached an annual output of about 2,600 TWh by the mid-2000s, although three major nuclear accidents have slowed or even reversed its growth in some countries. The nuclear share of total global electricity production reached its peak of 17% in the late 1980s, but it has since fallen, dropping to 13.5% in 2012. In absolute terms, nuclear output remains broadly at the same level as before, but its relative share of power generation has decreased, mainly due to the Fukushima nuclear accident.
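A quick calculation makes the point about the falling share explicit: using only the figures quoted above, and treating nuclear output as roughly constant at about 2,600 TWh, the implied total world electricity generation can be backed out, showing that the total grew while nuclear output stayed roughly flat. This is a rough check, not data from the cited survey.

    # Back-of-envelope check using the figures quoted in the paragraph above.
    nuclear_output_twh = 2600.0   # approximate annual nuclear output (from the text)
    share_peak = 0.17             # peak nuclear share of global electricity
    share_2012 = 0.135            # nuclear share in 2012

    total_at_peak_share = nuclear_output_twh / share_peak    # roughly 15,300 TWh
    total_in_2012 = nuclear_output_twh / share_2012          # roughly 19,300 TWh

    print(round(total_at_peak_share), "TWh implied total at a 17% share")
    print(round(total_in_2012), "TWh implied total at a 13.5% share")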

Japan used to be one of the countries with a high share of nuclear power (around 30%) in its electricity mix and high production volumes. Today, Japan has only two of its 54 reactors in operation. The rising costs of nuclear installations and the lengthy approval times required for new construction have had an impact on the nuclear industry. The slowdown has not been global, however, as new countries, primarily the rapidly developing economies of the Middle East and Asia, are going ahead with their plans to establish a nuclear industry.

Hydro Power

Hydro power provides a significant amount of energy throughout the world and is present in more than 100 countries, contributing approximately 15% of global electricity production. The five largest markets for hydro power in terms of capacity are Brazil, Canada, China, Russia and the United States of America; China significantly exceeds the others, representing 24% of global installed capacity. In several other countries, including Iceland, Nepal and Mozambique, hydro power accounts for over 50% of all electricity generation. During 2012, an estimated 27-30 GW of new hydro power capacity and 2-3 GW of pumped storage capacity were commissioned.

In many cases, the growth in hydro power was facilitated by generous renewable energy support policies and CO2 penalties. Over the past two decades the total global installed hydro power capacity has increased by 55%, while actual generation has increased by 21%. Since the last survey, global installed hydro power capacity has increased by 8%, but the total electricity produced has dropped by 14%, mainly due to water shortages.

Solar PV

Solar energy is the most abundant energy resource and is available for use in both direct (solar radiation) and indirect (wind, biomass, hydro, ocean, etc.) forms. About 60% of the solar energy arriving at the Earth reaches its surface. Even if only 0.1% of this energy could be converted at an efficiency of 10%, it would be four times larger than the world's total electricity generating capacity of about 5,000 GW. The statistics on solar PV installations are patchy and inconsistent. The table below presents the values for 2011, but comparable values for 1993 are not available.

The use of solar energy is growing strongly around the world, partly due to rapidly declining solar panel manufacturing costs. For instance, between 2008 and 2011, PV capacity increased in the USA from 1,168 MW to 5,171 MW, and in Germany from 5,877 MW to 25,039 MW. Anticipated changes in national and regional legislation regarding support for renewables are likely to moderate this growth.
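As a rough check on how fast that growth is, the short calculation below derives the overall growth factor and the implied compound annual growth rate from the capacity figures quoted above; the capacity numbers come from the essay, and the rest is simple arithmetic.

    # Growth in installed PV capacity, 2008-2011, from the figures quoted above.
    capacity_mw = {
        "USA": (1168, 5171),
        "Germany": (5877, 25039),
    }
    years = 3  # 2008 -> 2011

    for country, (start, end) in capacity_mw.items():
        growth_factor = end / start
        cagr = growth_factor ** (1 / years) - 1  # compound annual growth rate
        print(f"{country}: x{growth_factor:.1f} overall, roughly {cagr:.0%} per year")

    # Approximate output:
    # USA: x4.4 overall, roughly 64% per year
    # Germany: x4.3 overall, roughly 62% per year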

Conclusion

The rapid consumption of fossil fuels has contributed to environmental damage. The use of these fuels, including oil, releases chemicals that contribute to smog, acid rain and mercury contamination, and carbon dioxide emissions from fossil fuel consumption are the main driver of climate change, the effects of which are likely to become more and more severe as temperatures rise. The depletion of oil and other fossil resources also leaves less available to future generations and increases the likelihood of price spikes if demand outpaces supply.

One of the most intriguing conclusions from this idea is that a new "age of abundance" could alter the behavior of oil producers. In the past, some countries (notably OPEC members) restrained output, husbanding resources for the future and betting that scarcity would increase the value of their holdings over time. However, if a peak in demand looms just over the horizon, oil producers could rush to maximize their production in order to extract as much value from their reserves as they can, while they can. Saudi oil minister Sheikh Ahmed Zaki Yamani was famously quoted as saying, "the Stone Age didn't end for lack of stone, and the oil age will end long before the world runs out of oil." This quote reflects the view that the development of new technologies will lead to a shift away from oil consumption before oil resources are fully depleted. Nine of the ten recessions between 1946 and 2005 were preceded by spikes in oil prices, and the latest recession followed the same pattern.

Extending the life of existing oil fields, let alone investing in new ones, will require large volumes of capital, but that requirement might be met with skepticism from wary investors once demand begins to peak. It will be difficult to attract investment to a shrinking industry, particularly if margins continue to be squeezed. Peak demand should be an alarming prospect for OPEC, Russia and the other major oil-producing countries; basically, any and all oil producers will find themselves fighting more aggressively for a shrinking market.

The precise date at which oil demand hits a high point and then enters decline has been the subject of much debate, and a topic that has attracted a lot of interest in just the last few years. Consumption levels in some parts of the world have already begun to stagnate, and more and more automakers have begun to ratchet up their plans for electric vehicles. But fixating on the exact date the world will hit peak demand misses the whole point: the focus should not be on when the peak arrives, but on the fact that the peak is coming. In other words, oil will become less important in fueling the global transportation system, which will have far-reaching consequences for oil producers and consumers alike. The implications of a looming peak in oil consumption are massive. Without an economic transformation, or at least serious diversification, oil-producing nations that depend on oil revenues both for economic growth and to finance public spending face an uncertain future.


Water purification and addition of nutrients as disaster relief

1. Introduction

1.1 Natural Disasters

Natural disasters are naturally occurring events that threaten human lives and cause damage to property. Examples include hurricanes, tsunamis, earthquakes, volcanic eruptions, typhoons, droughts, tropical cyclones and floods (Pask et al., 2013). They are inevitable and can often have calamitous consequences, such as water contamination and malnutrition, especially for developing countries like the Philippines, which is particularly prone to typhoons and earthquakes. (Figure 1)

Figure 1 The global distribution of natural disaster risk (The United Nations University World Risk Index 2014)

1.1.1 Impacts of Natural Disaster

Natural disasters affect human lives and economies on an astronomical scale. According to a 2014 report by the United Nations, since 1994, 4.4 billion people have been affected by disasters, which have claimed 1.3 million lives and cost US$2 trillion in economic losses. Developing countries are likely to suffer a greater impact from natural disasters than developed countries, as disasters swell the number of people living below the poverty line, increasing it by more than 50 percent in some cases. Moreover, it is expected that by 2030, up to 325 million extremely poor people will live in the 49 most hazard-prone countries (Child Fund International, 2013). Hence there is a need for disaster relief to save the lives of those affected, especially in developing countries such as the Philippines.

1.1.2 Lack of access to clean water

After a natural disaster strikes, severe problems such as water contamination occur.

Natural disasters also know no national borders or socioeconomic status (Malam, 2012). For example, Hurricane Katrina, which struck New Orleans, a developed city, destroyed 1,200 water systems, and 50% of the existing treatment plants needed rebuilding afterwards (Copeland, 2005). This left the citizens of New Orleans with a shortage of drinking water. Furthermore, after the 7.0 magnitude earthquake that struck Haiti, a developing country, in 2010, there was no plumbing left underneath Port-au-Prince, and many of the water tanks and toilets were destroyed (Valcárcel, 2010). These are just some of the many scenarios that can bring about water scarcity.

The lack of preparedness to prevent the destruction caused by a natural disaster and the lack of readiness to respond are claimed to be the two major reasons for the catastrophic results of natural disasters (Malam, 2012). Hence, the aftermath of destroyed water systems and a lack of water affects all geographical locations regardless of socioeconomic status.

1.2 Disaster relief

Disaster relief organisations such as The American Red Cross help countries that are recovering from natural disasters by providing these countries with the basic necessities.

After a disaster, the Red Cross works with community partners to provide hot meals, snacks and water to shelters, or from Red Cross emergency response vehicles in affected neighborhoods (Red Cross, n.d.).

The International Committee of the Red Cross/Red Crescent (ICRC) reported that its staff had set up mobile water treatment units. These were used to distribute water to around 28,000 people in towns along the southern and eastern coasts of the island of Samar, and to other badly-hit areas including Basey, Marabut and Guiuan (ICRC, n.d.).

Figure 2: Children seeking help after a disaster

Figure 3: Massive Coastal Destruction from Typhoon Haiyan

1.3 Target audience: Tacloban, Leyte, The Philippines

As seen in Figures 4 and 5, Tacloban is the provincial capital of Leyte, a province in the Visayas region of the Philippines. It is the most populous city in the Eastern Visayas region, with a total population of 242,089 as of August 2015 (Census of Population, 2015).

Figure 4: Location of Tacloban in the Philippines (Google Maps)

Figure 5: Location of Tacloban in the Eastern Visayas region (Google Maps)

More than 20 typhoons strike the Philippines each year (Lowe, 2016), and its location on the Pacific Ring of Fire (Figure 6) also exposes it to earthquakes and volcanic eruptions.

Figure 6: The Philippines’ position on the Pacific Ring of Fire (Mindoro Resources Ltd., 2004)

In 2013, Tacloban was struck by Super Typhoon Haiyan, known locally as 'Yolanda'. The Philippine Star, a local digital news organisation, reported more than 30,000 deaths from that disaster alone (Avila, 2014). Tacloban was left in shambles after Typhoon Haiyan and requires a great deal of aid to restore the affected area, especially with a death toll running to five figures.

1.4 Existing measures and their gaps

Initially, the government was slow to respond to the disaster. For the first three days after the typhoon hit, there was no running water, and dead bodies were found in wells. In desperation for water to drink, some people even smashed the pipes of the Leyte Metropolitan Water District. However, even when drinking water was restored, it was contaminated with coliform bacteria. Many people became ill as a result, and one baby died of diarrhoea (Dizon, 2014).

The government's long response time (Gap 1) and the contamination of the water supply even after it was restored (Gap 2) affected people's health and productivity; hence there is an urgent need for a better solution to the problem of the late restoration of clean water.

1.5 Reasons for Choice of Topic

The problem is severe, since ingestion of contaminated water is the leading cause of infant mortality and illness in children (International Action, n.d.) and more than 50% of the population is undernourished (World Food Programme, 2016). Much support and humanitarian aid has been given by organisations such as the World Food Programme and The Water Project, yet more effort is needed to lower the death rates, which shows the problem's persistence. It is also an urgent issue, as malnourishment often leads to death and children's lives are threatened.

Furthermore, 8 out of 10 of the world's cities most at risk from natural disasters are in the Philippines (see Figure _). The magnitude of the problem is therefore huge, as natural disasters strike frequently; while people are still recovering from one disaster, another hits them, worsening an already severe situation.

Figure _ Top 5 Countries of World Risk Index of Natural Disasters 2016 (Source: UN)

WWF CEO Jose Maria Lorenzo Tan said that "on-site desalination or purification" would be a cheaper and better solution to the lack of water than shipping in bottled water for a long period of time (Dizon, 2014). Producing water locally, rather than relying on external humanitarian aid that may add to the country's debt, can cushion the high expense of rebuilding. Hence, there is a need for a water purification plant that provides potable water immediately when a natural disaster strikes. The plant will also have to provide cheap, affordable water until water systems are restored to normal.

Living and growing up in Singapore, we have never experienced natural disasters first hand. We can only imagine the catastrophic destruction and suffering that accompany natural disasters. With the "Epione Solar Still" (named after Epione, the Greek goddess of the soothing of pain), we hope to help many Filipinos access clean and drinkable water, especially children, who clearly do not deserve to experience such tragedy and suffering.

1.6 Case study: Disaster relief in Japan

Located on the Pacific Ring of Fire, Japan is vulnerable to natural disasters such as earthquakes, tsunamis, volcanic eruptions, typhoons, floods and mudslides due to its geographical location and natural conditions (Japan Times, 2016).

In 2011, a 9.0-magnitude earthquake struck off Japan's northeast coast, causing a tsunami that devastated the region and killed about 19,000 people. It was the strongest earthquake recorded in Japan's history; it damaged the Fukushima plant and caused nuclear leakage, and the resulting store of contaminated water now exceeds 760,000 tonnes (The Telegraph, 2016). The earthquake and tsunami caused the nuclear power plant to fail, and radiation leaked into the ocean and escaped into the atmosphere. Many evacuees have still not returned to their homes, and, as of January 2014, the Fukushima nuclear plant still posed a threat, according to status reports by the International Atomic Energy Agency. (Natural Disasters & Pollution | Education – Seattle PI. (n.d.))

Disaster Relief

In the case of major disasters, the Japan International Cooperation Agency (JICA) deploys Japan Disaster Relief (JDR) teams, consisting of rescue, medical, expert and infectious disease response teams, as well as the Self-Defence Force (SDF), to provide relief aid to affected countries. It provides emergency relief supplies such as blankets, tents and water purifiers, some of which are stockpiled as reserves close to disaster-prone areas so that they are available when disasters strike and emergency relief is needed. (JICA)

For example, during the Kumamoto earthquake in 2016, 1,600 soldiers joined the relief and rescue efforts. Troops delivered blankets and adult diapers to those in shelters. With water service cut off in some areas, residents hauled water from local offices to their homes to flush toilets. (Japan hit by 7.3-magnitude earthquake | World news | The Guardian. (2016, April 16))

Solution to Fukushima water contamination

Facilities are used to treat the contaminated water. The main one is the Multi-nuclide Removal Facility (ALPS) (Figure _), which can remove most radioactive materials except tritium. (TEPCO, n.d)

Figure _: Structure of Multi-nuclide Removal Facility (ALPS) (TEPCO, n.d)

1.7 Impacts of Case Study

The treatment of contaminated water has been very effective: by April 2015, more than 80% of the contaminated water stored in tanks had been decontaminated, with more than 90% of radioactive materials removed in the process. (METI, 2014)

1.8 Lessons Learnt

Destruction caused by natural disasters results in a lack of access to clean and drinkable water (L1)

Advancements in water purification technology can help provide potable water for the masses. (L2)

Natural disasters weaken immune systems, making people more vulnerable to diseases (L3)

1.9 Source of inspiration

Suny Clean Water's solar still is made with cheap material alternatives, which would help provide more affordable water for underprivileged countries.

A fibre-rich paper is coated with carbon black (a cheap powder left over after the incomplete combustion of oil or tar) and layered over each section of a block of polystyrene foam that is cut into 25 equal sections. The foam floats on the untreated water, acting as an insulating barrier to prevent sunlight from heating up too much of the water below. The paper then wicks water upward, wetting the entire top surface of each section. A clear acrylic housing sits atop the styrofoam. (Figure _)

Figure _: How fibre-rich paper coated with carbon black is adapted into the solar still. (Sunlight-powered purifier could clean water for the impoverished | Science | AAAS. (2017, February 2))

It is estimated that the materials needed to build it cost roughly $1.60 per square meter, compared with $200 per square meter for commercially available systems that rely on expensive lenses to concentrate the sun’s rays to expedite evaporation.
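As a rough illustration of that cost gap, the short sketch below compares total material costs at the two per-square-metre prices quoted above; the 25 m² collection area is a purely hypothetical figure chosen for illustration.

```python
# Rough cost comparison for a solar still of an assumed size.
# The 25 m^2 area is hypothetical; the per-square-metre prices are those quoted above.

AREA_M2 = 25                 # assumed collection area (illustrative only)
PAPER_DESIGN_COST = 1.60     # USD per m^2, carbon-black paper design
LENS_DESIGN_COST = 200.00    # USD per m^2, commercial lens-based system

paper_total = AREA_M2 * PAPER_DESIGN_COST
lens_total = AREA_M2 * LENS_DESIGN_COST

print(f"Carbon-black paper design: ${paper_total:,.2f}")
print(f"Lens-based system:         ${lens_total:,.2f}")
print(f"Cost ratio: {lens_total / paper_total:.0f}x")
```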

1.10 Application of Lessons Learnt

Gaps in current measures; Learning points; Applications to project; Key features in proposal:

Row 1. Gap: Developing countries lack the technology and resources to treat their water and provide basic necessities to their people. Learning point (L2): Advanced technology can provide potable water readily. Application to project: Need for technology to purify contaminated water. Key feature in proposal: Solar distillation plant.

Row 2. Gap: Even with purification of water, the problem of malnutrition, which is worsened by natural disasters, remains unsolved. Learning point (L3): Provide vitamins to young children to boost immunity and lower vulnerability to diseases and illnesses. Application to project: Need for nutrient-rich water. Key feature in proposal: Nutrients infused into the water using the concept of osmosis.

Row 3. Gap: Even with the help of external organisations, less than 50% of households have access to safe water. Learning point (L1): Clean water is still inaccessible to some people. Application to project: Increase accessibility to water. Key feature in proposal: Evaporate seawater (abundant around the Philippines) in a solar still (short-term solution).

Figure _: Table of application of lessons learnt

2. Project Aim and Objectives

2.1 Aim

Taking into account the gaps in current measures adopted to improve water purification and to reduce water pollution and malnutrition in Ilocos Norte, our project proposes a solution to provide Filipinos with clean water through an ingenious product, the Epione Solar Still. The product makes use of a natural process (the evaporation of water) and adapts and incorporates the technology and mechanism behind the kidney dialysis machine to provide Filipinos with nutrient-enriched water without polluting their environment. The product will be located near water bodies where seawater is abundant, to act as a source of clean water for the Filipinos.

2.2 Objectives of Project

To operationalise our aim, our objectives are to:

Design “Epione Solar Still”

Conduct interviews with:

Masoud Arfand, from the Department of Mechanical Engineering, Najafabad Branch, Islamic Azad University, to determine the projected percentage of water that the Epione Solar Still can produce and the number of people it can provide for.

Qiaoqiang Gan, electrical engineer from Sunny Clean Water (his team developed the technique of coating fibre-rich paper with carbon black to make solar-still water purification faster and more cost-friendly), to determine the amount of time the Epione Solar Still needs to produce sufficient water to support Filipinos in Tacloban, Leyte, since the Epione Solar Still is a short-term disaster relief solution.

Dr Nathan Feldman, co-founder of HopeGel at EB Performance, LLC, to determine the impact of nutrient-infused water in boosting the immunity of natural disaster victims. (Project Medishare, n.d)

Review the mechanism and efficiency of using a solar still to source clean and nutrient-rich water for Filipinos.

3. Project Proposal

We propose investment in water purification as a form of disaster relief, providing Filipinos with nutrients to boost their immunity in times of disaster and limiting the number of deaths caused by the consumption of contaminated water during a crisis.

3.1 Overview of Project

Our group proposes to build a solar distillation plant (Figure _) within a safe semi-underground bunker. The bunker will contain a generator to power certain parts of the plant. Seawater will be fed into the still via underground pipes from the sea surrounding the southern part of Tacloban. The purified water produced by distillation will be infused with nutrients to boost the immunity of disaster victims once consumed. Hence, not only will our distillation plant produce potable water, the water will also be nutritious, boosting victims' immunity in times of natural calamity. The potable water will then be distributed in drums and shared among Filipinos.

Figure _: Mechanism of our solar distillation plant, Epione Solar Still

3.2 Phase 1: Water Purification System

3.2.1 Water extraction from the sea

The still is located near the sea, where seawater is abundant. Seawater is extracted from the low-flow open sea (Figure _) and then pumped into our solar still.

Figure _: Intake structure of seawater (Seven Seas Water Corporation, n.d.)

3.2.2 Purification of Seawater

Solar energy heats up the water in the solar still. The water evaporates, and condenses on the cooler glass surface of the ceiling of the still. Pure droplets of water slide down the glass and into the collecting basin, where nutrients will diffuse into the water.

Figure 6: Mechanism of Epione Solar Still
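To give a sense of scale, the sketch below estimates the daily distillate yield of a simple solar still from an energy balance. The insolation, still efficiency and collection area are assumed values for illustration only; the latent heat of vaporisation of water (about 2.26 MJ/kg) is a standard physical constant.

```python
# Energy-balance estimate of daily output from a simple solar still.
# Assumed, illustrative inputs: insolation, still efficiency, collection area.

INSOLATION_KWH_M2_DAY = 5.0    # assumed average solar insolation
STILL_EFFICIENCY = 0.35        # assumed fraction of solar energy that evaporates water
AREA_M2 = 25.0                 # assumed collection area
LATENT_HEAT_MJ_PER_KG = 2.26   # latent heat of vaporisation of water

# Convert insolation to MJ per day and apply the efficiency factor.
energy_mj_per_day = INSOLATION_KWH_M2_DAY * 3.6 * STILL_EFFICIENCY * AREA_M2
yield_litres_per_day = energy_mj_per_day / LATENT_HEAT_MJ_PER_KG  # 1 kg of water ~ 1 litre

print(f"Estimated distillate: {yield_litres_per_day:.0f} litres/day")
print(f"People supplied at 2 L/person/day: {yield_litres_per_day / 2:.0f}")
```

Under these assumptions the still yields roughly 70 litres a day, enough drinking water for a few dozen people, which is why the plant is framed as a short-term relief measure rather than a city-scale supply.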

3.3 Phase 2: Nutrient Infuser

Using the concept of reverse osmosis (Figure _), a semi-permeable membrane separates the nutrients from the newly purified water, allowing the vitamins and minerals to diffuse into the condensed water. The nutrient-infused water will provide nourishment, making the victims of natural disasters less vulnerable and susceptible to illnesses and diseases thanks to a stronger immune system. This will help the Filipinos in Tacloban, Leyte get back on their feet quickly after a natural disaster and minimise the death toll as much as possible.

Figure _: How reverse osmosis works (Water Filter System Guide, n.d.)

Nutrient / Mineral; Function; Upper Tolerable Limit (the highest amount that can be consumed without health risks):

Vitamin A: Helps to form and maintain healthy teeth, bones, soft tissue, mucous membranes and skin. Upper tolerable limit: 10,000 IU/day.

Vitamin B3 (Niacin): Helps maintain healthy skin and nerves; has cholesterol-lowering effects. Upper tolerable limit: 35 mg/day.

Vitamin C (ascorbic acid, an antioxidant): Promotes healthy teeth and gums; helps the body absorb iron and maintain healthy tissue; promotes wound healing. Upper tolerable limit: 2,000 mg/day.

Vitamin D (also known as the "sunshine vitamin", made by the body after exposure to the sun): Helps the body absorb calcium; helps maintain proper blood levels of calcium and phosphorus. Upper tolerable limit: 1,000 micrograms/day (4,000 IU).

Vitamin E (also known as tocopherol, an antioxidant): Plays a role in the formation of red blood cells. Upper tolerable limit: 1,500 IU/day.

Figure _: Table of functions and amount of nutrients that will be diffused into our Epione water. (WebMD, LLC, 2016)
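One design check implied by this table is that the nutrient concentration in the infused water must keep a person's daily dose below each upper tolerable limit. The sketch below is a rough illustration of that check: the limits come from the table above, the assumed intake of 2 litres per person per day is the figure used later in Section 3.4, and the resulting concentrations are ceilings, not recommended doses.

```python
# Maximum safe concentration of each nutrient in the infused water, derived
# from the upper tolerable limits in the table and an assumed intake of
# 2 litres of water per person per day. Units follow the table (per day).

DAILY_INTAKE_LITRES = 2.0  # assumed daily water consumption per person

upper_tolerable_limits = {
    "Vitamin A (IU/day)": 10_000,
    "Vitamin B3 (mg/day)": 35,
    "Vitamin C (mg/day)": 2_000,
    "Vitamin D (IU/day)": 4_000,
    "Vitamin E (IU/day)": 1_500,
}

for nutrient, limit in upper_tolerable_limits.items():
    max_concentration = limit / DAILY_INTAKE_LITRES  # ceiling per litre of water
    print(f"{nutrient}: at most {max_concentration:g} per litre")
```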

3.4 Phase 3: Distribution of water to households in Tacloban, Leyte

Potable water will be collected into drums (Figure _) of 100 litres capacity each; one drum would suffice for 50 people for a day, since the average water intake is 2 litres per person per day. These drums will then be distributed to the tent cities in Tacloban, Leyte, our targeted area, should a natural disaster strike. Locals will thus have potable water within reach, which is crucial for their survival in times of natural calamity.

Figure _: Rain barrels will be used to store the purified and nutrient-infused water (Your Easy Garden, n.d.)

3.5 Stakeholders

3.5.1 The HopeGel Project

HopeGel is a nutrient- and calorie-dense protein gel designed to aid children suffering from malnutrition caused by severe food insecurity brought on by droughts (Glenroy Inc., 2014). HopeGel has been distributed in Haiti, where malnutrition is the number one cause of death among children under five, mainly due to the high frequency of natural disasters that have caused much destruction to the now impoverished state of Haiti (Figure _). Implementing the Epione Solar Still with this organisation helps it achieve its objective of addressing the global issue of severe acute malnutrition in children, as most victims of natural disasters lack the nourishment they need (HopeGel, n.d.)

Figure _: HopeGel, a packaged nutrient and calorie-dense protein gel (Butschli, HopeGel, n.d.)

3.5.2 Action Against Hunger (AAH)

Action Against Hunger is a relief organisation that develops and carries out programmes for countries in need in the areas of nutrition, health, water and food security (Action Against Hunger, n.d.) (Figure _). AAH also runs disaster-preparedness programmes that aim to anticipate and prevent humanitarian crises (GlobalCorps, n.d.). With 40 years of expertise, having helped 14.9 million people across more than 45 countries, AAH is no stranger to humanitarian crises. Implementing the Epione Solar Still helps AAH achieve its aim of saving lives by purifying seawater and infusing it with nutrients for Filipinos in Tacloban, Leyte who are deprived of a basic need because of disaster-related water contamination.

Figure _: Aims and Missions of Action Against Hunger (AACH, n.d.)


Analyse the use of ICTs in a humanitarian emergency

INTRODUCTION

The intention of this essay is to analyse the use of ICTs in a humanitarian emergency. The specific case study discussed in this essay is 'Multi-level functionality of social media in the aftermath of the Great East Japan Earthquake', written by Jung, J., and Moro, M. (2014). That report emphasises that social media networks such as Twitter and Facebook can be used to spread and gather important information in emergency situations rather than being used solely as social platforms. ICTs have changed the way humans gather information during disasters, and social media, especially Twitter, became an important source of information in these disasters.

Literature Review

Case studies of the use of ICTs in a humanitarian emergency can take either a technically rational perspective or a socially embedded perspective. A technically rational perspective concerns what to do and how to achieve the given purpose; it is a prescription for design and action. A socially embedded perspective focuses on the particular case, where the process of work is affected by culture, place and human nature. In this article, we examine different humanitarian disaster cases in which ICTs played a vital role to see whether the authors adopt a technically rational or a socially embedded perspective.

In the article 'Learning from crisis: Lessons in human and information infrastructure from the World Trade Centre response' (Dawes, Cresswell et al. 2004), the authors adopt a technical/rational perspective. 9/11 was a very big incident and no one was ready to deal with an attack of this size, but as soon as it happened procedures started changing rapidly. Government, NGOs and disaster response units started learning and produced new prescriptions that can be used universally and in a disaster of any size. For example, the main communication infrastructure, supplied by Verizon, was damaged; although different communication suppliers were providing services, they all relied on the physical infrastructure supplied by Verizon, so VOIP was used for communication between government officials and in the EOC building. There were three main areas where problems were found and new procedures were adopted in response to the disaster: technology, information, and the inter-layered relationships between the NGOs, government and the private sector (Dawes, Cresswell et al. 2004).

In the article 'Challenges in humanitarian information management and exchange: Evidence from Haiti' (Altay, Labonte 2014), the authors adopt a socially embedded perspective. The Haiti earthquake was one of the biggest disasters, killing 500,000 people and displacing at least 2 million. Around 2,000 organisations went in to help, but there was no coordination between NGOs and the government for the humanitarian response. Organisations did not consider local knowledge; they assumed that no data was available. All the organisations had different standards and ways of working, so no one followed a common prescription. The technical side of HIME (humanitarian information management and exchange) was not working because the members of the humanitarian relief effort were not sharing humanitarian information (Altay, Labonte 2014).

In the article 'Information systems innovation in the humanitarian sector', Information Technologies and International Development (Tusiime, Byrne 2011), the authors adopt a socially embedded perspective. Local staff were hired who had no prior experience or knowledge of working with such technology, which slowed down the process of implementing the new technology. Staff wanted to learn and use the new system, but the changes were made at such a pace that staff became overworked and stressed, which made them lose interest in the innovation. Management decided to use COMPAS as the new system without realising that it was not completely functional and still had many issues, but went ahead with it anyway. When staff started using it, found the problems, and were not given enough technical support, they had no choice but to go back to the old way of doing things (Tusiime, Byrne 2011). The whole process was affected by how work is done in that specific area and by people's behaviour.

In the article 'Multi-level functionality of social media in the aftermath of the Great East Japan Earthquake' (Jung, Moro 2014), the authors adopt a technically rational perspective. In any future humanitarian disaster situation, social media can be used as an effective means of communication in conjunction with mass media. After the disaster, Twitter was used more as a means of spreading and gathering information than as a social media platform.

In the article 'Information flow impediments in disaster relief supply chains', Journal of the Association for Information Systems, 10(8), pp. 637-660 (Day, Junglas et al. 2009), the authors propose the development of IS for information sharing, based on Hurricane Katrina. They adopt a technically rational perspective because the development of IS for information flow within and outside an organisation is essential. Such an IS would help to manage complex supply chains. Supply chain management in a disaster situation is challenging compared with traditional supply chain management, and a supply chain management IS should be able to cater for all types of dynamic information, suggest Day, Junglas and Silva (2009).

Case study Description:

On 11 March 2011, an earthquake of magnitude 9.0 hit the north-eastern part of Japan and was followed by a tsunami. Thousands of people lost their lives and the infrastructure in that area was completely destroyed (Jung, Moro 2014). The tsunami wiped two towns off the map and the coastal maps had to be redrawn (Acar, Muraki 2011). On the same day, the cooling system in nuclear reactor no. 1 at Fukushima failed, and because of that nuclear accident the Japanese government declared a nuclear emergency. On the evening of the earthquake, the government issued an evacuation order for a 3 km area around the reactor (Jung, Moro 2014). On March 12 a hydrogen explosion occurred in the reactor because of the failed cooling system, followed by another explosion two days later on March 14. The evacuation zone was initially 3 km but was increased to 20 km to avoid exposure to nuclear radiation.

This was one of the biggest nuclear disasters in the country's history, so it was hard for the government to assess its scale. Government officials had not faced this kind of situation before and could not estimate the damage caused by the incident, and their unreliable information added to the public's confusion. They initially declared the accident level as 5 on the international nuclear scale but later changed it to 7, the highest level on that scale. Media reporting also confused the public, and the combination of contradictory information from government and media increased the level of confusion.

In a disaster, mass media is normally the main source of information: broadcasters usually suspend normal programming and devote most of their airtime to the disaster to keep people updated. Mass media usually provides very reliable information in humanitarian disasters, but in the case of the Japan disaster media outlets contradicted each other; international media contradicted local media as well as local government, so people began losing faith in the mass media and started relying on other sources of information. A second reason was that mass media was the traditional way of gathering information, and with changes in technology people had started using mobile phones and the internet. A third reason people looked elsewhere for information was that the mass media infrastructure was damaged and many people could not access television services, so they turned to video streaming sites such as Ustream and YouTube. People began using Twitter on a large scale to spread and gather news: the number of Twitter users increased by 30 percent within the first week of the disaster, and 60 percent of Twitter users found it useful for gathering or spreading information.

Case Study Analysis:

Twitter is a social media platform and micro-blogging website on which a single tweet can contain up to 140 characters. It differs from other social media platforms in that anyone can follow you without needing your authorisation. Only registered members can tweet, but registration is not required to read messages. The authors of 'Multi-level functionality of social media in the aftermath of the Great East Japan Earthquake' (Jung, Moro 2014) discuss five functionalities of Twitter with the help of a conceptual model of multi-level social media. The following figure describes the five-function model of Twitter very clearly.

Fig No 1 Source: (Jung, Moro 2014)

The five functionalities were derived from a survey and a review of selected Twitter timelines.

The first function was tweets between individuals, also known as interpersonal communication. This is the micro level of the conceptual model: people inside and outside the country connected with others in the affected area. Most of these tweets were to check that people were safe after the disaster, to inform loved ones that the sender was in the affected area and needed help, or to let people know the sender was safe. In the first three days a high percentage of tweets came through this micro-level communication channel.

The second function was a communication channel for local organisations, local government and local media; this is the meso level of the conceptual model. In this channel, local governments opened new accounts and reactivated accounts that had not been used for a while to keep their residents informed, and the number of followers of these accounts grew very quickly. People understood the importance and benefits of social media after the disaster: even when infrastructure was damaged and there were electricity cuts, they were still able to get information about the disaster and tsunami warnings. Local government and local media used Twitter accounts to issue alerts and news; for example, the tsunami alert was issued on Twitter, and after the tsunami the damage reports were released on Twitter. Local media opened new Twitter channels and kept people informed about the situation. Other organisations, such as the embassies of different countries, used Twitter to keep their nationals informed about the disaster, and this was the best way for embassies to communicate with their nationals. Nationals could even let their embassy know that they were stuck in the affected area and needed help, since they could be in a very vulnerable situation outside their own country.

The third function was communication by the mass media, known as the macro level. Mass media used social platforms to broadcast their news because infrastructure was damaged and people in affected areas could not access their broadcasts. Some people outside the country could not access local television news either, so they watched news on video streaming websites; as demand increased, most mass media outlets opened accounts on social media to meet it. They began broadcasting their news on video streaming websites such as YouTube and Ustream. Mass media also posted news updates several times a day on Twitter, and many readers retweeted them, so information spread very quickly.

The fourth function was information sharing and gathering, known as the cross level. Individuals used social media to get information about the earthquake, tsunami and nuclear accident. When someone searched for information they came across tweets from the micro, meso and macro levels. This level is of great use when looking for help or wanting to know what other people in the same situation would do. Research on the Twitter timelines shows that on the day of the earthquake people were tweeting about available shelters and transport information (Jung, Moro 2014).

The fifth function was direct channels between individuals and the mass media, government and the public; this is also considered a cross level. At this level, individuals could inform the government and mass media about the situation in affected areas, because the disaster left some places that government and mass media could not reach, so the situation there was unknown. The mayor of Minami-soma city, 25 miles from Fukushima, used YouTube to tell the government about the threat of radiation to his city; the video went viral and the Japanese government came under international pressure to evacuate the city (Jung, Moro 2014).

Reflection:

There was a gradual shift towards using social media as a communication tool rather than merely a social platform in the event of a disaster. Its multi-level functionality is one of the important characteristics that connects it well with existing media. This amounts to a complete prescription that can be used during and after any kind of disaster: social media can be combined with other media as an effective communication method to prepare for emergencies in any future disaster situation.

Twitter played a big role in communication during the disaster in Japan. It was used to spread and gather information about the earthquake, tsunami and nuclear reactor accident, to request help, to issue warnings, and to express condolences. Twitter has many benefits, but it also has drawbacks that need to be rectified. The biggest issue with tweets is unreliability: anyone can tweet any information and there is no check on it; only the person who tweets is responsible for its accuracy. There is no control over false information, and it spreads so fast that contradictory information can create anxiety. For example, if false information about the range of radiation had been released by an individual and retweeted by others with no knowledge of the effects of radiation and nuclear accidents, it could have caused panic. In a disaster it is very important that reliable and correct information is released.

Information systems can play a vital role in humanitarian disasters in all aspects. They can be used for better communication and to improve the efficiency and accountability of an organisation. Data becomes widely available within the organisation, enabling monitoring of finances, and helps coordinate different operations such as transport, supply chain management, logistics, finance and monitoring.

Social media has played a significant role in communicating, disseminating and storing data related to disasters. There is a need to control the information spread over social media, since not all of it is authentic or verified.

IS-based tools need to be developed for disaster management in order to get the best results from the varied range of data extracted from social media and to take the necessary action for the wellbeing of people in the disaster area.

The outcome of using a purpose-built IS will support decisions in developing a strategy to deal with the situation. A disaster management team will be able to analyse the data in order to train the team for a disaster situation.


Renewable energy in the UK

The 2014 IPCC report stated that anthropogenic emissions of greenhouse gases have led to unprecedented levels of carbon dioxide, methane and nitrous oxide in the environment. The report also stated that the effect of greenhouse gases is extremely likely to have caused the global warming we have witnessed since the 20th century.

The 2018 IPCC report set new targets, aiming to limit climate change to a maximum of 1.5°C. To reach this, we will need zero CO₂ emissions by the year 2050. Previous IPCC targets of 2°C change allowed us until roughly 2070 to reach zero emissions. This means government policies will have to be reassessed and current progress reviewed in order to confirm whether or not the UK is capable of reaching zero emissions by 2050 on our current plan.

Electricity Generation

Fossil fuels are natural fuels formed from the remains of prehistoric plant and animal life. Fossil fuels (coal, oil and gas) are crucial in any look at climate change as when burned they release both carbon dioxide (a greenhouse gas) and energy. Hence, in order to reach the IPCC targets the UK needs to drastically reduce its usage of fossil fuels, either through improving efficiency or by using other methods of energy generation.

Whilst coal is a cheap energy source used to generate approximately 40% of the world's electricity, it is arguably the most damaging to the environment, as coal releases more carbon dioxide into the atmosphere relative to the energy produced than any other fuel source. Coal power stations generate electricity by burning coal in a combustion chamber and using the heat energy to turn water into steam, which turns the propeller-like blades of a turbine. A generator (consisting of tightly wound metal coils) is mounted at one end of the turbine and, when rotated at high velocity through a magnetic field, generates electricity. However, the UK has pledged to fully eradicate the use of coal in electricity generation by 2025. These claims are well substantiated by the UK's rapid decline in coal use: in 2015 coal accounted for 22% of electricity generated in the UK, this was down to only 2% by the second quarter of 2017, and in April 2018 the UK even managed to go 72 hours without coal power.

Natural gas became a staple of British electrical generation in the 1990s, when the Conservative Party got into power and privatised the electrical supply industry. The “Dash for gas” was triggered by legal changes within the UK and EU allowing for greater freedom to use gas in electricity generation.

Whilst natural gas emits less CO₂ than coal, it emits far more methane. Methane doesn’t remain in the atmosphere as long but it traps heat to a far greater extent. According to the World Energy Council methane emissions trap 25 times more heat than CO₂ over a 100 year timeframe.

Natural gas produces electrical energy in a gas turbine. The gas is mixed with hot air and burned in a combustor. The hot gas then pushes the turbine blades and, as in a coal plant, the turbine is attached to a generator, creating electricity. Gas turbines are hugely popular as they are a cheap source of energy generation and can quickly be powered up to respond to surges in electrical demand.

Combined Cycle Gas Turbines (CCGT) are an even better source of electrical generation. Whilst traditional gas turbines are cheap and fast-reacting, they only have an efficiency of approximately 30%. Combined cycle turbines, however, are gas turbines used in combination with steam turbines giving an efficiency of between 50 and 60%. The hot exhaust from the gas turbine is used to create steam which rotates turbine blades and a generator in a steam turbine. This allows for greater thermal efficiency.
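The efficiency gain of a combined cycle can be sketched with a simple relation: if the gas turbine converts a fraction eta_gas of the fuel energy and the steam cycle recovers a fraction eta_steam of the remaining exhaust heat, the overall efficiency is roughly eta_gas + (1 - eta_gas) * eta_steam. The figures below are illustrative assumptions, not measured plant data.

```python
# Rough combined-cycle efficiency estimate.
# eta_gas and eta_steam are assumed, illustrative values; the formula treats
# the steam cycle as recovering heat only from the gas turbine exhaust.

eta_gas = 0.35    # assumed simple-cycle gas turbine efficiency
eta_steam = 0.35  # assumed steam-cycle efficiency applied to the exhaust heat

eta_combined = eta_gas + (1 - eta_gas) * eta_steam
print(f"Combined-cycle efficiency: {eta_combined:.0%}")  # ~58%, within the 50-60% range quoted
```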

Nuclear energy is a potential way forward as no CO₂ is emitted by nuclear power plants. Nuclear plants aim to capture the energy released by atoms undergoing nuclear fission. In nuclear fission, a nucleus absorbs a neutron on collision, making it unstable. The unstable nucleus then splits into fission products of smaller mass and emits two or three high-speed neutrons, which can collide with more nuclei, making them unstable and thus creating a chain reaction. The heat energy produced by splitting the atom is used to produce steam, which drives a turbine generator to produce electricity.

Currently, 21% of electricity generated in the UK comes from nuclear energy. In the 1990s, 25% of electricity came from nuclear energy, but old plants have gradually been retired. By 2025, UK nuclear power output could halve. This is due to a multitude of reasons. Firstly, nuclear fuel is expensive in comparison to gas and coal. Secondly, nuclear waste is extremely radioactive and so must be dealt with properly. Also, in light of tragedies such as Chernobyl and Fukushima, much of the British public has expressed concerns about nuclear energy, with the Scottish government refusing to open more plants.

In order to lower our CO₂ emissions it is crucial we also utilise renewable energy. The UK currently gets very little of its energy from renewable sources but almost all future plans place a huge emphasis on renewables.

The UK has great wind energy potential as the nation is the windiest country in the EU with 40% of the total wind that blows across the EU.

Wind turbines are straightforward machinery: the wind turns the turbine blades around a rotor connected to the main shaft, which spins a generator, creating electricity. In 2017, onshore wind generated enough energy to power 7.25 million homes a year and produced 9% of the UK's electricity. However, despite the clear benefits of clean, renewable energy, wind energy is not without its problems. Firstly, it is an intermittent supply: the turbine will not generate energy when there is no wind. Also, it has been opposed by members of the public for affecting the look of the countryside and for bird fatalities. These problems are magnified by the current Conservative government's stance on wind energy, which is to limit onshore wind farm development despite public opposition to this "ban".

Heating and Transport

Currently it is estimated that a third of carbon dioxide (CO2) emissions in the UK come from the heating sector. 50% of all heat emissions in the UK are for domestic use, making it the main source of CO2 emissions in the heating sector. Around 98% of domestic heating is used for space and water heating. The government has sought to reduce the emissions from domestic heating by issuing a series of regulations on new boilers. Regulations state that, as of 1st April 2005, all new installations and replacements of boilers are required to be condensing boilers. As well as producing much lower CO2 emissions, condensing boilers are around 15-30% more efficient than older gas boilers. Reducing heat demand has also been an approach taken to reduce emissions. For instance, building standards in the UK have set higher levels of required thermal insulation for both domestic and non-domestic buildings when refurbishing and carrying out new projects. These policies are key to ensuring that both homes and industrial buildings are as efficient as possible at conserving heat.

Although progress is being made in terms of improving current CO2-reducing systems, the potential for significant CO2 reductions relies upon low carbon technologies. Highly efficient technologies such as the residential heat pump and biomass boilers have the potential to be carbon-neutral sources of heat and in doing so could massively reduce CO2 emissions for domestic use. However, finding the best route to a decarbonised future in the heating industry relies upon more than just which technology has the lowest carbon footprint. For instance, intermittent technologies such as solar thermal collectors cannot provide a sufficient level of heat in the winter and require a back-up source of heat, making them a less desirable option. Cost is also a major factor in consumer preference: for most consumers, a boiler is the cheapest option for heating. This poses a problem for low carbon technologies, which tend to have significantly higher upfront costs. In response to the cost associated with these technologies, the government has introduced policies such as the 'Renewable Heat Incentive', which aims to alleviate the expense by paying consumers for each unit of heat produced by low carbon technologies. Around 30% of the heating sector is allocated to industrial use, making it the second largest cause of CO2 in this sector. Currently, combined heat and power (CHP) is the main process used to make industrial heat use more efficient and has shown CO2 reductions of up to 30%. Although this is a substantial reduction, alternative technology has the potential to deliver even higher reductions: for example, carbon capture and storage (CCS) has the potential to reduce CO2 emissions by up to 90%. However, CCS is a complex procedure that would require a substantial amount of funding and as a result is not currently implemented for industrial use in the UK.

Although heating is a significant contributor to CO2 emissions in the UK, much progress is also needed elsewhere. In 2017 it was estimated that 34% of all carbon dioxide (CO2) emissions in the UK were caused by transport, widely thought to be the sector in which the least progress is being made, having seen only a 2% reduction in CO2 emissions since 1990. Road transport contributes the highest proportion of emissions, more specifically petrol and diesel cars. Despite the average CO2 emissions of new vehicles declining, the carbon footprint of the transport industry continues to increase due to the larger number of vehicles in the UK.

In terms of progress, CO2 emissions of new cars in 2017 were estimated to be 33.1% lower than in the early 2000s. Although efficiencies are improving, more must be done if we are to conform to the targets set by the Climate Change Act 2008. A combination of decarbonising transport and implementing government legislation is vital to meet these demands. New technology such as battery electric vehicles (BEVs) has the potential to deliver significant reductions in the transport industry. As a result, a report from the Committee on Climate Change suggests that 60% of all sales of new cars and vans should be ultra-low emission by 2030. However, the likelihood of achieving this is hindered by the constraints of new technologies: low emission vehicles tend to have significantly higher costs and suffer from low consumer awareness. This reinforces the need for government support in promoting new technologies and cleaner fuels. To support the development and uptake of low carbon vehicles, the government has committed £32 million to funding charging infrastructure for BEVs from 2015-2020, and a further £140 million has been allocated to the 'low carbon vehicle innovation platform', which strives to advance the development and research of low emission vehicles. Progress has also been made to make these vehicles more cost competitive by exempting them from taxes such as Vehicle Excise Duty and providing incentives such as plug-in grants of up to £3,500. Aside from passenger cars, improvements are also being made to emissions from public transport: the average low emission bus in London could reduce CO2 emissions by up to 26 tonnes per year, which has earned such buses support in England through the government's 'Green Bus Fund'.

Conclusion

In 2017, renewables accounted for a record 29.3% of the UK's energy generation. This is a vast improvement on previous years and suggests the UK is on track to meet the new IPCC targets, although a lot of work still needs to be done. Government policies do need to be reassessed in light of the new targets, however. Scotland should reassess its nuclear policy, as nuclear power might be a necessary stepping stone to reduced emissions until renewables are able to fully power the nation, and the UK government needs to reassess its allocation of funding, as investment in clean energy is currently on a downward trajectory.

Although progress has been made to reduce CO2 emissions in the heating and transport sectors, emissions throughout the UK remain much higher than desired. The Committee on Climate Change's report to Parliament (2015) calls for the widespread electrification of heating and transport by 2030 to help prevent a 1.5 degree rise in global temperature. This is likely to pose a major challenge and will require a significant increase in electricity generation capacity in conjunction with greater policy intervention to encourage the uptake of low carbon technologies. Although it is unlikely that all consumers will switch to alternative technologies, if the government continues to tighten regulations on fossil-fuelled technologies while the heating and transport industries continue to develop old and new systems to become more efficient, we should see significant CO2 reductions in the future.


Is Nuclear Power a viable source of energy?

6th Form Economics project:

Nuclear power, the energy of the future of the 1950s, is now starting to feel like the past. Around 450 nuclear reactors worldwide currently generate 11% of the world's electricity, or approximately 2500 TWh in a year, just under the total nuclear power generated globally in 2001 and only 500 TWh more than in 1991. The number of operating reactors worldwide has seen the same stagnation, with an increase of only 31 since 1989, or annual growth of only 0.23% compared to 12.9% from 1959 to 1989. Most reactors, especially in Europe and North America, were built before the 90s, and the average age of reactors worldwide is just over 28 years. Large scale nuclear accidents such as Chernobyl in 1986 or, much more recently, Fukushima in 2011 have negatively impacted public support for nuclear power and helped cause this decline, but the weight of evidence has increasingly suggested that nuclear is safer than most other energy sources and has an incredibly low carbon footprint, causing the argument against nuclear to shift from concerns about safety and the environment to questions about the economic viability of nuclear power. The crucial question that remains is therefore how well nuclear power can compete against renewables to produce the low carbon energy required to tackle global warming.

The costs of most renewable energy sources have been falling rapidly, making them increasingly able to outcompete nuclear power as a low carbon option, and even fossil fuels in some places; photovoltaic panels, for example, halved in price from 2008 to 2014. Worse still for nuclear power, it seems that while the costs of renewable energy have been falling, plans for new nuclear plants have been plagued with delays and additional costs: in the UK, Hinkley Point C power station is set to cost £20.3bn, making it the world's most expensive power station, and significant issues in the design have raised questions as to whether the plant will be completed by 2025, its current goal. In France, the Flamanville 3 reactor is now predicted to cost three times its original budget, and several delays have pushed the start-up date, originally set for 2012, to 2020. The story is the same in the US, where delays and extra costs have plagued the construction of the Vogtle 3 and 4 reactors, which are now due to be complete by 2020-21, 4 years over their original target. Nuclear power seemingly cannot deliver the cheap, carbon-free energy it promised and is being outperformed by renewable energy sources such as solar and wind.

The crucial and recurring issue with nuclear power is that it requires huge upfront costs, especially when plants are built individually, and can only provide revenue years after the start of construction. This means that investment in nuclear is risky, long term and cannot be done well on a small scale, though new technologies such as SMRs (Small Modular Reactors) may change this in the coming decades, making it a much bigger gamble. Improvements in other technologies over the time a nuclear plant takes to build mean that it is often better for private firms, who are less likely to be able to afford large scale programmes enabling significant cost reductions or a lower debt-to-equity ratio in their capital structure, to invest in more easily scalable and shorter-term energy sources, especially with subsidies favouring renewables in many developed countries. All of this points to the fundamental flaw of nuclear: that it requires going all the way. Small scale nuclear programmes that are funded mostly with debt, that have high discount rates and low capacity factors as they are switched off frequently, will invariably have a very high Levelised Cost of Energy (LCOE), as nuclear is so capital intensive.

That said, the reverse is true as well. Nuclear plants have very low operating costs, almost no external costs, and the cost of decommissioning a plant is only a small portion of the initial capital cost, even with a low discount rate such as 3%, due to the long lifespan of a nuclear plant and the fact that many can be extended. Operating costs include fuel costs, which are extremely low for nuclear, at only 0.0049 USD per kWh, and non-fuel operation and maintenance costs, which are barely higher at 0.0137 USD per kWh. This includes waste disposal, a frequently cited political issue that has not been a real technical obstacle for decades, as waste can be reused relatively well and stored safely on site at very low cost, simply because the quantity of fuel used, and therefore of waste produced, is so small. The fuel, uranium, is abundant, and technology enabling uranium to be extracted from seawater would give access to a 60,000-year supply at present rates of consumption, so costs from 'resource depletion' are also small. Finally, external costs represent a very small proportion of running costs: the highest estimates for health costs and potential accidents are 5€/MWh and 4€/MWh respectively, though some estimates fall to only 0.3€/MWh for potential accidents when past records are adjusted to factor in improvements in safety standards; these estimates vary significantly because the total number of reactors is very small.

Nuclear power therefore remains one of the cheapest ways to produce electricity in the right circumstances, and many LCOE (Levelised Cost of Energy) estimates, which are designed to factor in all costs over the lifetime of a unit to give a more accurate representation of the costs of different types of energy (though they usually omit system costs), point to nuclear as a cheaper energy source than almost all renewables and most fossil fuels at low discount rates.
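The standard LCOE calculation divides the discounted sum of lifetime costs by the discounted sum of electricity generated. The sketch below implements that definition with hypothetical inputs (capital cost per kW, build time, lifespan, capacity factor and discount rates are assumptions chosen for illustration; only the 0.0049 and 0.0137 USD/kWh fuel and O&M figures echo those quoted above) simply to show how heavy upfront capital and the discount rate dominate the result for a capital-intensive plant.

```python
# Levelised Cost of Energy (LCOE): discounted lifetime costs divided by
# discounted lifetime generation. All plant parameters below are hypothetical,
# for illustration of the method only.

def lcoe(capital_per_kw, build_years, life_years, capacity_factor,
         fuel_per_kwh, om_per_kwh, discount_rate):
    hours_per_year = 8760
    costs = 0.0
    energy = 0.0
    # Spread the capital cost evenly over the construction years (discounted).
    for year in range(build_years):
        costs += (capital_per_kw / build_years) / (1 + discount_rate) ** year
    # Operating years: fuel + O&M costs and electricity produced, both discounted.
    for year in range(build_years, build_years + life_years):
        output_kwh = hours_per_year * capacity_factor  # per kW of capacity
        costs += output_kwh * (fuel_per_kwh + om_per_kwh) / (1 + discount_rate) ** year
        energy += output_kwh / (1 + discount_rate) ** year
    return costs / energy  # USD per kWh

# Hypothetical nuclear-like plant: high capital, long build, low running costs.
print(f"{lcoe(5000, 7, 60, 0.9, 0.0049, 0.0137, 0.07):.3f} USD/kWh at a 7% discount rate")
print(f"{lcoe(5000, 7, 60, 0.9, 0.0049, 0.0137, 0.03):.3f} USD/kWh at a 3% discount rate")
```

Running the same hypothetical plant at the two discount rates roughly halves the result at 3% compared with 7%, which illustrates why the choice of discount rate is so decisive for nuclear in LCOE comparisons.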

LCOE costs taken from 'Projected Costs of Generating Electricity 2015 Edition' and system costs taken from 'Nuclear Energy and Renewables' (NEA, 2012) have been combined by the World Nuclear Association to give LCOE figures for four countries, comparing the costs of nuclear to other energy sources. A discount rate of 7% is used, the study applies a $30/t CO2 price on fossil fuel use, and 2013 US$ values and exchange rates are used. It is important to bear in mind that LCOE estimates vary widely, as many assume different circumstances and they are very difficult to calculate, but it is clear from the graph that nuclear power is still more than viable, being the cheapest source in three of the four countries and the third cheapest in the fourth, behind onshore wind and gas.


Decision making during the Fukushima disaster

Introduction

On March 11, 2011, a tsunami struck the east coast of Japan, resulting in a disaster at the Fukushima Daiichi nuclear power plant. On the day the natural disaster struck and in the days that followed, many decisions were made with regard to managing the crisis. This paper will examine the decisions made during the crisis. The Governmental Politics Model, designed by Allison and Zelikow (1999), will be adopted to analyse the events. The research question of this paper is therefore: to what extent does the Governmental Politics Model explain the decisions made during the Fukushima disaster?

First, this paper will lay the theoretical basis for the analysis: the Governmental Politics Model and all crucial concepts within it are discussed. A description of the Fukushima case will follow; since the reader is expected to already have general knowledge of the Fukushima nuclear disaster, the case description will be brief. Together, the theoretical framework and case study lay the basis for the analysis, which will look into the decisions government and Tokyo Electric Power Company (TEPCO) officials made during the crisis.

Theory

Allison and Zelikow designed three theories to understand the outcomes of bureaucracies and decision making in the aftermath of the Cuban Missile Crisis in 1962. The first theory to be designed was the Rational Actor Model. This model focusses on the ‘logic of consequences’ and has a basic assumption of rational actions of a unitary actor. The second theory designed by Allison and Zelikow is the Organizational Behavioural Model. This model focusses on the ‘logic of appropriateness’ and has a main assumption of loosely connected allied organizations (Broekema, 2019).

The third model devised by Allison and Zelikow is the Governmental Politics Model (GPM). This model reviews the importance of power in decision-making. According to the GPM, decision making has little to do with rational, unitary actors or organizational output and everything to do with a bargaining game. This means that governments make decisions in other ways; according to the GPM there are four aspects to this: the choices of one, the results of minor games, the results of central games, and foul-ups (Allison & Zelikow, 1999).

The following concepts are essential in the GPM. First, it is important to note that power in government is shared: different institutions have independent bases and, therefore, power is shared. Second, persuasion is an important factor in the GPM; the power to persuade differentiates power from authority. Third, bargaining according to the process is identified, meaning there is a structure to the bargaining processes. Fourth, 'power equals impact on outcome' is mentioned in Essence of Decision: there is a difference between what can be done and what is actually done, and what is actually done has to do with the power involved in the process. Lastly, intranational and international relations are of great importance to the GPM; these relations are intertwined and involve a vast set of international and domestic actors (Allison & Zelikow, 1999).

Not only the five previous concepts are relevant to the GPM. The GPM is inherently based on group decisions, and in this type of decision making Allison and Zelikow identify seven factors. The first factor is a positive one: group decisions, when certain requirements are met, produce better decisions. Second, the agency problem is identified; this problem includes information asymmetry and the fact that actors compete over different goals. Third, it is important to identify the actors in the 'game', which means finding out who participates in the bargaining process. Fourth, problems with different types of decisions are outlined. Fifth, framing issues and agenda setting are important factors in the GPM. Sixth, group decisions are not necessarily positive; they can easily lead to groupthink, a negative consequence in which no other opinions are considered. Last, the difficulties of collective action are outlined by Allison and Zelikow; these have to do with the fact that the GPM does not consider unitary actors but different organizations (Allison & Zelikow, 1999).

Besides the concepts mentioned above, the GPM consists of a concise paradigm. This paradigm is essential for the analysis of the Fukushima case and consists of six main points. The first is that decisions are the result of politics; this is the GPM and once again stresses that decisions are the result of bargaining. Second, as said before, it is important to identify the players of the political 'game', their preferences and goals, and the kind of impact they can have on the final decision. Once this is analysed, one has to look at the actual game that is played: the action channels and rules of the game can be determined. Third, the 'dominant inference pattern' once again goes back to the fact that decisions are the result of bargaining, but this point makes clear that differences and misunderstandings have to be taken into account. Fourth, Allison and Zelikow identify 'general propositions'; this term includes all the concepts examined in the second paragraph of the theory section of this paper. Fifth, specific propositions are considered; these relate to decisions on the use of force and military action. Last is the importance of evidence: when examining crisis decision making, documented timelines and, for example, minutes or other accounts are of great importance (Allison & Zelikow, 1999).

Case

In the definition of Prins and Van den Berg (2018), the Fukushima Daiichi disaster can be regarded as a safety case, because it was an unintentional event that caused harm to humans.

The crisis was initiated by an earthquake of 9.0 on the Richter scale, which was followed by a tsunami whose waves reached a height of 10 meters. Due to the earthquake, all external power lines, which are needed for cooling the fuel rods, were disconnected. Countermeasures for this issue were in place; however, the sea walls were unable to protect the nuclear plant from flooding, which made the backup countermeasures, the diesel generators, inadequate (Kushida, 2016).

Due to the lack of electricity, the nuclear fuel rods were not cooled, and therefore a 'race for electricity' started. Eventually the essential decision to inject seawater was made. Moreover, the situation inside the reactors was unknown; meltdowns in reactors 1 and 2 had already occurred. Because of explosion risks, the decision to vent the reactors was made. However, hydrogen explosions materialized in reactors 1, 2 and 4, which in turn exposed the environment to radiation. To counter the spread of radiation, the decision to inject seawater into the reactors was made (Kushida, 2016).

Analysis

This analysis will look into the decision or decisions to inject seawater into the damaged reactors. First, a timeline of the decisions will be outlined to build on the case study above. Then the events and decisions made will be set against the GPM paradigm, using the six main points described in the theory section.

The need to inject seawater arose after the first stages described in the case study had passed. According to Kushida, government officials and political leaders began voicing the necessity of injecting the water at 6:00 p.m. on March 12, the day after the earthquake. According to these officials it would have one very positive outcome, namely the cooling of the reactors and the fuel pool. However, the use of seawater might have negative consequences too: it would ruin the reactors because of the salt in the seawater, and it would produce vast amounts of contaminated water that would be hard to contain (Kushida, 2016). TEPCO experienced many difficulties with cooling the reactors, as described in the case study, because of the lack of electricity. However, they were averse to injecting seawater into the reactors since this would ruin them. Still, after the first hydrogen explosion occurred in reactor 1, TEPCO plant workers started the injection of seawater into this specific reactor (Holt et al., 2012). A day later, on March 13, seawater injection started in reactor 3. On the 14th of March, seawater injection started in reactor 2 (Holt et al., 2012).

When looking at the decisions made by the government or by TEPCO plant workers, it is crucial to consider the chain of decision making within TEPCO leadership too. TEPCO leadership was initially not very positive towards injecting seawater because of the disadvantages mentioned earlier: the plant would become unusable in the future and vast amounts of contaminated water would be created. Therefore, the government had to issue an order to TEPCO to start injecting seawater, which it did at 8:00 p.m. on March 12. However, Yoshida, the Fukushima Daiichi plant manager, had already started injecting seawater at 7:00 p.m. (Kushida, 2016).

As one can already see, different interests were at play, and the outcome of the eventual decision can well be regarded as a political resultant. Therefore, it is crucial to examine the chain of decisions through the GPM paradigm. The first factor of this paradigm concerns decisions as a result of bargaining, and this can clearly be seen in the decision to inject seawater: TEPCO leadership initially was not a proponent of this method, but after government officials ordered it to execute the injection it had no choice. Second, according to the theory, it is important to identify the players of the ‘game’ and their goals. In this instance the divisions are easily identifiable; three different players can be pointed out: the government, TEPCO leadership and Yoshida, the plant manager. The government’s goal was to keep its citizens safe during the crisis, TEPCO wanted to preserve the reactors for as long as possible, whereas Yoshida wanted to contain the crisis. This shows that the players’ goals conflicted.

To further apply the GPM to the decision to inject seawater, one can review the comprehensive ‘general propositions’. In this part miscommunication is a very relevant factor, and it was certainly a big issue in the decision to inject seawater. As said before, Yoshida had already started injecting seawater before he received approval from his superiors. One might even wonder whether TEPCO leadership misunderstood the crisis, given that it hesitated to inject the seawater necessary to cool the reactors. It can be argued that this hesitation reflects a great deal of misunderstanding of the crisis, since there was no plant left to be saved by the time the decision was made.

The fifth and sixth aspects of the GPM paradigm are less relevant to the decisions made. The ‘specific propositions’ refer to the use of force, which was not an option in dealing with the Fukushima crisis; the Japanese Self-Defence Forces were dispatched to the plant, but only to provide electricity (Kushida, 2016). Furthermore, the sixth aspect, evidence, is less of a concern in this case: many scholars, researchers and investigators have written extensively about what happened during the Fukushima crisis, so more than sufficient information is available.

The political and bargaining game in the decision to inject seawater into the reactors is clearly visible. The different actors in the game had different goals; eventually the government won this game and the decision to inject seawater was made. Even before that, the plant manager had already begun to inject seawater because the situation was too dire.

Conclusion

This essay reviewed decision making during the Fukushima Daiichi Nuclear Power Plant disaster of 11 March 2011. More specifically, the decision to inject seawater into the reactors to cool them was scrutinized using the Governmental Politics Model. The decision to inject seawater was the result of a bargaining game in which different actors with different objectives played the decision-making ‘game’.

2019-3-18-1552918037

Tackling misinformation on social media

As the world of social media expands, the volume of misinformation rises as more organisations hop on the bandwagon of utilising the digital realm to their advantage. Twitter, Facebook, Instagram, online forums and other websites have become the pinnacle of news gathering for many individuals. Information is easily accessible to people from all walks of life, meaning that people are becoming more engaged with real-life issues. Consumers absorb information as easily as ever before, which proves to be equally advantageous and disadvantageous. But there is an evident boundary between misleading and truthful information that is hard to cross without research on the topic. The accuracy of public information is highly questionable, which can easily lead to problems. Despite the ongoing debate about source credibility on any platform, there are ways to tackle the issue through “expertise/competence (i. e., the degree to which a perceiver believes a sender to know the truth), trustworthiness (i. e., the degree to which a perceiver believes a sender will tell the truth as he or she knows it), and goodwill” (Cronkhite & Liska, 1976). This is why it has become critical for information to be accurate, ethical and reliable for consumers. Verifying information is important regardless of the type of social media outlet. This essay will highlight why information needs to fit these criteria.

Putting out credible information prevents and reduces misconceptions, convoluted meanings and inconsistent facts, which reduces the likelihood of issues surfacing. This in turn saves time for the consumer and the producer. The presence of risk raises the issue of how much of this information should be consumed by the public. The perception of source credibility becomes an important concept to analyse within social media, especially in times of crisis, when rationality diminishes and the public often takes the first thing that is seen. With the increasing amount of information available through newer channels, the responsibility for assessing information devolves away from professional producers and onto consumers (Haas & Wearden, 2003). Much of the public is unaware that this information is prone to bias and selective sharing, which can communicate the actual facts very differently. One such example is the incident at Tokyo Electric Power Co.’s Fukushima No. 1 nuclear power plant in 2011, where the plant experienced triple meltdowns. There is a misconception that food exported from Fukushima is too contaminated with radioactive substances to be healthy or fit to eat. In truth, strict screening reveals that the contamination is below the government standard needed to pose a threat. (arkansa.gov.au) Since then, products shipped from Fukushima have dropped considerably in price and have not recovered since 2011, forcing retailers into bankruptcy. (japantimes.co.jp) But thanks to the use of social media and organisations releasing information to the public, Fukushima was able to raise funds and receive help from other countries, for example the U.S. sending $100,000 and China sending emergency supplies as assistance. (theguardian.com) This would have been impossible to achieve without the sharing of credible, reliable and ethical information about the country and social media support spotlighting the incident.

Accurate, ethical and reliable information opens a pathway for producers to secure a relationship with consumers, which can be used to strengthen their businesses and expand their industries further while gaining support from the public. The idea is to have a healthy relationship, without an air of uneasiness, in which monetary gains and social earnings increase, with social media playing a pivotal role in deciding which route the relationship takes. But when done incorrectly, organisations can become unsuccessful if they know little to nothing about the changed dynamics of consumers and behaviour in the digital landscape. Consumer informedness means that consumers are well informed about available products or services, which influences their willingness to make decisions. This increase in consumer informedness can instigate change in consumer behaviour. (uni-osnabrueck.de) In the absence of accurate, ethical and reliable information, people and organisations will make terrible decisions without hesitation, which leads to losses and steps backwards. As Saul Eslake (Saul-Eslake.com) says, “they will be unable to help or persuade others to make better decisions; and no-one will be able to ascertain whether the decisions made by particular individuals or organisations were the best ones that could have been made at the time”. Recently, a YouTuber named Shane Dawson made a video that sparked controversy for the company ‘Chuck E. Cheese’ over pizza slices that do not look as if they belong to the same pizza. He created a theory that parts of the pizzas may have been reheated or recycled from other tables. In response, Chuck E. Cheese replied in multiple media outlets to debunk the theory: “These claims are unequivocally false. We prep the dough daily for our made to order pizzas, which means they’re not always perfectly round, but they are still great tasting.” (https://twitter.com/chuckecheeses) It is worth noting that no information other than pictures backs up the claim that the pizza was reused. The food company has also gone as far as creating a video showing its pizza preparation, and ex-employees spoke up and shared their own side of the story to debunk the theory further. It is these quick responses that prevented what could have been a small downfall in sales for the Chuck E. Cheese company. (washintonpost.com) This event highlights how the release of information can fall in favour of whoever utilises it correctly, and how effective credible information can be. Credible information cuts both ways, especially when it has the support of others, whether online or in real life. The assumption or guess made when there is no information available to draw on is called a ‘heuristic value’, which is associated with information that has no credibility.

Mass media have long been a dominant source of information (Murch, 1971). They are generally assumed to provide credible, valuable, and ethical information open to the public (Heath, Liao, & Douglas, 1995). However, along with traditional forms of media, newer media are increasingly available for information seeking and reporting. According to PNAS (www.pnas.org), “The emergence of social media as a key source of news content has created a new ecosystem for the spreading of misinformation. This is illustrated by the recent rise of an old form of misinformation: blatantly false news stories that are presented as if they are legitimate. So-called “fake news” rose to prominence as a major issue during the 2016 US presidential election and continues to draw significant attention.” This affects how we as social beings perceive and analyse information we see online compared with real life. Beyond reducing the effectiveness of any intervention, failing to distinguish real stories from false ones increases belief in false content, leading to biased and misleading content that fools the audience. One such incident is Michael Jackson’s death in June 2009, when he died from acute propofol and benzodiazepine intoxication administered by his doctor, Dr. Murray. (nytimes.com) The public deduced that Michael Jackson had been murdered on purpose, but the court convicted Dr. Murray of involuntary manslaughter, as the doctor maintained that Jackson had begged him to give more. This fact was overlooked by the general public due to bias, underlining how information is selectively picked up and how not all information is revealed, swaying the audience. A study conducted online by Jason and his team (JCMC [CQU]) revealed that Facebook users tended to believe their friends almost instantly, even without a link or proper citation to a website to back up the claim. “Using a person who has frequent social media interactions with the participant was intended to increase the external validity of the manipulation.” This means that whether online information is taken as truth is left to the perception of the viewer, linking to the idea that online information is not fully credible unless it comes straight from the source, and underlining the importance of releasing credible information.

Information has the power to inform, explain and expand on topics and concepts. But it also has the power to create inaccuracies and confusion, which hurts the public and damages the reputation of companies. The goal is to move forward, not backwards. Many companies have gotten themselves into disputes because of incorrect information, which could easily have been avoided by releasing accurate, ethical and reliable information from the beginning. False information can start disputes, and true information can provide resolution. The public has become less attentive to mainstream news altogether, which raises the problem of what can be trusted. Companies and organisations need their information to be as accurate and reliable as possible to reduce this issue. Increased negativity and incivility exacerbate the media’s credibility problem. “People of all political persuasions are growing more dissatisfied with the news, as levels of media trust decline.” (JCMC [CQU]) In 2010, Dannon’s ‘Activia Yogurt’ released an online statement and false advertisement claiming that the yogurt had “special bacterial ingredients.” A consumer named Trish Wiener lodged a complaint against Dannon. The yogurts were being marketed as “clinically” and “scientifically” proven to boost the immune system and help regulate digestion. However, the judge saw this statement as unproven, as with many other products in their line that carried the same claim. “This landed the company a $45 million class action settlement.” (businessinsider.com) It did not help that Dannon’s prices for its yogurt were inflated compared to other yogurts on the market. “The lawsuit claims Dannon has spent “far more than $100 million” to convey deceptive messages to U.S. consumers while charging 30 percent more than other yogurt products.” (reuters.com) This highlights how inaccurate information can cost millions of dollars to settle and resolve. However, it also shows how the public can readily hold irresponsible producers to account for their actions and give leeway to justice.

2019-5-2-1556794982

Socio-political significance of Turkey’s emergent neo-Ottoman cultural phenomenon

Over the last decade, Turkey’s cultural sphere has witnessed a wave of Ottomania—a term describing the recent cultural fervor for everything Ottoman. Although this neo-Ottoman cultural phenomenon is not entirely new, since it had a previous cycle back in the 1980s and 1990s during the heyday of Turkey’s political Islam, it now has a rather novel characteristic and a distinct pattern of operation. This revived Ottoman craze is discernible in what I call the neo-Ottoman cultural ensemble—referring to a growing array of Ottoman-themed cultural productions and sites that evoke Turkey’s Ottoman-Islamic cultural heritage. For example, the celebration of the 1453 Istanbul conquest no longer merely takes place as an annual public commemoration by the Islamists,[1] but has been widely promulgated, reproduced, and consumed in various forms of popular culture, such as the Panorama 1453 History Museum; a fun ride called the Conqueror’s Dream (Fatih’in Rüyası) at the Vialand theme park; the highly publicized, high-grossing blockbuster The Conquest 1453 (Fetih 1453); and the primetime television costume drama The Conqueror (Fatih). It is the “banal,” or “mundane,” ways of everyday practice of society itself, rather than the government or state institutions, that distinguish this emergent form of neo-Ottomanism from its earlier phases.[2]

This is the context in which the concept of neo-Ottomanism has acquired its cultural dimension and analytical currency for comprehending the proliferating neo-Ottoman cultural phenomenon. However, when the concept is employed in contemporary cultural debates, it generally follows two trajectories that are common in the literature of Turkish domestic and foreign politics. These trajectories conceptualize neo-Ottomanism as an Islamist political ideology and/or a doctrine of Turkey’s foreign policy in the post-Cold War era. This essay argues that these two conventional conceptions tend to overlook the complexity and hybridity of Turkey’s latest phase of neo-Ottomanism. As a result, they tend to understand the emergent neo-Ottoman cultural ensemble as merely a representational apparatus of the neoconservative Justice and Development Party’s (AKP; Adalet ve Kalkınma Partisi) ideology and diplomatic strategy.

This essay hence aims to reassess the analytical concept of neo-Ottomanism and the emergent neo-Ottoman cultural ensemble by undertaking three tasks. First, through a brief critique of the concept of neo-Ottomanism, I will discuss its common trajectories and limitations for comprehending the latest phase of neo-Ottoman cultural phenomenon. My second task is to propose a conceptual move from neo-Ottomanism to Ottomentality by incorporating the Foucauldian perspective of governmentality. Ottomentality is an alternative concept that I deployed here to underscore the overlapping relationship between neoliberal and neo-Ottoman rationalities in the AKP’s government of culture and diversity. I contend that neoliberalism and neo-Ottomanism are inseparable governing rationalities of the AKP and their convergence has engendered new modes of governing the cultural field as well as regulating inter-ethnic and inter-religious relations in Turkey. And finally, I will reassess the neo-Ottoman cultural ensemble through the analytical lens of Ottomentality. I contend that the convergence of neoliberal and neo-Ottoman rationalities has significantly transformed the relationships of state, culture, and the social. As the cases of the television historical drama Magnificent Century (Muhteşem Yüzyıl) and the film The Conquest 1453 (Fetih 1453) shall illustrate, the neo-Ottoman cultural ensemble plays a significant role as a governing technique that constitutes a new regime of truth based on market mentality and religious truth. It also produces a new subject of citizenry, who is responsible for enacting its right to freedom through participation in the culture market, complying with religious norms and traditional values, and maintaining a difference-blind and discriminatory model of multiculturalism.

A critique of neo-Ottomanism as an analytical concept

Although the concept of neo-Ottomanism has been commonly used in Turkish Studies, it has become a loose term referring to anything associated with the Islamist political ideology, nostalgia for the Ottoman past, and imperialist ambition of reasserting Turkey’s economic and political influence within the region and beyond. Some scholars have recently indicated that the concept of neo-Ottomanism is running out of steam as it lacks meaningful definition and explanatory power in studies of Turkish politics and foreign policy.[3] The concept’s ambiguity and weak analytical and explanatory value are mainly due to divergent, competing interpretations and a lack of critical evaluation within the literature.[4] Nonetheless, despite being equivocally defined, the concept is most commonly understood along two identifiable trajectories. First, it is conceptualized as an Islamist ideology, responding to the secularist notions of modernity and nationhood and aiming to reconstruct Turkish identity by evoking Ottoman-Islamic heritage as an essential component of Turkish culture. Although neo-Ottomanism was initially formulated by a collaborative group of secular, liberal, and conservative intellectuals and political actors in the 1980s, it is closely linked to the consolidated socio-economic and political power of the conservative middle class. This trajectory considers neo-Ottomanism primarily a form of identity politics and a result of political struggle in opposition to the republic’s founding ideology of Kemalism. Second, it is understood as an established foreign policy framework reflecting the AKP government’s renewed diplomatic strategy in the Balkans, Central Asia, and the Middle East, wherein Turkey plays an active role. This trajectory regards neo-Ottomanism as a political doctrine (often referring to Ahmet Davutoglu’s Strategic Depth, which serves as the guidebook for Turkey’s diplomatic strategy in the 21st century), which sees Turkey as a “legitimate heir of the Ottoman Empire”[5] and seeks to reaffirm Turkey’s position in the changing world order of the post-Cold War era.[6]

As a result of a lack of critical evaluation of the conventional conceptions of neo-Ottomanism, contemporary cultural analyses have largely followed the “ideology” and “foreign policy” trajectories as explanatory guidance when assessing the emergent neo-Ottoman cultural phenomenon. I contend that the neo-Ottoman cultural phenomenon is more complex than what these two trajectories offer to explain. Analyses that adopt these two approaches tend to run a few risks. First, they tend to perceive neo-Ottomanism as a monolithic imposition upon society. They presume that this ideology, when inscribed onto domestic and foreign policies, somehow has a direct impact on how society renews its national interest and identity.[7] And they tend to understand the neo-Ottoman cultural ensemble as merely a representational device of the neo-Ottomanist ideology. For instance, Şeyda Barlas Bozkuş, in her analyses of the Miniatürk theme park and the 1453 Panorama History Museum, argues that these two sites represent the AKP’s “ideological emphasis on neo-Ottomanism” and “[create] a new class of citizens with a new relationship to Turkish-Ottoman national identity.”[8] Second, contemporary cultural debates tend to overlook the complex and hybrid nature of the latest phase of neo-Ottomanism, which rarely operates on its own, but more often relies on and converges with other political rationalities, projects, and programs. As this essay shall illustrate, when closely examined, the current configuration of neo-Ottomanism is more likely to reveal internal inconsistencies as well as a combination of multiple and intersecting political forces.

Moreover, as a consequence of the two risks mentioned above, contemporary cultural debates may have overlooked some of the symptomatic clues and, hence, underestimated the socio-political significance of the latest phase of neo-Ottomanism. A major symptomatic clue that is often missed in cultural debates on the subject is culture itself. Insufficient attention has been paid to the AKP’s rationale of reconceptualizing culture as an administrative matter—a matter that concerns how culture is to be perceived and managed, by what culture the social should be governed, and how individuals might govern themselves with culture. At the core of the AKP government’s politics of culture and neoliberal reform of the cultural field is the question of the social.[9] Its reform policies, projects, and programs are a means of constituting a social reality and directing social actions. When culture is aligned with neoliberal governing rationality, it redefines a new administrative culture and new rules and responsibilities of citizens in cultural practices. Culture has become not only a means to advance Turkey in global competition,[10] but also a technology of managing the diversifying culture that has resulted from the process of globalization. As Brian Silverstein notes, “[culture] is among other things and increasingly to be seen as a major target of administration and government in a liberalizing polity, and less a phenomenon in its own right.”[11] While many studies acknowledge the AKP government’s neoliberal reform of the cultural field, they tend to regard neo-Ottomanism as primarily an Islamist political agenda operating outside of the neoliberal reform. It is my conviction that neoliberalism and neo-Ottomanism are inseparable political processes and rationalities, which have merged and engendered new modalities of governing every aspect of cultural life in society, including minority cultural rights, freedom of expression, individuals’ lifestyle, and so on. Hence, by overlooking the “centrality of culture”[12] in relation to the question of the social, contemporary cultural debates tend to oversimplify the emergent neo-Ottoman cultural ensemble as nothing more than an ideological machinery of the neoconservative elite.

From neo-Ottomanism to Ottomentality

In order to more adequately assess the socio-political significance of Turkey’s emergent neo-Ottoman cultural phenomenon, I propose a conceptual shift from neo-Ottomanism to Ottomentality. This shift involves not only rethinking neo-Ottomanism as a form of governmentality, but also thinking of neoliberal and neo-Ottoman rationalities in collaborative terms. Neo-Ottomanism is understood here as Turkey’s current form of neoconservatism, a prevalent political rationality whose governmental practices are not solely based on Islamic values, but also draw from and produce a new political culture that considers Ottoman-Islamic toleration and pluralism the foundation of modern liberal multiculturalism in Turkey. Neoliberalism, in the same vein, far from being a totalizing concept describing an established set of political ideology or economic policy, is conceived here as a historically and locally specific form of governmentality that must be analyzed by taking into account the multiple political forces which gave it its unique shape in Turkey.[13] My claim is that when these two rationalities merge in the cultural domain, they engender a new art of government, which I call the government of culture and diversity.

This approach is therefore less concerned with a particular political ideology or the question of “how to govern” than with the “different styles of thought, their conditions of formation, the principles and knowledges that they borrow from and generate, the practices they consist of, how they are carried out, their contestations and alliances with other arts of governing.”[14] In light of this view, and for a practical purpose, Ottomentality is an alternative concept that I attempt to develop here to avoid the ambiguous meanings and analytical limitations of neo-Ottomanism. This concept underscores the convergence of neoliberal and neo-Ottoman rationalities as well as the interrelated discourses, projects, policies, and strategies that are developed around them for regulating cultural activities and directing inter-ethnic and inter-religious relations in Turkey. It pays attention to the techniques and practices that have significant effects on the relationships of state, culture, and the social. It is concerned with the production of knowledge, or truth, on the basis of which a new social reality of ‘freedom,’ ‘tolerance,’ and ‘multiculturalism’ in Turkey is constituted. Furthermore, it helps to identify the type of political subject whose demand for cultural rights and participatory democracy is reduced to market terms and a narrow understanding of multiculturalism, and whose criticism of this new social reality is increasingly subjected to judicial exclusion and discipline.

I shall note that Ottomentality is an authoritarian type of governmentality—a specific type of illiberal rule operated within the structure of modern liberal democracy. As Mitchell Dean notes, although the literature on governmentality has focused mainly on liberal democratic rule practiced through individual subjects’ active role (as citizens) and exercise of freedom, there are also “non-liberal and explicitly authoritarian types of rule that seek to operate through obedient rather than free subjects, or, at a minimum, endeavor to neutralize any opposition to authority.”[15] He suggests that a useful way to approach this type of governmentality is to identify the practices and rationalities which “divide” or “exclude” those who are subjected to be governed.[16] According to Foucault’s notion of “dividing practices,” “[t]he subject is either divided inside himself or divided from others. This process objectivizes him. Examples are the mad and the sane, the sick and the healthy, the criminals and the ‘good boys’.”[17] Turkey’s growing neo-Ottoman cultural ensemble can be considered such exclusionary practices, which seek to regulate the diversifying culture by dividing subjects into categorical, if not polarized, segments based on their cultural differences. For instance, mundane practices such as going to museums and watching television shows may produce subject positions which divide subjects into such categories as the pious and the secular, the moral and the degenerate, and the Sunni-Muslim-Turk and the ethno-religious minorities.

Reassessing the neo-Ottoman cultural ensemble through the lens of Ottomentality

In this final section, I propose a reassessment of the emergent neo-Ottoman cultural ensemble by looking beyond the conventional conceptions of neo-Ottomanism as “ideology” and “foreign policy.” Using the analytical concept of Ottomentality, I aim to examine the state’s changing role and governing rationality in culture, the discursive processes of knowledge production for rationalizing certain practices of government, and the techniques of constituting a particular type of citizenry who acts upon themselves in accordance with the established knowledge/truth. Nonetheless, before proceeding to an analysis of the government of culture and diversity, a brief overview of the larger context in which the AKP’s Ottomentality took shape would be helpful.

Context

Since the establishment of the Turkish republic, the state has played a major role in maintaining a homogeneous national identity by suppressing public claims of ethnic and religious difference through militaristic intervention. The state’s strict control of cultural life in society, in particular its assertive secularist approach to religion and ethnic conception of Turkish citizenship, resulted in unsettling tensions between ethno-religious groups in the 1980s and 1990s, i.e. the Kurdish question and the 1997 “soft coup.” These social tensions indicated the limits of state-led modernization and secularization projects in accommodating ethnic and pious segments of society.[18] This was also a time when Turkey began to witness the declining authority of the founding ideology of Kemalism as an effect of economic and political liberalization. When the AKP came to power in 2002, one of the most urgent political questions was thus “the limits of what the state can—or ought for its own good—reasonably demand of citizens […] to continue to make everyone internalize an ethnic conception of Turkishness.”[19] At this political juncture, it was clear that a more inclusive socio-political framework was necessary in order to mitigate the growing tension resulting from identity claims.

Apart from domestic affairs, a few vital transnational initiatives also played a part in the AKP’s formulation of neoliberal and neo-Ottoman rationalities. First, in the aftermath of the attacks in New York on September 11 (9/11) in 2001, the Middle East and Muslim communities around the world became the target of intensified political debates. In the midst of anti-Muslim and anti-terror propaganda, Turkey felt a need to rebuild its image by aligning with the United Nations’ (UN) resolution of “The Alliance of Civilizations,” which called for cross-cultural dialogue between countries through cultural exchange programs and transnational business partnership.[20] Turkey took on the leading role in this resolution and launched extensive developmental plans designated to rebuild Turkey’s image as a civilization of tolerance and peaceful co-existence.[21] The Ottoman-Islamic civilization, known for its legacy of cosmopolitanism and ethno-religious toleration, hence became an ideal trademark of Turkey for the project of the “alliance of civilizations.”[22]

Second, Turkey’s accelerated EU negotiations between the late 1990s and mid-2000s provided a timely opportunity for the newly elected AKP government to launch “liberal-democratic reform,”[23] which would significantly transform the way culture was to be administered. Culture, among the prioritized areas of administrative reform, was now reorganized to comply with the EU integration plan. By incorporating the EU’s conception of culture as a way of enhancing “freedom, democracy, solidarity and respect for diversity,”[24] the AKP-led national cultural policy would shift away from the state-centered, protectionist model of the Kemalist establishment towards one that highlights “principles of mutual tolerance, cultural variety, equality and opposition to discrimination.”[25]

Finally, the selection of Istanbul as the 2010 European Capital of Culture (ECoC) is particularly worth noting, as this event enabled local authorities to put into practice the neoliberal and neo-Ottoman governing rationalities through extensive urban projects and branding techniques. By sponsoring and showcasing different European cities each year, the ECoC program aims at promoting a multicultural European identity beyond national borders.[26] The 2010 Istanbul ECoC was an important opportunity for Turkey not only to promote its EU candidacy, but also for the local governments to pursue urban developmental projects.[27] Some of the newly formed Ottoman-themed cultural sites and productions were part of the ECoC projects for branding Istanbul as a cultural hub where East and West meet. It is in this context that the interplay between the neoliberal and neo-Ottoman rationalities can be vividly observed in the form of the neo-Ottoman cultural ensemble.

Strong state, culture, and the social

Given the contextual background mentioned above, one could argue that the AKP’s neoliberal and neo-Ottoman rationalities arose as critiques of the republican state’s excessive intervention in society’s cultural life. The transnational initiatives that required Turkey to adopt a liberal democratic paradigm have therefore given way to the formulation and convergence of these two forms of governmentality, which would significantly challenge the state-centered approach to culture as a means of governing the social. However, it would be inaccurate to claim that the AKP’s prioritization of private initiatives in cultural governance has effectively decentralized or democratized the cultural domain from the state’s authoritarian intervention and narrow definition of Turkish culture. Deregulation of culture entails sophisticated legislation concerning the roles of the state and civil society in cultural governance. Hence, for instance, the law on the promotion of culture, the law on media censorship, and the new national cultural policy prepared by the Ministry of Culture and Tourism explicitly indicate not only a new vision of national culture, but also the roles of the state and civil society in promoting and preserving national culture. It shall be noted that culture as a governing technology is not an invention of the AKP government. Culture has always been a major area of administrative concern throughout the history of the Turkish republic. As Murat Katoğlu illustrates, during the early republic, culture was conceptualized as part of a state-led “public service” aimed at informing and educating the citizens.[28] Arts and culture were essential means for modernizing the nation; for instance, the state-run cultural institutions, i.e. state ballet, theater, museum, radio and television, “[indicate] the type of modern life style that the government was trying to advocate.”[29] Nonetheless, the role of the state, the status of culture, and the techniques of managing it have been transformed as Turkey undergoes neoliberal reform. In addition, Aksoy suggests that what distinguishes the AKP’s neoliberal mode of cultural governance from the early republic’s modernization project is that market mentality has become the administrative norm.[30] Culture is now reconceptualized as an asset for advancing Turkey in global competition and a site for exercising individual freedom, rather than a mechanism of social engineering. And Turkey’s heritage of Ottoman-Islamic civilization in particular is utilized as a nation-branding technique to enhance Turkey’s economy, rather than a corrupt past to be forgotten. To achieve the aim of efficient, hence good, governance, the AKP’s cultural governance has relied heavily on privatization as a means to limit state intervention. Thus, privatization has not only transformed culture into an integral part of the free market, but also redefined the state’s role as a facilitator of the culture market, rather than the main provider of cultural services to the public.

The state’s withdrawal from cultural service and its prioritization of civil society initiatives for preserving and promoting Turkish “cultural values and traditional arts”[31] have the immediate effect of diminishing the authority of the Kemalist cultural establishment. Since many of the previously state-run cultural institutions are now managed with a corporate mentality, they begin to lose the state-centered status, and the significance in defining and maintaining a homogeneous Turkish culture, that they once had. Instead, these institutions, together with other newly formed cultural sites and productions by private initiatives, are converted into a marketplace or cultural commodities in competition with each other. Hence, privatization of culture leads to the following consequences: First, it weakens and hollows out the 20th-century notion of the modern secular nation state, which sets a clear boundary confining religion within the private sphere. Second, it gives way to the neoconservative force, which “models state authority on [religious] authority, a pastoral relation of the state to its flock, and a concern with unified rather than balanced or checked state power.”[32] Finally, it converts social issues that result from political actions into market terms and a sheer matter of culture, which is now left to personal choice.[33] As a result, far from producing a declining state, Ottomentality has constituted a strong state. In particular, neoliberal governance of the cultural field has enabled the ruling neoconservative government to mobilize a new set of political truths and norms for directing inter-ethnic and inter-religious relations in society.

New regime of truth

Central to Foucault’s notion of governmentality is “truth games”[34]—referring to the activities of knowledge production through which particular thoughts are rendered truthful and practices of government are made reasonable.[35] What Foucault calls the “regime of truth” is concerned not with facticity, but with a coherent set of practices that connect different discourses and make sense of the political rationalities marking the “division between true and false.”[36] The neo-Ottoman cultural ensemble is a compelling case through which the AKP’s investment of thought, knowledge production, and truth telling can be observed. Two cases are particularly worth mentioning here as I work through the politics of truth in the AKP’s neoliberal governance of culture and neo-Ottoman management of diversity.

Between 2011 and 2014, the Turkish television historical drama Magnificent Century (Muhteşem Yüzyıl, Muhteşem hereafter), featuring the life of the Ottoman Sultan Süleyman, who is known for his legislative establishment in the 16th-century Ottoman Empire, attracted wide viewership in Turkey and abroad, especially in the Balkans and the Middle East. Although the show played a significant role in generating international interest in Turkey’s tourism, culinary culture, Ottoman-Islamic arts and history, etc. (which are fundamental aims of the AKP-led national cultural policy to promote Turkey through arts and culture, including media export),[37] it received harsh criticism from some Ottoman(ist) historians and a warning from the RTUK (Radio and Television Supreme Council, a key institution of media censorship and regulation in Turkey). The criticism included the show’s misrepresentation of the Sultan as a hedonist and its harm to the moral and traditional values of society. Oktay Saral, an AKP deputy of Istanbul at the time, petitioned the parliament for a law to ban the show. He said, “[The] law would […] show filmmakers [media practitioners] how to conduct their work in compliance with Turkish family structure and moral values without humiliating Turkish youth and children.”[38] Recep Tayyip Erdoğan (then Prime Minister) also stated that “[those] who toy with these [traditional] values would be taught a lesson within the premises of law.”[39] After his statement, the show was removed from the in-flight channels of the national flag carrier Turkish Airlines.

Another popular media production, the 2012 blockbuster The Conquest 1453 (Fetih 1453, Fetih hereafter), which was acclaimed for its success at domestic and international box offices, also generated mixed receptions among Turkish and foreign audiences. Some critics in Turkey and some European Christians criticized the film for its selective interpretation of the Ottoman conquest of Constantinople and its offensive portrayal of the (Byzantine) Christians. The Greek weekly To Proto Thema denounced the film as “conquest propaganda by the Turks” that “[failed] to show the mass killings of Greeks and the plunder of the land by the Turks.”[40] A Turkish critic also commented that the film portrays the “extreme patriotism” in Turkey “without any hint of […] tolerance sprinkled throughout [the film].”[41] Furthermore, a German Christian association campaigned to boycott the film. Meanwhile, the AKP officials, on the contrary, praised the film for its genuine representation of the conquest. As Bülent Arınç (then Deputy Prime Minister) stated, “This is truly the best film ever made in the past years.”[42] He also responded to questions regarding the film’s historical accuracy: “This is a film, not a documentary. The film in general fairly represents all the events that occurred during the conquest as the way we know it.”[43]

When Muhteşem and Fetih are examined within the larger context in which the neo-Ottoman cultural ensemble is formed, the connections between particular types of knowledge and governmental practice become apparent. First, the cases of Muhteşem and Fetih reveal the saturation of market rationality as the basis for a new model of cultural governance. When culture is administered in market terms, it becomes a commodity for sale and promotion as well as an indicator for measuring the performance of cultural governance. When Turkey’s culture, in particular its Ottoman-Islamic cultural heritage, is converted into an asset and national brand to advance the country in global competition, the reputation and capital it generates become indicators of Turkey’s economic development and progress. The overt emphasis on economic growth, according to Irving Kristol, is one of the distinctive features that differentiate the neoconservatives from their conservative predecessors. He suggests that, for the neoconservatives, economic growth is what gives “modern democracies their legitimacy and durability.”[44] In the Turkish context, the rising neoconservative power, which consisted of a group of Islamists and secular, liberal intellectuals and entrepreneurs (at least in the early years of the AKP’s rule), consistently focused on boosting Turkey’s economy. For them, economic development seems to have become the appropriate way of making “conservative politics suitable to governing a modern democracy.”[45] Henceforth, such high-profile cultural productions as Muhteşem and Fetih are valuable assets that serve the primary aim of the AKP-led cultural policy, because they contribute to growth in the related areas of tourism and the culture industry by promoting Turkey at the international level. Based on market rationality, as long as culture can generate productivity and profit, the government is doing a splendid job of governance. In other words, when neoliberal and neoconservative forces converge in the cultural domain, both culture and good governance are reduced to and measured by economic growth, which has become a synonym for a democracy “equated with the existence of formal rights, especially private property rights; with the market; and with voting,” rather than political autonomy.[46]

Second, the AKP officials’ applause for Fetih on the one hand and criticism of Muhteşem on the other demonstrates their assertion of the moral-religious authority of the state. As the notion of nation state sovereignty has been weakened by the processes of economic liberalization and globalization, the boundary that separates religion and state has become blurred. As a result, religion becomes “de-privatized” and surges back into the public sphere.[47] This blurred boundary between religion and state has enabled the neoconservative AKP to establish links between religious authority and state authority as well as between religious truth and political truth.[48] These links are evident in the AKP officials’ various public statements declaring the government’s moral mission of sanitizing Turkish culture in accordance with Islamic and traditional values. For instance, as Erdoğan once reacted to a secular opponent’s comment about his interference in politics with religious views, “we [AKP] will raise a generation that is conservative and democratic and embraces the values and historical principles of its nation.”[49] In this view, despite Muhteşem’s contribution to growth in the culture and tourism industries, it was subjected to censorship and legal action because its content did not comply with the governing authority’s moral mission. The controversy over Muhteşem illustrates the rise of a religion-based political truth in Turkey, which sees Islam as the main reference for directing society’s moral conduct and individual lifestyle. Henceforth, by rewarding desirable actions (i.e. with sponsorship law and tax incentives)[50] and punishing undesirable ones (i.e. through censorship, media bans, and jail terms for media practitioners’ misconduct), the AKP-led reform of the cultural field constitutes a new type of political culture and truth—one that is based on moral-religious views rather than rational reasoning.

Moreover, the AKP officials’ support for Fetih reveals the party’s investment in a neo-Ottomanist knowledge, which regards the 1453 Ottoman conquest of Constantinople as the foundation of modern liberal multiculturalism in Turkey. This knowledge perceives Islam as the centripetal force for enhancing social cohesion by transcending differences between faith and ethnic groups. It rejects candid and critical interpretations of history and insists on a singular view of Ottoman-Islamic pluralism and a pragmatic understanding of the relationship between religion and state.[51] It does not require historical accuracy, since religious truth is cast as historical and political truth. For instance, a consistent, singular narrative of the conquest can be observed in such productions and sites as the Panorama 1453 History Museum, the television series Fatih, and the TRT children’s program Çınar. This narrative begins with Prophet Muhammad’s prophecy, which he received from the almighty Allah, that Constantinople would be conquered by a great Ottoman soldier. When history is narrated from a religious point of view, it becomes indisputable, as disputing it would imply a challenge to religious truth, hence to Allah’s will. Nonetheless, the neo-Ottomanist knowledge conceives the conquest not only as an Ottoman victory in the past, but as an incontestable living truth in Turkey’s present. As Nevzat Bayhan, former general manager of Culture Inc. in association with the Istanbul Metropolitan Municipality (İBB Kültür A.Ş.), stated at the opening ceremony of Istanbul’s Panorama 1453 History Museum,

The conquest [of Istanbul] is not about taking over the city… but to make the city livable… and its populace happy. Today, Istanbul continues to present to the world as a place where Armenians, Syriacs, Kurds… Muslims, Jews, and Christians peacefully live together.[52]

Bayhan’s statement illustrates the significance of the 1453 conquest in the neo-Ottomanist knowledge: it marks the foundation of a culture of tolerance, diversity, and peaceful coexistence in Turkey. While the neo-Ottomanist knowledge may conveniently serve branding purposes in the post-9/11 and ECoC contexts, I maintain that it more significantly rationalizes the governmental practices reshaping cultural conduct and multicultural relations in Turkey. The knowledge also produces a political norm of indifference—one that is reluctant to recognize ethno-religious differences among the populace, uncritical of the limits of Islam-based toleration and multiculturalism, and, more seriously, indifferent to state-sanctioned discrimination and violence against the ethno-religious minorities.

Ottomentality and its subject

The AKP’s practices of the government of culture and diversity constitute what Foucault calls the “technologies of the self—ways in which human beings come to understand and act upon themselves within certain regimes of authority and knowledge, and by means of certain techniques directed to self-improvement.”[53] The AKP’s neoliberal and neo-Ottoman rationalities share a similar aim, as they both seek to produce a new ethical code of social conduct and transform Turkish society into a particular kind, one that is economically liberal and culturally conservative. They deploy different means to direct the governed in certain ways so as to achieve the desired outcome. According to Foucault, the neoliberal style of government is based on the premise that “individuals should conduct their lives as an enterprise [and] should become entrepreneurs of themselves.”[54] Central to this style of government is the production of freedom—referring to the practices that are employed to produce the necessary condition for individuals to be free and take on the responsibility of caring for themselves. For instance, Nikolas Rose suggests that consumption, a form of governing technology, is often deployed to provide individuals with a variety of choices for exercising freedom and self-improvement. As such, the subject citizens are now “active,” or “consumer,” citizens, who understand their relationship with others and conduct their life based on market mentality.[55] Unlike republican citizens, whose rights, duties, and obligations are primarily bound to the state, citizens as consumers “[are] to enact [their] democratic obligations as a form of consumption”[56] in the private sphere of the market.

The AKP’s neoliberal governance of culture has hence invested in liberalizing the cultural field by transforming it into a marketplace in order to create a condition wherein citizens can enact their right to freedom and act upon themselves as a form of investment. The proliferation of the neo-Ottoman cultural ensemble in this regard can be understood as a new technology of the self, as it creates a whole new field for consumer citizens to exercise their freedom of choice (of identity, taste, and lifestyle) by providing them with a variety of trendy Ottoman-themed cultural products, ranging from fashion to entertainment. This ensemble also constitutes a whole new imagery of the Ottoman legacy with which the consumer citizens may identify. Therefore, through participation within the cultural field, as artists, media practitioners, intellectuals, sponsors, or consumers, citizens are encouraged to think of themselves as free agents whose actions are a means for acquiring the necessary cultural capital to become cultivated and competent actors in the competitive market. This new technology of the self has also transformed the republican notion of Turkish citizenship into one that is activated through individuals’ freedom of choice in cultural consumption at the marketplace.

Furthermore, as market mechanisms enhance the promulgation of moral-religious values, consumer citizens are also offered the identity of virtuous citizens, who should conduct their life and their relationship with others based on Islamic traditions and values. Again, the public debate over the portrayal of the revered Sultan Süleyman as a hedonist in Muhteşem and the legal actions against the television producer are exemplary of the disciplinary techniques for shaping individuals’ behaviors in line with conservative values. While consumer citizens exercise their freedom through cultural consumption, they are also reminded of their responsibility to preserve traditional moral values, family structure, and gender relations. Those who deviate from the norm are subjected to public condemnation and punishment.

Finally, as the neo-Ottomanist cultural ensemble reproduces and mediates a neo-Ottomanist knowledge in such commodities as the film Fetih and the Panorama 1453 History Museum, consumer citizens are exposed to a new set of symbolic meanings of Ottoman-Islamic toleration, pluralism, and peaceful coexistence, albeit through a view of the Ottoman past fixated on its magnificence rather than its monstrosity.[57] This knowledge sets the ethical code for private citizens to think of themselves in relation to other ethno-religious groups based on a hierarchical social order, which subordinates minorities to the rule of Sunni Islamic government. When this imagery of magnificence serves as the central component in nation branding, such as aligning Turkey with the civilization of peace and co-existence in the post-9/11 and ECoC contexts, it encourages citizens to take pride in and identify with their Ottoman-Islamic heritage. As such, Turkey’s nation branding can perhaps also be considered a novel technology of the self, as it requires citizens, be they business sectors, historians, or filmmakers, to take an active role in building an image of a tolerant and multicultural Turkey through arts and culture. It is in this regard that I consider the neo-Ottoman rationality a form of “indirect rule of diversity”[58] as it produces a citizenry who actively participates in the reproduction of neo-Ottomanist historiography and remains uncritical of the “dark legacy of the Ottoman past.”[59] Consequently, Ottomentality has produced a type of subject that is constantly subjected to dividing techniques “that will divide populations and exclude certain categories from the status of the autonomous and rational person.”[60]

2016-10-5-1475705338