Is The First Amendment Clear?

The First Amendment states, “Congress shall make no law respecting an establishment of religion.” Supreme Court Justice Hugo Black believed that the First Amendment requires the state to stay neutral in its relationship with religion. I disagree with Justice Black’s interpretation. The First Amendment says that Congress can make no law establishing a religion. It does not say that the state may not favor a religion or its general philosophy; after all, the influence of Christianity is still visible in many of today’s beliefs and practices. Nor does it say that the government cannot take part in the celebration of holidays or prayers, which is how our government currently interprets it. It would still be the state’s job to make sure that all citizens, including citizens of all religions, are treated equally. When government officials express themselves with religious language or symbols, this is sometimes seen as an alliance between church and state. In actuality, it is freedom of expression. This kind of favoring of religion, if you consider it such, should be allowed.

At first look, the First Amendment appears to be written in clear terms, saying that Congress shall make no law in violation of certain religious and political principles. After a closer reading, and upon more reflection, the amendment’s underlying issues rise to the surface in the form of the many arguments between political parties over the centuries. What kind of law respects the establishment of religion? Does the First Amendment cover only laws that would establish an official national religion, as the Anglican Church was established in England? Does it also cover laws that recognize or endorse religious activities such as the celebration of Christmas? Can people even agree on what is meant by religion, so that judges may know when religion is being established or when the right to its free exercise has been infringed? These questions have been a subject of enormous controversy in the United States, more so as time goes on.

Hugo Black served on the U.S. Supreme Court for 34 years and is considered one of the most influential justices of all time, even though his background and path to the Court might suggest otherwise. Roosevelt eventually nominated Black to be a Supreme Court justice, and when the nomination reached the Senate, Black was confirmed by a vote of 63 to 16. Shortly after, however, the public learned about Black’s past as a member of the Ku Klux Klan, a violent racist organization. On the eve of taking his seat on the Supreme Court, Black went public, saying that membership in the KKK had been a necessity for entering politics in the South and that the Klan had assisted him in his political campaigns. Black supported a strict separation of religion and state and wrote some of the most influential decisions in the area of the establishment clause, for example Everson v. Board of Education, which incorporated the establishment clause to the states, and Engel v. Vitale, which struck down teacher-led prayer in the public school classroom.

In the past, the state has used its power to impose religious requirements. Torcaso v. Watkins (1961) was a famous Supreme Court case in which the Constitution of Maryland required a declaration of belief in the existence of God in order for a person to hold “any office of profit or trust in this State.” Torcaso, an atheist, refused, and his appointment was consequently revoked. Believing his constitutional right to freedom of religious expression had been infringed, Torcaso filed a lawsuit against Maryland. The Court ruled that religious tests for public office candidates are unconstitutional. This ruling was justifiable: a public office should not be limited to a person of a specific religion, because public officials represent everyone in their community, not just a specific religious interest. In the famous case of Glassroth v. Moore, Alabama Chief Justice Roy Moore faced judicial ethics charges because he refused to remove a Ten Commandments monument from a government building after being ordered to do so. The monument was eventually removed. I do not agree with the ruling in this case. The Framers built their beliefs on Judeo-Christian philosophy, which is also shown in the fact that “Moses” and the “Ten Commandments” are depicted on the wall behind the Justices and Chief Justice in the Supreme Court. The monument wasn’t hurting anyone; it simply stated rules of morality shared by many cultures and by believers of many different religions.

Government officials are under the rule of law; therefore, they must follow the same regulations as anyone else in society. Corresponding to the rule of law, officials must also have the same rights as all other citizens, including the right to express their beliefs and philosophies in seeking the common good. There have also been times when state power has been used to favor religions and their associations. In the Supreme Court case of Bowen v. Kendrick (1988), a group of federal taxpayers, clergymen, and the American Jewish Congress filed suit against Otis R. Bowen, the Secretary of Health and Human Services, arguing that the Adolescent Family Life Act violated the Establishment Clause of the First Amendment. The Court allowed federal funds, made available under the newly enacted Adolescent Family Life Act, to support religious organizations offering counseling. This decision was justifiable because the money was not going to the organizations to further their religion, but to help the community.

More recently, President Bush created the Faith-Based and Community Initiatives program. It allows faith-based institutions to compete equally for federal funds, and it pushes for identifying and eliminating barriers that impede the full participation of people in need in the federal grants process, ensuring that federally funded social services administered by state and local governments are consistent with equal treatment provisions, and pursuing legislative efforts to extend charitable choice provisions that prevent discrimination against faith-based organizations. Finally, it aims to protect the religious freedom of people who receive aid and to preserve the religious hiring rights of faith-based charities. Now churches, synagogues, mosques, and other religious groups can get federal funding without being discriminated against simply because they are religious organizations. These institutions help the community and the general welfare of its people. Just as in Bowen v. Kendrick, this initiative is reasonable because it serves the common good. In conclusion, I believe that the First Amendment does not require the government to remain neutral between religions, as long as the natural and civic rights of all citizens of all religions are equally protected.

The government may favor specific religions and religious philosophies for the betterment of the welfare and morality of itself and the constituents it seeks to serve. I disagree with Justice Black’s interpretation of the First Amendment. He once said, “I am for the First Amendment from the first word to the last. I believe it means what it says.” I, however, believe this: “Congress shall make no law respecting an establishment of religion” is what the First Amendment says, and that is exactly what it means.


Should we fight against tort reform?

The controversy around tort reform has turned into a two-sided debate between citizens and corporations. The examination of various cases in recent years makes it clear that the effects of tort reform have proven to be negative for both sides. This issue continues today, as public relations campaigns and legislatures show a clear difference of opinion. In the event that tort reform occurs, victims and plaintiffs will be prevented from being fully compensated for the harm they suffered, making this process of the civil justice system unfair.

In the justice system, there are two forms of law: criminal law and civil law. The best-known form is probably criminal law, in which the government (the prosecutor) brings a case against a defendant regarding a crime that may or may not have been committed. In contrast, civil law has a plaintiff and a defendant who fight over a tort. As stated in the dictionary, a tort is “a wrongful act or an infringement of a right (other than under contract) leading to civil legal liability”. In essence, a tort in a civil case corresponds to a crime in a criminal case.

Tort reform refers to the passing of legislation, or the issuing of a court ruling, that limits in some way the rights of an injured person to seek compensation from the person who caused the accident (“The Problems…Reform”). Tort reform also includes subtopics such as public relations campaigns, caps on damages, judicial elections, and mandatory arbitration. Lawmakers across the United States have been heavily involved with tort reform since the 1950s, and it has only grown in popularity since then. Former President George W. Bush urged Congress to enact reform in 2005 and brought tort reform to the table like no other president.

The damages that are often referred to in civil lawsuits are economic damages and non-economic damages. Economic damages are any costs that result from the defendant’s actions, such as medical bills or repair costs. Non-economic damages refer to emotional stress, post-traumatic stress disorder, and other impacts not related to money. A cap on damages “limits the amount of non-economic damage compensation that can be awarded to a plaintiff” (US Legal Inc).

Caps on damages are the most common practice of tort reform. In New Mexico, Susan Seibert says that she was hospitalized for more than nine months because of a doctor’s error during her gynecological procedure. After suing, she was supposed to receive $2.6 million in damages, which was then reduced to $600,000 because of a cap on damages. Seibert still suffers from excessive debt as a result of not being given the amount of money she deserved. Caps on damages heavily impact the plaintiffs in a case. As mentioned earlier, plaintiffs sue because they need money in order to fully recover from the hardship they endured as a result of the defendant’s actions.

A type of tort reform that is not as well known is specialized medical courts. Currently, medical malpractice cases are heard by juries that have little to no medical background. This has worked well because it means that an unbiased verdict is reached. However, the organization Common Good is trying to establish special medical courts, in which the judge and jury would be trained medical professionals who would deeply evaluate the case. Advocates for these courts feel that people would be better compensated for what they really deserve. However, the majority of opinions on these courts are against the idea. Most of those who oppose the new system believe that it would put patients at a disadvantage: trained medical judges and juries would be more likely to side with the doctor, surgeon, or defendant than with the plaintiff. They believe that the fairest and most efficient way to judge medical malpractice cases is to use the existing civil justice system. One of the most famous medical malpractice cases, involving Dana Carvey, ended in a settlement, but it could have turned out much worse for Carvey if the judge and jury had been medical professionals. Carvey was receiving a double bypass and had a surgeon who operated on the wrong artery. If this case had gone to a medical court, it is easily predictable that the verdict would have been that the doctor made a “just” mistake. The jury would have said that this mistake was not easily preventable and was something that could have been assumed as a risk going into the surgery. However, this case did not go to court; rather, it ended in a $7.5 million settlement.

Another form of tort reform is mandatory arbitration. Mandatory arbitration, as described in the article “Mandatory Arbitration Agreements in Employment Contracts”, is “a contract clause that prevents a conflict from going to a judicial court”. This has affected many employees who have experienced sexual harassment, wage theft, racial discrimination, and more. Often, “employees signed so-called mandatory arbitration agreements that are the new normal in American workplaces” (Campbell). These agreements are buried under stacks of papers that have to be signed throughout the hiring process, and the new employee is required to sign them. Most of the time, these documents will not be called “Mandatory Arbitration Agreement”; rather, they may carry legalese names like “Alternative Dispute Resolution Agreement” (Campbell). “Between employee and employer, this means that any conflict must be solved through arbitration” (“Mandatory Arbitration Agreements in Employment Contracts”). When a conflict is solved through arbitration, “neutral arbiters” go through the evidence that the company and client present, and those arbiters decide what they think the just outcome should be, whether that is money, loss of a job, or something else. This decision is called the arbitration award.

A place where the effects of mandatory arbitration can be seen is the #MeToo movement. With the rise of this movement, more and more women have been coming out about their experiences with sexual harassment in the workplace. These women are then encouraged to fight against their harassers. Ultimately, many of them find out that they are not allowed to sue because of the mandatory arbitration agreements they signed during the hiring process. In fact, Debra S. Katz wrote an article for The Washington Post called “30 million women can’t sue their employer over harassment”, illustrating how widespread the issue is. Evidently, this form of tort reform affects the lives of over 30 million people. These women may be suffering from post-traumatic stress disorder, trauma, and more from their experiences with sexual harassment. If this form of tort reform is not abolished, more and more women will suffer from mandatory arbitration.

By limiting the amount of money and reparations that a defendant has to pay a plaintiff, tort reforms benefit major corporations. On the opposite side, however, the plaintiff suffers greatly from these limitations. In many cases, a plaintiff sues because they need the money to recover fully from the event that took place. For example, the documentary “Hot Coffee” discusses many tort cases in which the plaintiff suffered under the current rules regarding caps, mandatory arbitration, and more. Tort reform would further exacerbate the negatives of modern-day civil court cases.

Groups such as the American Tort Reform Association (ATRA) and Citizens Against Lawsuit Abuse (CALA) have also been active in fighting for tort reform. Alongside suspicions about these groups’ motives, other issues with tort reform, such as the fairness of caps on damages, have exposed inequity in the civil justice system. Supporters of tort reform have been rallying around a common goal: to limit citizens’ ability to use the litigation process, in order to protect businesses and companies.

In the event that tort reform occurs, victims and plaintiffs will be prevented from receiving the reparations they deserve for the hardship and suffering caused by the defendant’s actions. Caps on damages, special medical malpractice courts, and mandatory arbitration are just a few of the negative measures that tort reform would allow. Victims and plaintiffs sue the defendant to receive the full compensation they deserve. It is hard enough as it is to fight against these major corporations, and tort reform would make it even harder. Americans have the right to a fair trial, and the implementation of tort reform would take away that constitutionally given right. It is essential that Americans continue to fight against tort reform, as you never know if you may become the next victim.


Chinese suppression of Hong Kong

Would you fight for democracy? Its core principles are the beating heart of our society: providing us with representation, civil rights and freedom — empowering our nation to be just and egalitarian. However, whilst we cherish our flourishing democracy, we have blatantly ignored one of the most portentous democratic crises of our time. The protests in Hong Kong. Sparked by a proposed bill allowing extradition to mainland China, the protests have ignited the city’s desire for freedom, democracy and autonomy; and they have blazed into a broad pro-democracy movement, opposing Beijing’s callous and covert campaign to suppress legal rights in Hong Kong. But the spontaneity fueling these protests is fizzling out, as minor concessions fracture the leaderless movement. Without external assistance, this revolutionary campaign could come to nothing. Now, we, the West, must support protesters to fulfill our legal and moral obligations, and to safeguard other societies from the oppression Hong Kongers are suffering. The Chinese suppression of Hong Kong must be stopped.

Of all China’s crimes, its flagrant disregard for Hong Kong’s constitution is the most alarming. Before Hong Kong was returned to China in 1997, the British and Chinese governments signed the Sino-British Joint Declaration, allowing Hong Kong “a high degree of autonomy, except in foreign and defence affairs” until 2047. This is allegedly achieved through the “one country, two systems” model, currently implemented in Hong Kong. Nevertheless, the Chinese government — especially since Xi Jinping seized power in 2013 — is relentlessly continuing to erode legal rights in our former colony. For instance, in 2016, four pro-democracy lawmakers — despite being democratically elected — were disqualified from office. Amid the controversy surrounding the ruling lurked Beijing, using its invisible hand to crush the opposition posed by the lawmakers. However, it is China’s perversion of Hong Kong’s constitution, the Basic Law, that has the most pronounced and crippling effect upon the city. The Basic Law requires Hong Kong’s leader to be chosen “by universal suffrage upon nomination by a broadly representative nominating committee”; but this is strikingly disparate from reality. Less than seven percent of the electoral register are allowed to vote for representatives on the Election Committee — who actually choose Hong Kong’s leader — and no elections are held for vast swathes of seats, which are thus dominated by pro-Beijing officials. Is this really “universal suffrage”? Or a “broadly representative” committee? This “pseudo-democracy” is unquestionably a blatant violation of our agreement with China. If we continue to ignore the subversion of the fundamental constitution holding Hong Kong together, China’s grasp over a supposedly “autonomous” city will only strengthen. It is our legal duty to hold Beijing to account for these heinous contraventions of both Hong Kong’s constitution and the Joint Declaration — which China purports to uphold. Such despicable and brazen actions, whatever the pretence, cannot be allowed to continue.

The encroachment on their fundamental human rights is yet another travesty. Over the past few years, the Chinese government has been furtively extending its control over Hong Kong. Once, Hong Kongers enjoyed numerous freedoms and rights; now, they silently suffer. Beijing has an increasingly pervasive presence in Hong Kong, and, emboldened by a lack of opposition, it is beginning to repress anti-Chinese views. For example, five booksellers, associated with one Hong Kong publishing house, disappeared in late 2015. The reason? The publishing house was printing a book — which is legal in Hong Kong — regarding the love life of the Chinese president Xi Jinping. None of the five men were guilty; all five later appeared in custody in mainland China. One man even confessed on state television, obviously under duress, to an obscure crime he “committed” over a decade ago. This has cast a climate of paranoia over the city, which is already forcing artists to self-censor for fear of Chinese retaliation; if left unchecked, this erosion of free speech and expression will only worsen. Hong Kongers now live with uncertainty as to whether their views are “right” or “wrong”; is this morally acceptable to us? Such obvious infringements of the right to free speech are clear contraventions of the core human rights of people in Hong Kong. Furthermore, this crisis has escalated with the protests, entangling violence in the political confrontations. Police have indiscriminately used force to suppress both peaceful and violent protesters, with Amnesty International reporting “Hongkongers’ human rights situation has violations on almost every front”. The Chinese government is certainly behind the police’s ruthless response to protesters, manipulating its pawns in Hong Kong to quell dissent. This use of force cannot be tolerated; it is a barefaced oppression of a people who simply desire freedom, rights and democracy, and it contradicts every principle that our society is founded upon. If we continue abdicating responsibility for holding Beijing to account, who knows how far this crisis will deteriorate? Beijing’s oppression of Hong Kongers’ human rights will not disappear. Britain — as a UN member, former sovereign of Hong Kong and advocate for human rights — must make a stand with the protesters, who embody the principles of our country in its former colony.

Moreover, if we do not respond to these atrocities, tyrants elsewhere will only be emboldened to further strengthen their regimes. Oligarchs, autocrats and dictators are prevalent in our world today, with millions of people oppressed by totalitarian states. For instance, in India, the Hindu nationalist government, headed by Narendra Modi, unequivocally tyrannizes the people of Kashmir: severing connections to the internet, unlawfully detaining thousands of people and reportedly torturing dissidents. The sheer depravity of these atrocities is abhorrent. And the West’s reaction to these barbarities? We have lauded and extolled Modi as, in the words of then-President Barack Obama, “India’s reformer in chief”, apathetic to the outrages enacted by his government. This exemplifies our seeming lack of concern for other authoritarian regimes around the world: from our passivity towards the Saudi Arabian royal family’s oppressive oligarchy to our unconcern about the devilish dictatorship of President Erdoğan in Turkey. Our hypocrisy is irrefutable; this needs to change. The struggle in Hong Kong is a critical turning point in our battle against such totalitarian states. If we remain complacent, China will thwart the pro-democracy movement and Beijing will continue to subjugate Hong Kong unabashed. Consequently, tyrants worldwide will be emboldened to tighten their iron fists, furthering the repression of their peoples. But, if we support the protesters, we can institute a true democracy in Hong Kong. Thus, we will set a precedent for future democracies facing such turbulent struggles in totalitarian states, establishing an enduring stance for Western democracies to defend. But to achieve this, we must act decisively and immediately to politically pressure Beijing to make concessions, in order to create a truly autonomous Hong Kong.

Of course, the Chinese government is trying to excuse its actions. It claims to be merely maintaining order in a city of its own country, and that Western powers are fuelling the protests in Hong Kong. Such fabrications from Chinese spin-doctors are obviously propaganda. There is absolutely no evidence to corroborate the claim of “foreign agents” sparking violence in Hong Kong. And, whilst some protesters are employing aggressive tactics, their actions are justified: peaceful protests in the past, such as the Umbrella Movement of 2014, yielded no meaningful change. Protesters are being driven to violence by Beijing, which stubbornly refuses to propose any meaningful reforms.

Now, we face a decision, one which will have profound and far-reaching repercussions for all of humanity. Do we ignore the egregious crimes of the Chinese government, and in our complacency embolden tyrants worldwide? Or do we fight? Hong Kongers are enduring restricted freedoms, persecution and a perversion of their constitution; we must oppose this oppression resolutely. Is it our duty to support the protesters? Or, is democracy not worth fighting for?


Occurrence and prevalence of zoonoses in urban wildlife

A zoonosis is a disease that can be transmitted from animals to humans. Zoonoses in companion animals are known and described extensively. A lot of research has already been done: Rijks et al (2015), for example, list the 15 diseases of prime public health relevance, economic importance or both (Rijks(1)). Sterneberg-van der Maaten et al (2015) composed a list of the 15 priority zoonotic pathogens, which includes the rabies virus, Echinococcus granulosus, Toxocara canis/cati and Bartonella henselae (Sterneberg-van der Maaten(2)).

Although the research is extensive, the knowledge about zoonoses and hygiene among owners, health professionals and other related professions, such as pet shop employees, is low. According to Van Dam et al (2016)(3), 77% of pet shop employees do not know what a zoonosis is, and just 40% of pet shops have a protocol for hygiene and disease prevention. Only 27% of pet shops and shelters instruct their clients about zoonoses. It may therefore be assumed that the majority of the public is unaware of the health risks involving companion animals such as cats and dogs. Veterinarians give information about responsible pet ownership and the associated risks when the pet owner visits the clinic (Van Dam(3), Overgaauw(4)). In other words, knowledge obtained from research has not been disseminated effectively.

However, urban areas are not only populated with domestic animals. There is also a variety of non-domesticated animals living in close vicinity to domesticated animals and the human population: the so-called urban wildlife. Urban wildlife is defined as any animal that has not been domesticated or tamed and lives or thrives in an urban environment (freedictionary(5)). Just like companion animals, urban wildlife carries pathogens that are zoonotic, for example Echinococcus multilocularis, a parasite that can be transmitted from foxes to humans. Another example is the rabies virus, which is transmitted by hedgehogs and bats. Some zoonotic diseases can be transmitted to humans from different animals; Q-fever occurs in mice, foxes, rabbits and sometimes even in companion animals.

There is little knowledge about the risk factors that influence the transmission of zoonoses in urban areas (Mackenstedt(6)). This is mostly due to the lack of active surveillance of carrier animals. Such surveillance requires fieldwork, which is expensive and time-consuming, and often yields no immediate result for public-health authorities. This is why surveillance is often only initiated during or after an epidemic (Heyman(7)). Meredith et al (2015) mention that, due to the unavailability of a reliable serological test, it is not yet known for many species what their contribution is to transmission to humans (Meredith(8)).

The general public living in urban areas is largely unaware of the diseases transmitted by the urban wildlife present in their living area (Himsworth(9), Heyman(7), Dobay(10), Meredith(8)). All these diseases can also pose a risk to public health, and the public may need to be informed of these risks.

The aim of this study is to determine the occurrence and prevalence of zoonoses in urban wildlife. To do this, the ecological structure of a European city will be investigated first, to determine which wildlife lives in urban areas. Secondly, an overview of the most common and important zoonoses in companion animals will be discussed, followed by zoonoses in urban wildlife.

2. Literature review

2.1 Ecological structure of the city

Humans and animals live closely together in cities. Both companion animals and urban wildlife share the environment with humans. Companion animals are important to human society: they perform working roles (such as dogs for hearing- or visually-impaired people) and they play a role in human health and childhood development (Day(11)).

A distinction can be made between animals that live in the inner city and animals that live on the outskirts of the city. The animals that live in the majority of European inner cities are brown rats, house mice, bats, rabbits and different species of birds. Those living outside the stone inner city are other species of mice, hedgehogs, foxes and moles (Auke Brouwer(12)). In order to create safe passage for the latter group of animals, ecological structures are created. These structures include wet passageways for amphibians and snakes and dry passageways such as underground tunnels, special bridges and cattle grids (Spier M(13)).

A disadvantage of humans and animals living in close vicinity to each other is the possibility of transmitting diseases (Auke Brouwer(12)). Diseases can be transmitted from animals to humans in different ways, for example through eating infected food, inhalation of aerosols, via vectors or by fecal-oral contact (WUR(14)). The most relevant routes of transmission for this review are: indirect physical contact (e.g. contact with a contaminated surface), direct physical contact (touching an infected person or animal), transmission through skin lesions, fecal-oral transmission and airborne transmission (aerosols). In the following section, an overview of significant zoonoses of companion animals will be given. This information will enable a comparison with urban wildlife zoonoses later in this review.

2.2 Zoonoses of cats and dogs

There are many animals living in European cities, both companion animals and urban wildlife. 55-59% of Dutch households have one or more companion animals (van Dam(3)). This includes approximately 2 million dogs and 3 million cats (RIVM(15)). Across Europe there are approximately 61 million dogs and 66 million cats. Owning a pet has many advantages, but companion animals are also able to transmit diseases to humans (Day(11)). In the following section, significant zoonoses of companion animals will be described.

A. Bartonellosis (cat scratch disease)

Bartonellosis is an infection with Bartonella henselae or B. clarridgeiae. Most infections in cats are thought to be subclinical. If disease does occur, the symptoms are mild and self-limiting, characterized by lethargy, fever, gingivitis, uveitis and nonspecific neurological signs (Weese JS(16)). The seroprevalence in cats is 81% (Barmettler(17)).

Humans get infected by scratches or bites and sometimes by infected fleas and ticks. In the vast majority of cases, the infection is also mild and self-limiting. The clinical signs in humans include development of a papule at the site of inoculation, followed by regional lymphadenopathy and mild fever, generalized myalgia and malaise. This usually resolves spontaneously over a period of weeks to months (Weese JS(16)).

Few cases of human bartonellosis occur in The Netherlands. Based on laboratory diagnoses done by the RIVM, the bacterium causes 2 cases per 100.000 humans each year. However, this could be ten times higher, since the disease is usually mild and self-limiting, so most people do not visit a health care professional (RIVM(18)).

B. Leptospirosis

This disease is caused by the bacteria Leptospira interrogans. According to Weese et al (2002) leptospirosis is the most widespread zoonotic disease in the world. The bacteria can infect a wide range of animals (Weese(16)).

In dogs and cats, leptospirosis is a relatively minor zoonosis. It is not known exactly how many dogs are infected subclinically or asymptomatically each year, but according to Houwers et al (2009), around 10 cases occur in The Netherlands annually (Houwers(19)). RIVM states that each year 0,2 cases per 100.000 humans occur (RIVM(20)).

Infection in dogs is called Weil’s disease. Clinical signs can be peracute, acute, subacute or chronic. A peracute infection usually results in sudden death with few clinical signs. Dogs with an acute infection are icteric, have diarrhea, vomit and may experience peripheral vascular collapse. The subacute form is generally manifested as fever, vomiting, anorexia, polydipsia and dehydration, and in some cases severe renal disease can develop. Symptoms of a chronic infection are fever of unknown origin, unexplained renal failure, or hepatic disease and anterior uveitis. The majority of infections in dogs are subclinical or chronic. In cats, clinical disease is infrequent (Weese(16)).

According to Barmettler et al (2011), the risk of transmission of Leptospira from dogs to humans is just theoretical. All tested humans were exposed to infected dogs, but all were seronegative to the bacteria (Barmettler(17)).

The same bacterium that causes leptospirosis in dogs, Leptospira interrogans, is responsible for the disease in rats. This bacterium is considered the most widespread zoonotic pathogen in the world, and rats are the most common source of human infection, especially in urban areas (Himsworth(21)). According to the author, the bacterium asymptomatically colonizes the rat kidney and rats shed it via the urine (Himsworth(9)). The bacteria can survive outside the rat for some time, especially in a warm and humid environment (RIVM(20)).

People become infected through contact with urine, or through contact with contaminated soil or water (Himsworth(21)). The Leptospira bacteria can enter the body via mucous membranes or open wounds (Oomen(22)). The symptoms and severity of disease can be highly variable, ranging from asymptomatic to sepsis and death. Common complaints are headache, nausea, myalgia and vomiting. Moreover, neurologic, cardiac, respiratory, ocular and gastrointestinal manifestations can occur (Weese JS(16)).

The prevalence in rats differs between cities and even between locations in the same city. Himsworth (2013) states that in Vancouver 11% of the tested rats were positive for Leptospira (Himsworth(9)). Another study by Easterbrook (2007) found 65,3% of all tested rats in Baltimore to be positive for the bacteria (Easterbrook(23)). Krojgaard (2009) found a prevalence between 48% and 89% at different locations in Copenhagen (Krojgaard(24)).

C. Dermatophytosis (ringworm)

Dermatophytosis is a fungal dermatologic disease caused by Microsporum spp. or Trichophyton spp. It causes disease in a variety of animals (Weese(16)). According to Kraemer (2012), the dermatophytes that occur in rabbits are Trichophyton mentagrophytes and Microsporum canis, although the former is more common (Kraemer(25)).

Dermatophytes live in keratin layers of the skin and cause ringworm. They depend on human or animal infection for survival. Infection occurs through direct contact between dermatophyte arthrospores and keratinocytes/hairs. Transmission through indirect contact also occurs, for example through toiletries, furniture or clothes (Donnelly(26), RIVM(18)). Animals (especially cats) can transmit M. canis infection while remaining asymptomatic (Weese JS(16)).

The symptoms in both animals and humans can vary from mild or subclinical to severe lesions similar to pemphigus foliaceus (itching, alopecia and blistering). The skin lesions develop 1-3 weeks after infection (Weese JS). Healthy, intact skin cannot be infected, but only mild damage is required to make the skin susceptible to infection. No living tissue is invaded; only the keratinized stratum corneum is colonized. However, the fungus does induce an allergic and inflammatory eczematous response in the host (Donelly(26), RIVM(18)).

Dermatophytosis does not occur commonly in humans. RIVM states that each year 3000 per 100.000 humans get infected. Children between the ages of 4 and 7 are the most susceptible to the fungal infection. In cats and dogs, the prevalence of M. canis is much higher: 23,3% according to Seebacher(27). The prevalence in rabbits is 3,3% (d’Ovidio(28)).

D. Echinococcosis

Echinococcus granulosus can be transmitted from dogs to humans. Dogs are the definitive hosts, while herbivores or humans are the intermediate hosts. Dogs can become infected by eating infected organs, for example from sheep, pigs and cattle (RIVM(29)). The intermediate hosts develop a hydatid cyst with protoscoleces after ingesting eggs produced and excreted by definitive hosts. The protoscoleces evaginate in the small intestine and attach there (MacPherson(30)).

In most parts of Europe, Echinococcus granulosus occurs only occasionally. However, in Spain, Italy, Greece, Romania and Bulgaria the parasite is highly endemic.

Animals, either as definitive or as intermediate hosts, rarely show symptoms.

Humans, on the other hand, can show symptoms, depending on the size and site of the cyst and its growth rate. The disease can become life-threatening if a cyst in the lungs or liver bursts; in that case a possible complication is anaphylactic shock (RIVM(29)).

In the Netherlands, echinococcosis rarely occurs in humans. Between 1978 and 1991, 191 patients were diagnosed, but it is not known how many of these were new cases. The risk of infection is higher in the case of poor hygiene and living closely together with dogs (RIVM(29)). In a study done by Fotiou et al (2012) the prevalence of Echinococcus granulosus was 1,1% (Fotiou(31)). The prevalence in dogs is much higher: 10,6% according to Barmettler et al (17).

E. Toxocariasis

Toxocariasis is caused by Toxocara canis or Toxocara cati. Toxocara is present in the intestine of 32% of all tested dogs, 39% of tested cats and 16%-26% of tested red foxes (Luty(32), LETKOVÁ(33)). In dogs younger than 6 weeks the prevalence can be up to 80% (Kantere) and in kittens of 4-6 months old it can be 64% (Luty(32)). The host becomes infected by swallowing the parasite’s embryonated eggs (Kantere(34)).

Dogs and red foxes are the definitive hosts of T. canis, cats of T. cati (Luty(32)). Humans are paratenic hosts. After ingestion, the larvae hatch in the intestine and migrate all over the body via the blood vessels (visceral larva migrans). In young animals the migration occurs via the lungs and trachea. After being swallowed, the larvae mature in the intestinal tract.

In paratenic hosts and adult dogs that have some degree of acquired immunity, the larvae undergo somatic migration. There they remain as somatic larvae in the tissues. If dogs eat a Toxocara-infected paratenic host, larvae will be released and develop to adult worms in the intestinal tract (MacPherson(30)).

Humans can be infected by oral ingestion of infective eggs from contaminated soil, from unwashed hands or consumption of raw vegetables (MacPherson(30)).

The clinical symptoms in animals depend on the age of the animal and the number, location and developmental stage of the worms. After birth, puppies can suffer from pneumonia because of tracheal migration and die within 2-3 days. Two to three weeks after birth, puppies can show emaciation and digestive disturbance because of mature worms in the intestine and stomach. Clinical signs are diarrhea, constipation, coughing, nasal discharge and vomiting.

Clinical symptoms in adult dogs are rare (MacPherson(30)).

In most human cases following infection by small numbers of larvae, the disease occurs without symptoms. Children are the most commonly infected. VLM is mainly diagnosed in children of 1-7 years old. The symptoms can be general malaise, fever, abdominal complaints, wheezing or coughing. Severe clinical symptoms are mainly found in children of 1-3 years old.

Most of the larvae seem to be distributed to the brain and can cause neurological disease. Larvae do not migrate continuously. They rest periodically, and during such periods they induce an immunologically mediated inflammatory response (MacPherson(30)).

The prevalence in children is much lower than in adults: 7% and 20%, respectively. The risk of infection with Toxocara spp. increases with poor hygiene (Overgaauw(36)). In the external environment the eggs survive for months, and consequently toxocariasis represents a significant public health risk (Kantere(34)). High rates of soil contamination with Toxocara eggs have been demonstrated in parks, playgrounds, sandpits and other public places. Direct contact with infected dogs is not considered a potential risk for human infection, because embryonation to the stage of infectivity requires a minimum of 3 weeks (MacPherson(30)).

F. Toxoplasmosis

Toxoplasmosis is caused by the protozoan Toxoplasma gondii. Cats are the definitive hosts, and other animals and humans act as intermediate hosts. Infected cats excrete oocysts in the feces. These oocysts end up in the environment, where they are ingested by intermediate hosts (directly, or indirectly via food or water). In the intermediate host the protozoan migrates through the body until it lodges in tissue, where it becomes encapsulated and remains. Cats become infected by eating infected intermediate hosts.

Animals rarely show symptoms, although some young cats get diarrhea, encephalitis, hepatitis and pneumonia.

In most humans, infection is asymptomatic. Pregnant women can transmit the protozoan through the placenta and infect the unborn child. The symptoms in the child depend on the stage of pregnancy. An infection in the early stages leads to severe abnormalities and in many cases to abortion. If the infection occurs at a later stage, premature birth is seen, together with symptoms of an infectious disease (fever, rash, icterus, anemia and an enlarged spleen or liver). In most cases, however, the symptoms start after birth, with most damage done to the eyes (RIVM(37)).

Based on data from the RIVM and Overgaauw (1996), the disease that is most commonly transmitted to humans is toxoplasmosis. The prevalence was 40,5% in 1996. This number has declined over the last few decades, and Jones (2009) states that in 2009 the prevalence was 24,6% (Jones(38)). The prevalence rises with age, being 17,5% in humans younger than 20 years and 70% in humans of 65 years and older. There is no increased risk of infection for humans who keep a cat as a pet (RIVM(37)). Birgisdottir et al (2006) studied the prevalence in cats in Sweden, Estonia and Iceland. They found a prevalence of 54,9%, 23% and 9,8% in Estonia, Sweden and Iceland, respectively (Birgisdottir(39)).

G. Q-fever

The aetiological agent of Q-fever is the bacterium Coxiella burnetii. The bacterium has a very wide host range, including ruminants, birds and mammals such as small rodents, dogs, cats and horses. Accordingly, there is a complex reservoir system (Meredith(8)).

The extracellular form of the bacterium is very resistant; therefore, it can persist in the environment for several weeks. It can also be spread by the wind, so direct contact with animals is not required for infection. Coxiella burnetii is found in both humans and animals in the blood, lungs, spleen and liver, and during pregnancy in large quantities in the placenta and mammary glands. It is shed in urine and feces, and during pregnancy in the milk (Meredith(8)).

Humans that live close to animals (as in the city) have a higher risk of infection, since the mode of transmission is aerogenic or via direct contact. The bacterium is excreted through the urine, feces, placenta or amniotic fluid; after drying, it is spread aerogenically (RIVM(40)). Acute infection is characterized by atypical pneumonia and hepatitis and in some cases transient bacteraemia. The bacterium then spreads haematogenously, which results in infection of the liver, spleen, bone marrow, reproductive tract and other organs. This is followed by the formation of granulomatous lesions in the liver and bone marrow and the development of endocarditis involving the aortic and mitral valves (Woldehiwet(41)).

On the other hand, there is little information about the clinical signs of Q fever in animals, but variable degrees of granulomatous hepatitis, pneumonia, or bronchopneumonia have been reported in mice (Woldehiwet(41)). In pregnant animals, abortion or low foetal birth weight can occur (Meredith(8), Woldehiwet(41)).

The prevalence in the overall human population in Europe is not high (2,7%), but in risk groups such as veterinarians the prevalence can be as high as 83% (RIVM(40)).

Meredith et al have developed a modified indirect ELISA kit adapted for use in multiple species. They tested the prevalence of C. burnetii in wild rodents (bank vole, field vole and wood mouse), red foxes and domestic cats in the United Kingdom. The overall prevalence in the rodents was 17,3%. In cats it was 61,5% and in foxes 41,2% (Meredith(8)). In rabbits, the prevalence was 32,3% (González-Barrio(42)).

H. Pasteurellosis

Pasteurellosis is caused by Pasteurella multocida, a coccobacillus found in the oral, nasal and respiratory cavities of many species of animals (dogs, cats, rabbits, etc.). It is one of the most prevalent commensal and opportunistic pathogens in domestic and wild animals (Wilson(43), Giordano(44)). Human infections are associated with animal exposure, usually after animal bites or scratches (Giordano(44)). Kissing or licking of skin abrasions or mucosal surfaces by animals can also lead to infection. Transmission between animals is through direct contact with nasal secretions (Wilson(43)).

In both animals and humans, Pasteurella multocida causes chronic or acute infections that can lead to significant morbidity, with symptoms of pneumonia, atrophic rhinitis, cellulitis, abscesses, dermonecrosis, meningitis and/or hemorrhagic septicaemia. In animals the mortality is significant, but not in humans. This is probably due to the immediate prophylactic treatment of animal bite wounds with antibiotics (Wilson(43)).

Disease in animals appears as a chronic infection of the nasal cavity, paranasal sinuses, middle ears, lacrimal and thoracic ducts of the lymph system and lungs. Primary infections with respiratory viruses or Mycoplasma species predispose to a Pasteurella infection (Wilson(43)).

The incidence in humans is 0,19 cases per 100.000 humans (Nseir(45)). The prevalence in dogs and cats is 25-42% (Mohan(46)). The only known prevalence in rabbits is 29,8%, found in laboratory animal facilities (Kawamoto(47)).

The majority of the human population lives in cities. As a result, in some countries the urban landscape encompasses more than half of the land surface. This leaves little space for wildlife species living in the countryside. Some species are nowadays found more in urban areas than in their native environment; they have adapted to urban ecosystems. This is a positive aspect for biodiversity in cities. On the other hand, just like companion animals, this urban wildlife can transmit disease to humans (Dearborn(49)). In the following section, significant zoonoses of urban wildlife will be described.

A. Zoonoses of rats

The following zoonoses occur in urban rats: leptospirosis (see 2.2B) and rat bite fever.

Rat bite fever

Rat bite fever is caused by Streptobacillus moniliformis or Spirillum minus (Chafe(50)). These bacteria are part of the normal oropharyngeal flora of the rat and are thought to be present in rat populations worldwide.

Since the bacteria are part of the normal flora, the rats are not susceptible to the bacteria. In people, on the other hand, the bacteria can cause rat bite fever. The transmission occurs through the bite of an infected rat and through ingestion of contaminated food. The latter causes Haverhill fever.

The clinical symptoms are fever, chills, headache, vomiting, polyarthritis and skin rash. In Haverhill fever pharyngitis and vomiting may be more pronounced. If not treated, S. moniliformis infection can progress to septicemia with a mortality rate of 7-13% (Himsworth(21)).

The prevalence of Streptobacillus spp. in rats is 25% (Gaastra(51)). According to Trucksis et al (2016), rat bite fever is very rare in humans; only a few cases occur each year (Trucksis(52)).

B. Zoonoses of mice

The zoonotic diseases that occur in mice are: hantaviruses, lymphocytic choriomeningitis, tularemia and Q-fever (see 2.2G).

Hantaviruses

There are different types of hantaviruses, each carried by a specific rodent host species. In Europe, three types occur: Puumala virus (PUUV), carried by the bank vole; Dobrava virus (DOBV), carried by the yellow-necked mouse; and Saaremaa virus (SAAV), carried by the striped field mouse (Heyman(7)). SAAV has been found in Estonia, Russia, South-Eastern Finland, Germany, Denmark, Slovenia and Slovakia. PUUV is very common in Finland, Northern Sweden, Estonia, the Ardennes Forest Region, parts of Germany, Slovenia and parts of European Russia. DOBV has been found in the Balkans, Russia, Germany, Estonia and Slovakia (Heyman(7)).

Hantaviruses are transmitted via direct and indirect contact. Infective particles are secreted in feces, urine and saliva (Kallio(53)).

The disease is asymptomatic in mice (Himsworth(21)). Humans, on the other hand, do develop symptoms. All types of hantavirus cause hemorrhagic fever with renal syndrome (HFRS), but they differ in severity. HFRS is characterized by acute onset, fever, headache, abdominal pains, backache, temporary renal insufficiency and thrombocytopenia. In DOBV the extent of hemorrhages, the requirement for dialysis treatment, hypotension and case-fatality rates are much higher than in PUUV or SAAV. Mortality is very low (approximately 0,1%) (Heyman(7)).

Hantaviruses are an endemic zoonosis in Europe; tens of thousands of people get infected each year (Heyman(7)). The prevalence in mice is 9,5% (Sadkowska(54)).

Lymphocytic choriomeningitis

Lymphocytic choriomeningitis is a viral disease caused by an arenavirus (Chafe(50)). The natural reservoirs of arenaviruses are rodent species, which are asymptomatically infected (Oldstone(55)).

In humans the disease is characterized by varying signs, from inapparent infection to acute, fatal meningoencephalitis. The disease is transmitted through mouse bites and through material contaminated with excretions and secretions of infected mice (Chafe(50)).

The virus causes little or no toxicity to the infected cells. The disease, and the associated cell and tissue injury, are caused mostly by the activity of the host’s immune system: the antiviral response produces factors that act against the infected cells and damage them. Another factor is the displacement, by viral proteins, of cellular molecules that are normally attached to cellular receptors. This could result in conformational changes, which cause the cell membrane to become fragile and interfere with normal signalling events (Oldstone(55)).

The prevalence of lymphocytic choriomeningitis in humans is 1,1% (Lledó(56)). In mice, the prevalence is 2,4% (Forbes(57)).

Tularemia

Tularemia is caused by the bacterium Francisella tularensis. Only a few animal outbreaks have been reported, and so far only one outbreak in wildlife has been closely monitored (Dobay(10)). The bacterium can infect a large number of animal species. Outbreaks among mammals and humans are rare. However, outbreaks can occur when the source of infection is widely spread and/or many people or animals are exposed. Outbreaks are difficult to monitor and trace, because mostly wild rodents and lagomorphs are affected (Dobay(10)).

People get infected in five ways: ingestion, direct contact with a contaminated source, inhalation, arthropod intermediates and animal bites. In animals the route of transmission is not yet known. The research of Dobay et al (2015) suggests that tularemia can cause severe outbreaks in small rodents such as house mice. Such an outbreak is self-exhausting in approximately three months, so no treatment is needed (Dobay(10)).

Tularemia is a potentially lethal disease. There are different clinical manifestations, depending on the route of infection. The ulceroglandular form is the most common and occurs after handling contaminated sources. The oropharyngeal form can be caused by ingestion of contaminated food or water. The pulmonary, typhoidal, glandular and ocular forms occur less frequently (Dobay(10), Anda(58)).

In humans, the symptoms of the glandular and ulceroglandular forms are cervical, occipital, axillary or inguinal lymphadenopathy. The symptoms of pneumonic tularemia are fever, cough and shortness of breath (Weber(59)). Clinical manifestations of the oropharyngeal form include adenopathies of the elbow, the armpit or both, cutaneous lesions, fever, malaise, chills and shivering, a painful sore throat with swollen tonsils and enlarged cervical lymph nodes (Sahn(60), Anda(58)).

The clinical features in animals are unspecific and the pathological effects vary substantially between different animal species and geographical locations. The disease can be very acute (for example in highly susceptible species like mice), with development of sepsis, liver and spleen enlargement and pinpoint white foci in the affected organs. The subacute form can be found in moderately susceptible species such as hares; the symptoms are granulomatous lesions in the lungs, pericardium and kidneys.

Infected animals are usually easy to catch, moribund or even dead (Maurin(61)).

Rossow et al (2015) state that the prevalence in humans is 2% (Rossow(62)). The highest prevalence found in small mammals during an outbreak in Central Europe is 3,9% (Gurycová(63)).

C. Zoonoses of foxes

The zoonoses that can be transmitted from foxes to humans are Q-fever (see 2.2G), toxocariasis (see 2.2E) and Echinococcus multilocularis.

Echinococcus multilocularis

This is considered one of the most serious parasitic zoonoses in Europe. Red foxes are the main definitive hosts. The natural intermediate hosts are voles, but many animals can act as accidental hosts, for example monkeys, humans, pigs and dogs. The larval stage of Echinococcus multilocularis causes alveolar echinococcosis (AE). The infection is widely distributed in foxes, with a prevalence of 70% in some areas. RIVM states that the prevalence in The Netherlands is 10-13%. The prevalence in humans differs throughout Europe and is related to the prevalence in foxes: if the prevalence in foxes is high, the prevalence in humans increases. However, no prevalence higher than 0,81 per 100.000 inhabitants has been reported (RIVM(29)). Foxes living in urban areas pose a threat to public health, and there is concern that this risk may rise due to the suspected geographical spread of the parasite (Conraths(64)).

In foxes the helminth colonizes the intestines, but it does not cause disease. In intermediate and accidental hosts, cysts are formed after oral intake of eggs excreted by foxes, which causes AE. The size, site and growth rate of the larval stage determine the symptoms. Most of the time, infection starts in the liver, causing local abnormalities. The larvae then grow invasively into other organs and blood vessels. It can take five to fifteen years before clear symptoms show (RIVM(29)). In humans AE is a very rare disease, but incidences have increased in recent years.

D. Zoonoses of rabbits

The zoonoses that can be transmitted from rabbits to humans are: pasteurellosis (see 2.2H), tularemia (see 2.3B), Q fever (see 2.2G), dermatophytosis (see 2.2C) and cryptosporidiosis.

Cryptosporidiosis

Cryptosporidium is a protozoan parasite. It is considered the most important zoonotic pathogen causing diarrhea in humans and animals. In rabbits, Cryptosporidium cuniculus (the rabbit genotype) is the most common genotype (Zhang(65)). Two large studies have been done in rabbits; they showed a prevalence between 0,0% and 0,9% (Robinson(66)).

The risks of cryptosporidiosis from wildlife for public health are poorly understood. No studies of the host range and biological features of the Cryptosporidium rabbit genotype were identified. However, human-infectious Cryptosporidium species (including Cryptosporidium parvum) have caused experimental infections in rabbits, and there is some evidence that this occurs naturally (Robinson(66)).

In humans and neonatal animals, the pathogen causes gastroenteritis, chronic diarrhea or even severe diarrhea (Zhang(65), Robinson(66)). In >98% of these cases, the disease is caused by C. hominis or C. parvum, but recently the rabbit genotype has emerged as a human pathogen. Little is known yet about this genotype, because only a few cases in humans have been reported (Robinson(66)). Since few isolates have been found in humans and little is known about human infection with the Cryptosporidium rabbit genotype, Robinson et al (2008) assumed this genotype is insignificant to public health and that further investigation is needed (Robinson(67)).

E. Zoonoses of hedgehogs

Hedgehogs pose a risk for a number of potential zoonotic diseases, for example microbial infections such as Salmonella spp., Yersinia pseudotuberculosis and Mycobacterium marinum, as well as dermatophytosis.

Salmonellosis

Salmonellosis is the most important zoonotic disease in hedgehogs. The prevalence of Salmonella in hedgehogs is 18.9%. The infection can be either asymptomatic or symptomatic. Hedgehogs that do show symptoms can display anorexia, diarrhea and weight loss. Humans become infected through ingestion of the bacteria after handling a hedgehog or contact with its feces (Riley(68)).

The Salmonella serotypes that are associated with hedgehogs are S. tilene and S. typhimurium (Woodward(69), Riley(68)).

Clinical manifestations in humans (mainly adults) of both serotypes involve self-limiting gastroenteritis (including headache, malaise, nausea, fever, vomiting, abdominal pain and diarrhea (Woodward(69))), but bacteremia and localized or endovascular infections may also occur (Crum Cianflone(70)). Infection with S. typhimurium and S. tilene is rare in humans, at approximately 0.057 per 100,000 inhabitants (CDC(71)).

Yersinia pseudotuberculosis

No clinical symptoms of Yersinia pseudotuberculosis infection in hedgehogs are described in the literature. However, this bacterium causes gastroenteritis in humans, characterized by a self-limiting mesenteric lymphadenitis that mimics appendicitis. Complications can occur, including erythema nodosum and reactive arthritis (Riley(68)). Since only Riley et al (2005) have reported a case concerning Y. pseudotuberculosis, no information is available yet about the prevalence in hedgehogs or humans, or about the route of transmission, although Riley et al (2005) claim that the zoonosis occurs commonly (Riley(68)).

Mycobacterium marinum

Mycobacterium marinum infection is not common in hedgehogs. The bacterium causes systemic mycobacteriosis. Its porte d'entrée is a wound or abrasion in the skin, after which it spreads systemically through the lymphatic system. This is also the way in which hedgehogs transmit the bacterium to humans: the spines of the hedgehog can cause wounds through which the bacterium can enter. Symptoms in humans consist of clusters of papules or superficial nodules, which can be painful (Riley(68)). No information is reported regarding the prevalence of the bacterium in hedgehogs or humans.

Dermatophytosis

Dermatophytosis has been seen in hedgehogs. The most commonly isolated dermatophyte is Trichophyton mentagrophytes var. erinacei; Microsporum spp. have also been reported. Lesions in the hedgehog are similar to those in other species: nonpruritic, dry, scaly skin with bald patches and spine loss. Hedgehogs can also be asymptomatic carriers, which poses a risk of zoonotic transmission (Riley(68)).

In humans, Trichophyton mentagrophytes var. erinacei causes a local rash with pustules at the edges and an intensely irritating, thickened area in the centre of the lesion. This usually resolves spontaneously after 2-3 weeks (Riley(68)).

Few cases of Trichophyton mentagrophytes var. erinacei have been reported (Pierard-Franchimont(72), Schauder(73), Keymer(74)), but no prevalence is known for humans and hedgehogs.

F. Zoonoses of bats

According to Calisher et al (2009), bat viruses that are proven to cause highly pathogenic disease in humans are the rabies virus and related lyssaviruses, Nipah and Hendra viruses, and SARS-CoV-like virus (Calisher(75)). Only the first group is relevant for this review, since Nipah and Hendra do not occur in Europe (Munir(76)) and SARS is not directly transmitted to humans (Hu(77)).

Rabies virus and related lyssaviruses

The rabies virus is present in the saliva of infected animals. Accordingly, the virus is transmitted from mammals to humans through bites (Calisher(75)).

Symptoms are similar in animals and humans. The disease starts with a prodromal stage, in which symptoms are non-specific and consist of fever, itching and pain near the site of the bite wound.

The furious stage follows. Clinical features are hydrophobia (violent inspiratory muscle spasms, hyperextension and anxiety after attempts to drink), hallucinations, fear, aggression, cardiac tachyarrhythmias, paralysis and coma.

The final stage is the paralytic stage. It is characterized by ascending paralysis and loss of tendon reflexes, sphincter dysfunction, bulbar/respiratory paralysis, sensory symptoms, fever, sweating, gooseflesh and fasciculation.

Untreated, the disease is fatal within approximately five days of the first symptoms appearing (Warrell(78)).

Lyssaviruses from bats are related to the rabies virus. There are seven lyssavirus genotypes. Some of these cause a disease in humans similar to rabies, whereas others do not cause disease. Although it is still unclear, transmission is thought to occur through bites (Calisher(75)).

Since 1977, four cases of human rabies resulting from a bat bite have been reported in The Netherlands. In bats living there, the prevalence is 7% (RIVM).


Sickle-cell conditions

NORMAL HEMOGLOBIN STRUCTURE:

Hemoglobin is present in erythrocytes and is important for normal oxygen delivery to tissues. Hemoglobinopathies are disorders affecting the structure, function or production of hemoglobin.

Different hemoglobins are produced during embryonic, fetal and adult life. Each consists of a tetramer of globin polypeptide chains: a pair of α-like chains 141 amino acids long and a pair of β-like chains 146 amino acids long. The major adult hemoglobin, HbA, has the structure α2β2. HbF (α2γ2) predominates during most of gestation, and HbA2 (α2δ2) is the minor adult hemoglobin.

Each globin chain surrounds a single heme moiety, consisting of a protoporphyrin IX ring complexed with a single iron atom in the ferrous state (Fe2+). Each heme moiety can bind a single oxygen molecule; a molecule of hemoglobin can transport up to four oxygen molecules as each hemoglobin contains four heme moieties.

The amino acid sequences of the various globins are highly homologous to one another and each has a highly helical secondary structure. Their globular tertiary structures cause the exterior surfaces to be rich in polar (hydrophilic) amino acids that enhance solubility and the interior to be lined with nonpolar groups, forming a hydrophobic pocket into which heme is inserted. Numerous tight interactions (i.e., α1β1 contacts) hold the α and β chains together. The complete tetramer is held together by interfaces (i.e., α1β2 contacts) between the α-like chain of one dimer and the non-α chain of the other dimer. The hemoglobin tetramer is highly soluble, but individual globin chains are insoluble. (Unpaired globin precipitates, forming inclusions that damage the cell and can trigger apoptosis. Normal globin chain synthesis is balanced so that each newly synthesized α or non-α globin chain will have an available partner with which to pair.)

FUNCTION OF HEMOGLOBIN:

Solubility and reversible oxygen binding are the two important functions that are deranged in hemoglobinopathies. Both depend mostly on the hydrophilic surface amino acids, the hydrophobic amino acids lining the heme pocket, a key histidine in the F helix and the amino acids forming the α1β1 and α1β2 contact points. Mutations in these strategic regions alter oxygen affinity or solubility.

The principal function of Hb is the transport of oxygen and its delivery to tissue, which is represented most appropriately by the oxygen dissociation curve (ODC).

Fig: The well-known sigmoid shape of the oxygen dissociation curve (ODC), which reflects the allosteric properties of haemoglobin.

Hemoglobin binds with O2 efficiently at the partial pressure of oxygen (Po2) of the alveolus, retains it in the circulation and releases it to tissues at the Po2 of tissue capillary beds. The shape of the curve is due to co-operativity between the four haem molecules. When one takes up oxygen, the affinity for oxygen of the remaining haems of the tetramer increases dramatically. This is because haemoglobin can exist in two configurations – deoxy (T) and oxy (R). The T form has a lower affinity than the R form for ligands such as oxygen.

Oxygen affinity is controlled by several factors. The Bohr effect (e.g. oxygen affinity is decreased with increasing CO2 tension) is the ability of hemoglobin to deliver more oxygen to tissues at low pH. The major small molecule that alters oxygen affinity in humans is 2,3-bisphosphoglycerate (2,3-BPG; formerly 2,3-DPG), which lowers oxygen affinity when bound to hemoglobin. HbA has a reasonably high affinity for 2,3-BPG. HbF does not bind 2,3-BPG, so it tends to have a higher oxygen affinity in vivo. Increased levels of 2,3-BPG, with an associated increase in P50 (the partial pressure at which haemoglobin is 50 per cent saturated), occur in anaemia, alkalosis, hyperphosphataemia, hypoxic states and in association with a number of red cell enzyme deficiencies.
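
To make the sigmoid shape of the ODC and the meaning of P50 concrete, the curve can be approximated with the Hill equation. The sketch below is a minimal illustration only: the Hill coefficient of 2.7, the baseline P50 of 26 mmHg, the right-shifted P50 of 32 mmHg and the arterial and tissue PO2 values are conventional illustrative assumptions, not figures taken from this text.

# Minimal sketch of the oxygen dissociation curve using the Hill equation.
# Assumed, illustrative values: Hill coefficient n ~ 2.7; baseline P50 ~ 26 mmHg;
# a right-shifted curve (more 2,3-BPG, lower affinity) with P50 ~ 32 mmHg.

def saturation(po2_mmhg: float, p50_mmhg: float = 26.0, n: float = 2.7) -> float:
    """Fractional hemoglobin O2 saturation at a given PO2 (Hill approximation)."""
    return po2_mmhg ** n / (p50_mmhg ** n + po2_mmhg ** n)

if __name__ == "__main__":
    for p50 in (26.0, 32.0):                  # baseline vs. right-shifted curve
        arterial = saturation(100.0, p50)     # alveolar/arterial PO2 ~100 mmHg
        tissue = saturation(40.0, p50)        # tissue capillary PO2 ~40 mmHg
        print(f"P50={p50:>4.0f} mmHg: SaO2={arterial:.2f}, "
              f"tissue SO2={tissue:.2f}, fraction released={arterial - tissue:.2f}")

Run as written, the right-shifted curve releases a larger fraction of bound oxygen at tissue PO2, which is the point made above about 2,3-BPG and the Bohr effect.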

Thus proper oxygen transport depends on the tetrameric structure of the proteins, the proper arrangement of hydrophilic and hydrophobic amino acids and interaction with protons or 2,3-BPG.

GENETICS OF HEMOGLOBIN:

The human hemoglobins are encoded in two tightly linked gene clusters; the α-like globin genes are clustered on chromosome 16, and the β-like genes on chromosome 11. The α-like cluster consists of two α-globin genes and a single copy of the ζ gene. The non-α gene cluster consists of a single ε gene, the Gγ and Aγ fetal globin genes, and the adult δ and β genes.

DEVELOPMENTAL BIOLOGY OF HUMAN HEMOGLOBINS:

Red cells first appearing at about 6 weeks after conception contain the embryonic hemoglobins Hb Portland (ζ2γ2), Hb Gower I (ζ2ε2) and Hb Gower II (α2ε2). At 10-11 weeks, fetal hemoglobin (HbF; α2γ2) becomes predominant, and synthesis of adult hemoglobin (HbA; α2β2) occurs at about 38 weeks. Fetuses and newborns therefore require α-globin but not β-globin for normal gestation. Small amounts of HbF are produced during postnatal life. A few red cell clones called F cells are progeny of a small pool of immature committed erythroid precursors (BFU-e) that retain the ability to produce HbF. Profound erythroid stresses, such as severe hemolytic anemias, bone marrow transplantation, or cancer chemotherapy, cause more of the F-potent BFU-e to be recruited. HbF levels thus tend to rise in some patients with sickle cell anemia or thalassemia. This phenomenon probably explains the ability of hydroxyurea to increase levels of HbF in adults; agents such as butyrate and histone deacetylase inhibitors can also partially activate fetal globin genes after birth.

HEMOGLOBINOPATHIES:

Hemoglobinopathies are disorders affecting the structure, function or production of hemoglobin. These conditions are usually inherited and range in severity from asymptomatic laboratory abnormalities to death in utero. Different forms may present as hemolytic anemia, erythrocytosis, cyanosis or vaso-occlusive stigmata.

Structural hemoglobinopathies occur when mutations alter the amino acid sequence of a globin chain, altering the physiologic properties of the variant hemoglobins and producing the characteristic clinical abnormalities. The most clinically relevant variant hemoglobins polymerize abnormally as in sickle cell anemia or exhibit altered solubility or oxygen-binding affinity.

Thalassemia syndromes arise from mutations that impair production or translation of globin mRNA, leading to deficient globin chain biosynthesis. Clinical abnormalities are attributable to the inadequate supply of hemoglobin and imbalances in the production of individual globin chains, leading to premature destruction of erythroblasts and RBCs. Thalassemic hemoglobin variants combine features of thalassemia (e.g., abnormal globin biosynthesis) and of structural hemoglobinopathies (e.g., an abnormal amino acid sequence).

Hereditary persistence of fetal hemoglobin (HPFH) is characterized by synthesis of high levels of fetal hemoglobin in adult life. Acquired hemoglobinopathies include modifications of the hemoglobin molecule by toxins (e.g., acquired methemoglobinemia) and clonal abnormalities of hemoglobin synthesis (e.g., high levels of HbF production in preleukemia and α thalassemia in myeloproliferative disorders).

There are five major classes of hemoglobinopathies.

Classification of hemoglobinopathies:

1. Structural hemoglobinopathies: hemoglobins with altered amino acid sequences that result in deranged function or altered physical or chemical properties
   A. Abnormal hemoglobin polymerization: HbS, hemoglobin sickling
   B. Altered O2 affinity
      1. High affinity: polycythemia
      2. Low affinity: cyanosis, pseudoanemia
   C. Hemoglobins that oxidize readily
      1. Unstable hemoglobins: hemolytic anemia, jaundice
      2. M hemoglobins: methemoglobinemia, cyanosis
2. Thalassemias: defective biosynthesis of globin chains
   A. α Thalassemias
   B. β Thalassemias
   C. δβ, γδβ, αβ Thalassemias
3. Thalassemic hemoglobin variants: structurally abnormal Hb associated with a coinherited thalassemic phenotype
   A. HbE
   B. Hb Constant Spring
   C. Hb Lepore
4. Hereditary persistence of fetal hemoglobin: persistence of high levels of HbF into adult life
5. Acquired hemoglobinopathies
   A. Methemoglobin due to toxic exposures
   B. Sulfhemoglobin due to toxic exposures
   C. Carboxyhemoglobin
   D. HbH in erythroleukemia
   E. Elevated HbF in states of erythroid stress and bone marrow dysplasia

GENETICS OF SICKLE HEMOGLOBINOPATHY:

This genetic disorder is due to the mutation of a single nucleotide, from a GAG to a GTG codon on the coding strand, which is transcribed from the template strand into a GUG codon. Based on the genetic code, the GAG codon translates to glutamic acid, while the GUG codon translates to valine at position 6. This is normally a benign mutation, causing no apparent effects on the secondary, tertiary, or quaternary structures of hemoglobin under conditions of normal oxygen concentration. Under conditions of low oxygen concentration, however, the deoxy form of hemoglobin exposes a hydrophobic patch on the protein between the E and F helices. The hydrophobic side chain of the valine residue at position 6 of the beta chain is able to associate with this hydrophobic patch, causing hemoglobin S molecules to aggregate and form fibrous precipitates. HbS also exhibits changes in solubility and molecular stability.

These properties are responsible for the profound clinical expressions of the sickling syndromes.
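
The single-nucleotide change described above can be traced through transcription and translation in a few lines of code. The sketch below is illustrative only: the tiny codon table covers just the two codons involved and is not a full genetic-code implementation.

# Sketch of the sickle mutation at codon 6 of the beta-globin gene.
# Only the relevant codons are included in this toy table (illustration only).

CODON_TABLE = {"GAG": "Glu", "GUG": "Val"}

def transcribe(coding_dna: str) -> str:
    """The mRNA carries the coding-strand sequence with T replaced by U."""
    return coding_dna.replace("T", "U")

def translate_codon(codon_rna: str) -> str:
    return CODON_TABLE[codon_rna]

wild_type_codon6 = "GAG"   # coding strand, beta-globin codon 6
sickle_codon6 = "GTG"      # A -> T substitution (GAG -> GTG)

for name, codon in (("HbA", wild_type_codon6), ("HbS", sickle_codon6)):
    mrna = transcribe(codon)
    print(f"{name}: coding DNA {codon} -> mRNA {mrna} -> {translate_codon(mrna)} at beta-6")
    # HbA: Glu at beta-6; HbS: Val at beta-6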

HbSS disease or sickle cell anemia (the most common form) – Homozygous for the S globin, usually with a severe or moderately severe phenotype and the shortest survival
HbS/β0 thalassemia – Double heterozygote for HbS and β0 thalassemia; clinically indistinguishable from sickle cell anemia (SCA)
HbS/β+ thalassemia – Mild-to-moderate severity with variability in different ethnicities
HbSC disease – Double heterozygote for HbS and HbC, characterized by moderate clinical severity
HbS/hereditary persistence of fetal Hb (HbS/HPFH) – Very mild or asymptomatic phenotype
HbS/HbE syndrome – Very rare, with a phenotype usually similar to HbS/β+ thalassemia
Rare combinations of HbS with other abnormal hemoglobins such as HbD Los Angeles, G-Philadelphia and HbO Arab

Sickle-cell conditions have an autosomal recessive pattern of inheritance. The types of hemoglobin a person makes in the red blood cells depend on which hemoglobin genes are inherited from his or her parents. If one parent has sickle-cell anaemia and the other has sickle-cell trait, then the child has a 50% chance of having sickle-cell disease and a 50% chance of having sickle-cell trait. When both parents have sickle-cell trait, a child has a 25% chance of sickle-cell disease, a 25% chance of carrying no sickle-cell alleles, and a 50% chance of the heterozygous condition.
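
These inheritance probabilities can be reproduced with a small Punnett-square calculation. The sketch below assumes a single autosomal locus with alleles A (normal beta-globin) and S (sickle), which is all the paragraph above relies on.

# Punnett-square sketch for sickle-cell inheritance at a single autosomal locus.
# Alleles: "A" = normal beta-globin, "S" = sickle. Genotypes: "SS" = disease,
# "AS" = trait (carrier), "AA" = unaffected non-carrier.
from collections import Counter
from itertools import product

def offspring_distribution(parent1: str, parent2: str) -> dict:
    """Probability of each offspring genotype given two parental genotypes."""
    counts = Counter("".join(sorted(a + b)) for a, b in product(parent1, parent2))
    total = sum(counts.values())
    return {genotype: n / total for genotype, n in counts.items()}

# Both parents carry sickle-cell trait (AS x AS):
print(offspring_distribution("AS", "AS"))  # {'AA': 0.25, 'AS': 0.5, 'SS': 0.25}

# One parent with sickle-cell anaemia, one with trait (SS x AS):
print(offspring_distribution("SS", "AS"))  # {'AS': 0.5, 'SS': 0.5}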

The allele responsible for sickle-cell anemia is located on the short arm of chromosome 11, more specifically at 11p15.5. A person who receives the defective gene from both father and mother develops the disease; a person who receives one defective and one healthy allele remains healthy but can pass on the disease and is known as a carrier or heterozygote. Several sickle syndromes occur as the result of inheritance of HbS from one parent and another hemoglobinopathy, such as β thalassemia or HbC (α2β2 6 Glu→Lys), from the other parent. The prototype disease, sickle cell anemia, is the homozygous state for HbS.

PATHOPHYSIOLOGY:

The sickle cell syndromes are caused by a mutation in the β-globin gene that changes the sixth amino acid from glutamic acid to valine. HbS (α2β2 6 Glu→Val) polymerizes reversibly when deoxygenated to form a gelatinous network of fibrous polymers that stiffen the RBC membrane, increase viscosity, and cause dehydration due to potassium leakage and calcium influx. These changes also produce the sickle shape. The loss of red blood cell elasticity is central to the pathophysiology of sickle-cell disease. Sickled cells lose the flexibility needed to traverse small capillaries. They possess altered 'sticky' membranes that are abnormally adherent to the endothelium of small venules.

Repeated episodes of sickling damage the cell membrane and decrease the cell’s elasticity. These cells fail to return to normal shape when normal oxygen tension is restored. As a consequence, these rigid blood cells are unable to deform as they pass through narrow capillaries, leading to vessel occlusion and ischaemia.

These abnormalities stimulate unpredictable episodes of microvascular vasoocclusion and premature RBC destruction (hemolytic anemia). The rigid adherent cells clog small capillaries and venules, causing tissue ischemia, acute pain, and gradual end-organ damage. This venoocclusive component usually influences the clinical course.

The anaemia of the illness is caused by hemolysis, which occurs because the spleen destroys the abnormal RBCs on detecting their altered shape. Although the bone marrow attempts to compensate by creating new red cells, it cannot match the rate of destruction. Healthy red blood cells typically function for 90-120 days, but sickled cells last only 10-20 days.

Clinical Manifestations of Sickle Cell Anemia:

Patients with sickling syndromes suffer from hemolytic anemia, with hematocrits from 15 to 30%, and significant reticulocytosis. Anemia was once thought to exert protective effects against vasoocclusion by reducing blood viscosity. The role of adhesive reticulocytes in vasoocclusion might account for these paradoxical effects.

Granulocytosis is common. The white count can fluctuate substantially and unpredictably during and between painful crises, infectious episodes, and other intercurrent illnesses.

Vasoocclusion causes protean manifestations, including episodes of ischemic pain (i.e., painful crises) and ischemic malfunction or frank infarction in the spleen, central nervous system, bones, joints, liver, kidneys and lungs.

Syndromes caused by sickle hemoglobinopathy:

Painful crises: Intermittent episodes of vasoocclusion in connective and musculoskeletal structures produce ischemia manifested by acute pain and tenderness, fever, tachycardia and anxiety. These episodes are recurrent and are the most common clinical manifestation of sickle cell anemia. Their frequency and severity vary greatly. Pain can develop almost anywhere in the body and may last from a few hours to 2 weeks.

Repeated crises requiring hospitalization (>3 episodes per year) correlate with reduced survival in adult life, suggesting that these episodes are associated with accumulation of chronic end-organ damage. Provocative factors include infection, fever, excessive exercise, anxiety, abrupt changes in temperature, hypoxia, or hypertonic dyes.

Acute chest syndrome: Distinctive manifestation characterized by chest pain, tachypnea, fever, cough, and arterial oxygen desaturation. It can mimic pneumonia, pulmonary emboli, bone marrow infarction and embolism, myocardial ischemia, or lung infarction. Acute chest syndrome is thought to reflect in situ sickling within the lung, producing pain and temporary pulmonary dysfunction. Pulmonary infarction and pneumonia are the most common underlying or concomitant conditions in patients with this syndrome. Repeated episodes of acute chest pain correlate with reduced survival. Acutely, reduction in arterial oxygen saturation is especially ominous because it promotes sickling on a massive scale. Chronic acute or subacute pulmonary crises lead to pulmonary hypertension and cor pulmonale, an increasingly common cause of death in patients.

Aplastic crisis: A serious complication is the aplastic crisis. This is caused by infection with Parvovirus B-19 (B19V). This virus causes fifth disease, a normally benign childhood disorder associated with fever, malaise, and a mild rash. This virus infects RBC progenitors in bone marrow, resulting in impaired cell division for a few days. Healthy people experience, at most, a slight drop in hematocrit, since the half-life of normal erythrocytes in the circulation is 40-60 days. In people with SCD however, the RBC lifespan is greatly shortened (usually 10-20 days), and a very rapid drop in Hb occurs. The condition is self-limited, with bone marrow recovery occurring in 7-10 days, followed by brisk reticulocytosis.
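
The reason a few days of parvovirus-induced marrow arrest matters so much more in SCD can be shown with a rough decay calculation. The sketch below simply treats red cell loss as first-order decay using the survival figures quoted above; treating the normal 40-60 day half-life as about 50 days, and crudely using the 10-20 day sickle-cell lifespan as an effective half-life of about 15 days, are illustrative simplifications, so the outputs are order-of-magnitude only.

# Rough sketch: red cell loss during a transient arrest of erythropoiesis,
# modelled as first-order decay. Half-life values are illustrative assumptions
# based on the survival ranges quoted in the text.

def fraction_remaining(days_without_production: float, half_life_days: float) -> float:
    return 0.5 ** (days_without_production / half_life_days)

arrest = 7.0  # days of absent production before marrow recovery (7-10 days in the text)
for label, half_life in (("normal RBC", 50.0), ("sickle RBC", 15.0)):
    lost = 1.0 - fraction_remaining(arrest, half_life)
    print(f"{label}: ~{lost * 100:.0f}% of red cell mass lost after {arrest:.0f} days")
# normal RBC: ~9% lost  -> only a slight drop in hematocrit
# sickle RBC: ~28% lost -> a rapid, clinically significant fall in Hb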

CNS sickle vasculopathy: Chronic subacute central nervous system damage in the absence of an overt stroke is a distressingly common phenomenon beginning in early childhood. Stroke is especially common in children and may recur; it is less common in adults, in whom it is often hemorrhagic. Stroke affects 30% of children and 11% of patients by 20 years of age. It is usually ischemic in children and hemorrhagic in adults.

Modern functional imaging techniques have demonstrated circulatory dysfunction of the CNS; these changes correlate with cognitive and behavioral abnormalities in children and young adults. It is important to be aware of these changes because they can complicate clinical management or be misinterpreted as 'difficult patient' behaviors.

Splenic sequestration crisis: The spleen enlarges in the latter part of the first year of life in children with SCD. Occasionally, the spleen undergoes a sudden, very painful enlargement due to pooling of large numbers of sickled cells; this phenomenon is known as splenic sequestration crisis. Over time, the spleen becomes fibrotic and shrinks, causing autosplenectomy. In cases of SC trait, the splenomegaly may persist up to adulthood due to ongoing hemolysis under the influence of persistent fetal hemoglobin.

Acute venous obstruction of the spleen, a rare occurrence in early childhood, may require emergency transfusion and/or splenectomy to prevent trapping of the entire arterial output in the obstructed spleen. Repeated microinfarction can destroy tissues with microvascular beds; thus, splenic function is frequently lost within the first 18-36 months of life, causing susceptibility to infection, particularly by pneumococci.

Infections: Life-threatening bacterial infections are a major cause of morbidity and mortality in patients with SCD. Recurrent vaso-occlusion induces splenic infarctions and consequent autosplenectomy, predisposing to severe infections with encapsulated organisms (eg, Haemophilus influenzae, Streptococcus pneumoniae).

Cholelithiasis: Cholelithiasis is common in children with SCD, as chronic hemolysis with hyperbilirubinemia is associated with the formation of bile stones. Cholelithiasis may be asymptomatic or result in acute cholecystitis, requiring surgical intervention. The liver may also become involved, and cholecystitis or common bile duct obstruction can occur. A child with cholecystitis presents with right upper quadrant pain, especially in association with fatty food. Common bile duct blockage is suspected when a child presents with right upper quadrant pain and dramatically elevated conjugated hyperbilirubinemia.

Leg ulcers: Leg ulcers are a chronic painful problem. They result from minor injury to the area around the malleoli. Because of relatively poor circulation, compounded by sickling and microinfarcts, healing is delayed and infection occurs frequently.

Eye manifestation: Occlusion of retinal vessels can produce hemorrhage, neovascularization, and eventual detachments.

Renal manifestation: Renal manifestations include impaired urinary concentrating ability, defects of urinary acidification, defects of potassium excretion and a progressive decrease in glomerular filtration rate with advancing age. Recurrent hematuria, proteinuria, renal papillary necrosis and end-stage renal disease (ESRD) are all well recognized.

Renal papillary necrosis invariably produces isosthenuria. More widespread renal necrosis leads to renal failure in adults, a common late cause of death.

Bone manifestation: Bone and joint ischemia can lead to aseptic necrosis, common in the femoral or humeral heads; chronic arthropathy; and unusual susceptibility to osteomyelitis, which may be caused by organisms, such as Salmonella, rarely encountered in other settings.

The hand-foot syndrome (dactylitis) is caused by painful infarcts of the digits.

Pregnancy in SCD: Pregnancy represents a special area of concern. There is a high rate of fetal loss due to spontaneous abortion. Placenta previa and abruption are common due to hypoxia and placental infarction. At birth, the infant often is premature or has a low birth weight.

Other features: A particularly painful complication in males is priapism, due to infarction of the penile venous outflow tracts; permanent impotence may also occur. Chronic lower leg ulcers probably arise from ischemia and superinfection in the distal circulation.

Sickle cell syndromes are remarkable for their clinical heterogeneity. Some patients remain virtually asymptomatic into or even through adult life, while others suffer repeated crises requiring hospitalization from early childhood. Patients with sickle thalassemia and sickle-HbE tend to have similar, slightly milder symptoms, perhaps because of the ameliorating effects of production of other hemoglobins within the RBC.

Clinical Manifestations of Sickle Cell Trait:

Sickle cell trait is often asymptomatic. Anemia and painful crises are rare. An uncommon but highly distinctive symptom is painless hematuria, often occurring in adolescent males, probably due to papillary necrosis. Isosthenuria is a more common manifestation of the same process. Sloughing of papillae with ureteral obstruction has also been seen, as have massive sickling and sudden death due to exposure to high altitude or extremes of exercise and dehydration.

Pulmonary hypertension in sickle hemoglobinopathy:

In recent years, PAH, a proliferative vascular disease of the lung, has been recognized as a major complication and an independent correlate of death among adults with SCD. Pulmonary hypertension is defined as a mean pulmonary artery pressure >25 mmHg, and includes pulmonary artery hypertension, pulmonary venous hypertension or a combination of both. The etiology is multifactorial, including hemolysis, hypoxemia, thromboembolism, chronically high cardiac output, and chronic liver disease. The clinical presentation is characterized by dyspnea, chest pain, and syncope. It is important to note that a high cardiac output can itself elevate pulmonary artery pressure, adding to the complex and multifactorial pathophysiology of PHT in sickle cell disease. If left untreated, the disease carries a high mortality rate, with the most common cause of death being decompensated right heart failure.

Prevalence and prognosis:

Echocardiographic screening studies have suggested that the prevalence of hemoglobinopathy-associated PAH is much higher than previously known. In SCD, approximately one-third of adult patients have an elevated tricuspid regurgitant jet velocity (TRV) of 2.5 m/s or higher, a threshold that correlates in right heart catheterization studies with a pulmonary artery systolic pressure of at least 30 mm Hg. Even though this threshold represents quite mild pulmonary hypertension, SCD patients with a TRV above it have a 9- to 10-fold higher risk of early mortality than those with a lower TRV. It appears that the baseline compromised oxygen delivery and co-morbid organ dysfunction of SCD diminish the physiological reserve to tolerate even modest pulmonary arterial pressures.
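
The link between the TRV threshold and the quoted pulmonary artery systolic pressure follows from the modified Bernoulli relation used in Doppler echocardiography, PASP ≈ 4·TRV² + right atrial pressure. The sketch below is a minimal illustration; the assumed right atrial pressure of 5 mmHg is a conventional default, not a figure from this text.

# Sketch: estimating pulmonary artery systolic pressure (PASP) from the tricuspid
# regurgitant jet velocity (TRV) via the modified Bernoulli equation.
# The right atrial pressure (RAP) of 5 mmHg is an assumed conventional value.

def estimated_pasp(trv_m_per_s: float, rap_mmhg: float = 5.0) -> float:
    """PASP ~ 4 * TRV^2 + RAP (modified Bernoulli)."""
    return 4.0 * trv_m_per_s ** 2 + rap_mmhg

for trv in (2.5, 3.0):
    print(f"TRV {trv} m/s -> estimated PASP ~ {estimated_pasp(trv):.0f} mmHg")
# TRV 2.5 m/s -> ~30 mmHg, matching the threshold quoted above
# TRV 3.0 m/s -> ~41 mmHg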

Pathogenesis:

Different hemolytic anemias seem to involve common mechanisms for development of PAH. These processes probably include hemolysis, causing endothelial dysfunction, oxidative and inflammatory stress, chronic hypoxemia, chronic thromboembolism, chronic liver disease, iron overload, and asplenia.

Hemolysis results in the release of hemoglobin into plasma, where it reacts and consumes nitric oxide (NO) causing a state of resistance to NO-dependent vasodilatory effects. Hemolysis also causes the release of arginase into plasma, which decreases the concentration of arginine, substrate for the synthesis of NO. Other effects associated with hemolysis that can contribute to the pathogenesis of pulmonary hypertension are increased cellular expression of endothelin, production of free radicals, platelet activation, and increased expression of endothelial adhesion mediating molecules.

Previous studies suggest that splenectomy (surgical or functional) is a risk factor for the development of pulmonary hypertension, especially in patients with hemolytic anemias. It is speculated that the loss of the spleen increases the circulation of platelet mediators and senescent erythrocytes that result in platelet activation (promoting endothelial adhesion and thrombosis in the pulmonary vascular bed), and possibly stimulates the increase in the intravascular hemolysis rate.

Vasoconstriction, vascular proliferation, thrombosis, and inflammation appear to underlie the development of PAH. In long-standing PH, intimal proliferation and fibrosis, medial hypertrophy, and in situ thrombosis characterize the pathologic findings in the pulmonary vasculature. Vascular remodeling at earlier stages may be confined to the small pulmonary arteries. As the disease advances, intimal proliferation and pathologic remodeling progress, resulting in decreased compliance and increased elastance of the pulmonary vasculature.

The outcome is a progressive increase in the right ventricular afterload or total pulmonary vascular resistance (PVR) and, thus, right ventricular work.

Chronic pulmonary involvement due to repeated episodes of acute thoracic syndrome can lead to pulmonary fibrosis and chronic hypoxemia, which can eventually lead to the development of pulmonary hypertension.

Coagulation disorders, such as low levels of protein C, low levels of protein S, high levels of D-dimers and increased activity of tissue factor, occur in patients with sickle cell anemia. This hypercoagulable state can cause thrombosis in situ or pulmonary thromboembolism, which occurs in patients with sickle cell anemia and other hemolytic anemias.

Clinical manifestations:

On examination, there may be evidence of right ventricular failure with elevated jugular venous pressure, lower extremity edema, and ascites. The cardiovascular examination may reveal an accentuated P2 component of the second heart sound, a right-sided S3 or S4, and a holosystolic tricuspid regurgitant murmur. It is also important to seek signs of the diseases that are often concurrent with PH: clubbing may be seen in some chronic lung diseases, sclerodactyly and telangiectasia may signify scleroderma, and crackles and systemic hypertension may be clues to left-sided systolic or diastolic heart failure.

Diagnostic evaluation:

The diagnosis of pulmonary hypertension in patients with sickle cell anemia is typically difficult. Dyspnea on exertion, the symptom most typically associated with pulmonary hypertension, is also very common in anemic patients. Other disorders with similar symptomatology, such as left heart failure or pulmonary fibrosis, frequently occur in patients with sickle cell anemia. Patients with pulmonary hypertension are often older, have higher systemic blood pressure, more severe hemolytic anemia, lower peripheral oxygen saturation, worse renal function, impaired liver function and a higher number of red blood cell transfusions than do patients with sickle cell anemia and normal pulmonary pressure.

The diagnostic evaluation of patients with hemoglobinopathies and suspected of having pulmonary hypertension should follow the same guidelines established for the investigation of patients with other causes of pulmonary hypertension.

Echocardiography: Echocardiography is important for the diagnosis of PAH and often essential for determining the cause. All forms of PAH may demonstrate a hypertrophied and dilated right ventricle with elevated estimated pulmonary artery systolic pressure. Important additional information can be obtained about specific etiologies such as valvular disease, left ventricular systolic and diastolic function, intracardiac shunts, and other cardiac diseases.

An echocardiogram is a screening test, whereas invasive hemodynamic monitoring is the gold standard for diagnosis and assessment of disease severity.

Pulmonary artery (PA) systolic pressure (PASP) can be estimated by Doppler echocardiography, utilizing the tricuspid regurgitant velocity (TRV). Increased TRV is estimated to be present in approximately one-third of adults with SCD and is associated with early mortality. In the more severe cases, increased TRV is associated with histopathologic changes similar to atherosclerosis such as plexogenic changes and hyperplasia of the pulmonary arterial intima and media.

The cardiopulmonary exercise test (CPET): This test may help to identify a true physiologic limitation as well as differentiate between cardiac and pulmonary causes of dyspnea, but it can only be performed if the patient has reasonable functional capacity. If this test is normal, there is no indication for a right heart catheterization.

Right Heart Catheterization: If the patient has a cardiovascular limitation to exercise, a right heart catheterization should be performed. Right heart catheterization with pulmonary vasodilator testing remains the gold standard both to establish the diagnosis of PH and to enable selection of appropriate medical therapy. The definition of precapillary PH or PAH requires (1) an increased mean pulmonary artery pressure (mPAP ≥25 mmHg); (2) a pulmonary capillary wedge pressure (PCWP), left atrial pressure, or left ventricular end-diastolic pressure ≤15 mmHg; and (3) a PVR >3 Wood units. Postcapillary PH is differentiated from precapillary PH by a PCWP >15 mmHg; it is further differentiated into passive, based on a transpulmonary gradient <12 mmHg, or reactive, based on a transpulmonary gradient >12 mmHg and an increased PVR. In either case, the CO may be normal or reduced. If the echocardiogram or cardiopulmonary exercise test (CPET) suggests PH, the diagnosis should be confirmed by catheterization.
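
These hemodynamic criteria translate directly into a simple calculation on catheterization data. The sketch below computes PVR in Wood units as (mPAP - PCWP) / CO and applies the cut-offs quoted above; the example pressures and cardiac outputs are invented purely for illustration.

# Sketch of the hemodynamic classification described above.
# PVR (Wood units) = (mPAP - PCWP) / CO; cut-offs follow the text:
# precapillary PH: mPAP >= 25 mmHg, PCWP <= 15 mmHg, PVR > 3 Wood units;
# postcapillary PH: mPAP >= 25 mmHg, PCWP > 15 mmHg, subdivided by the
# transpulmonary gradient (TPG = mPAP - PCWP): passive if <12, reactive if >12.

def classify_ph(mpap: float, pcwp: float, cardiac_output: float) -> str:
    pvr = (mpap - pcwp) / cardiac_output   # Wood units
    tpg = mpap - pcwp                      # transpulmonary gradient, mmHg
    if mpap < 25:
        return f"no PH (mPAP {mpap} mmHg, PVR {pvr:.1f} WU)"
    if pcwp <= 15 and pvr > 3:
        return f"precapillary PH (PVR {pvr:.1f} WU)"
    if pcwp > 15:
        kind = "passive" if tpg < 12 else "reactive"
        return f"postcapillary PH, {kind} (TPG {tpg:.0f} mmHg, PVR {pvr:.1f} WU)"
    return f"borderline hemodynamics (PVR {pvr:.1f} WU)"

# Invented example values (mPAP and PCWP in mmHg; cardiac output in L/min):
print(classify_ph(mpap=35, pcwp=10, cardiac_output=5.0))  # precapillary PH
print(classify_ph(mpap=35, pcwp=20, cardiac_output=6.0))  # postcapillary PH, reactive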

Chest imaging and lung function tests: These are essential because lung disease is an important cause of PH. Signs of PH that may be evident on chest x-ray include enlargement of the central pulmonary arteries associated with 'vascular pruning,' a relative paucity of peripheral vessels. Cardiomegaly, with specific evidence of right atrial and ventricular enlargement, may be present. The chest x-ray may also demonstrate significant interstitial lung disease or suggest hyperinflation from obstructive lung disease, which may be the underlying cause of, or a contributor to, the development of PH.

High-resolution computed tomography (CT): Classic findings of PH on CT include those found on chest x-ray: enlarged pulmonary arteries, peripheral pruning of the small vessels, and enlarged right ventricle and atrium. High-resolution CT may also show signs of venous congestion including centrilobular ground-glass infiltrate and thickened septal lines. In the absence of left heart disease, these findings suggest pulmonary veno-occlusive disease, a rare cause of PAH that can be quite challenging to diagnose.

CT angiograms: Commonly used to evaluate acute thromboembolic disease and have demonstrated excellent sensitivity and specificity for that purpose.

Ventilation-perfusion scanning: This is done for screening because of its high sensitivity and its role in qualifying patients for surgical intervention. A negative scan virtually rules out CTEPH, whereas some cases may be missed when only CT angiograms are used.

Pulmonary function tests: An isolated reduction in DLco is the classic finding in PAH; results of pulmonary function tests may also suggest restrictive or obstructive lung diseases as the cause of dyspnea or PH.

Evaluation of symptoms and functional capacity (6 Min walk test): Although the 6-minute walk test has not been validated in patients with hemoglobinopathies, preliminary data suggest that this test correlates well with maximal oxygen uptake and with the severity of pulmonary hypertension in patients with sickle cell anemia. In addition, in these patients, the distance covered on the 6-minute walk test significantly improves with the treatment of pulmonary hypertension, which suggests that it can be used in this population.

DYSLIPIDEMIA IN SICKLE HEMOGLOBINOPATHY:

Disorders of lipoprotein metabolism are known as 'dyslipidemias.' Dyslipidemias are generally characterized clinically by increased plasma levels of cholesterol, triglycerides, or both, accompanied by reduced levels of HDL cholesterol. Most patients with dyslipidemia are at increased risk for ASCVD, which is the primary reason for making the diagnosis, as intervention may reduce this risk. Patients with elevated levels of triglycerides may be at risk for acute pancreatitis and require intervention to reduce this risk.
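
Lipid panels of the kind discussed in this section usually report measured TC, HDL-C and triglycerides, with LDL-C commonly estimated by the Friedewald equation. The sketch below is general lipid-laboratory background offered for orientation rather than anything specific to SCD, and the example concentrations are invented.

# Sketch: Friedewald estimate of LDL cholesterol from a standard lipid panel,
# with all concentrations in mg/dL. LDL-C = TC - HDL-C - TG/5; the TG/5 term
# approximates VLDL cholesterol and is unreliable when TG exceed ~400 mg/dL.

def friedewald_ldl(total_chol: float, hdl: float, triglycerides: float) -> float:
    if triglycerides > 400:
        raise ValueError("Friedewald estimate is not valid for TG > 400 mg/dL")
    return total_chol - hdl - triglycerides / 5.0

# Invented example values for illustration only:
print(friedewald_ldl(total_chol=130, hdl=40, triglycerides=150))  # 60.0 mg/dL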

Although hundreds of proteins affect lipoprotein metabolism and may interact to produce dyslipidemia in an individual patient, there are a limited number of discrete 'nodes' that regulate lipoprotein metabolism. These include:

(1) assembly and secretion of triglyceride-rich VLDLs by the liver;

(2) lipolysis of triglyceride-rich lipoproteins by LPL;

(3) receptor-mediated uptake of apoB-containing lipoproteins by the liver;

(4) cellular cholesterol metabolism in the hepatocyte and the enterocyte; and

(5) neutral lipid transfer and phospholipid hydrolysis in the plasma.

Hypocholesterolemia and, to a lesser extent, hypertriglyceridemia have been documented in SCD cohorts worldwide for over 40 years, yet the mechanistic basis and physiological ramifications of these altered lipid levels have yet to be fully elucidated. Cholesterol (TC, HDL-C and LDL-C) levels decrease and triglyceride levels increase in relation to the severity of anemia. While this is not true for cholesterol levels, triglyceride levels show a strong correlation with markers of severity of hemolysis, endothelial activation, and pulmonary hypertension.

Decreased TC and LDL-C in SCD has been documented in virtually every study that examined lipids in SCD adults (el-Hazmi, et al 1987, el-Hazmi, et al 1995, Marzouki and Khoja 2003, Sasaki, et al 1983, Shores, et al 2003, Stone, et al 1990, Westerman 1975), with slightly more variable results in SCD children. Although it might be hypothesized that SCD hypocholesterolemia results from increased cholesterol utilization during the increased erythropoiesis of SCD, cholesterol is largely conserved through the enterohepatic circulation, at least in healthy individuals, and biogenesis of new RBC membranes would likely use recycled cholesterol from the hemolyzed RBCs. Westerman demonstrated that hypocholesterolemia was not due merely to increased RBC synthesis by showing that it is present in both hemolytic and non-hemolytic anemia (Westerman 1975). He also reported that serum cholesterol was proportional to the hematocrit, suggesting serum cholesterol may be in equilibrium with the cholesterol reservoir of the total red cell mass (Westerman 1975). Consistent with such equilibration, tritiated cholesterol incorporated into sickled erythrocytes is rapidly exchanged with plasma lipoproteins (Ngogang, et al 1989). Thus, low plasma cholesterol appears to be a consequence of anemia itself rather than of increased RBC production (Westerman 1975).

Total cholesterol, in particular LDL-C, has a well-established role in atherosclerosis. The low levels of LDL-C in SCD are consistent with the low levels of total cholesterol and the virtual absence of atherosclerosis among SCD patients. Decreased HDL-C in SCD has also been documented in some previous studies (Sasaki, et al 1983, Stone, et al 1990). As in lipid studies for other disorders in which HDL-C is variably low, potential reasons for inconsistencies between studies include differences in age, diet, weight, smoking, gender, small sample sizes, different ranges of disease severity, and other diseases and treatments (Choy and Sattar 2009, Gotto A 2003). Decreased HDL-C and apoA-I are known risk factors for endothelial dysfunction in the general population and in SCD, and a potential contributor to PH in SCD, although the latter effect size might be small (Yuditskaya, et al 2009).

In addition, triglyceride levels have been reported to increase during crisis. Why is increased triglyceride, but not cholesterol, in serum associated with vascular dysfunction and pulmonary hypertension? Studies in atherosclerosis have firmly established that lipolysis of oxidized LDL in particular results in vascular dysfunction. Lipolysis of triglycerides present in triglyceride-rich lipoproteins releases neutral and oxidized free fatty acids that induce endothelial cell inflammation (Wang, et al 2009). Many oxidized fatty acids are more damaging to the endothelium than their non-oxidized precursors; for example, 13-hydroxy octadecadienoic acid (13-HODE) is a more potent inducer of ROS activity in HAECs than linoleate, the nonoxidized precursor of 13-HODE (Wang, et al 2009). Lipolytic generation of arachidonic acid, eicosanoids, and inflammatory molecules leading to vascular dysfunction is a well-established phenomenon (Boyanovsky and Webb 2009). Although LDL-C levels are decreased in SCD patients, LDL from SCD patients is more susceptible to oxidation and cytotoxicity to endothelium (Belcher, et al 1999), and an unfavorable plasma fatty acid composition has been associated with clinical severity of SCD (Ren, et al 2006). Lipolysis of phospholipids in lipoproteins or cell membranes by secretory phospholipase A2 (sPLA2) family members releases similarly harmful fatty acids, particularly in an oxidative environment (Boyanovsky and Webb 2009), and in fact selective PLA2 inhibitors are currently under development as potential therapeutic agents for atherosclerotic cardiovascular disease (Rosenson 2009). Finally, sPLA2 activity has been linked to lung disease in SCD. sPLA2 is elevated in acute chest syndrome of SCD and, in conjunction with fever, preliminarily appears to be a good biomarker for diagnosis, prediction and prevention of acute chest syndrome (Styles, et al 2000). The deleterious effects of phospholipid hydrolysis on the lung vasculature predict similar deleterious effects of triglyceride hydrolysis, particularly in the oxidatively stressed environment of SCD.

Elevated triglycerides have been documented in autoimmune inflammatory diseases with increased risk of vascular dysfunction and pulmonary hypertension, including systemic lupus erythematosus, scleroderma, rheumatoid arthritis, and mixed connective tissue diseases(Choy and Sattar 2009, Galie, et al 2005). In fact, triglyceride concentration is a stronger predictor of stroke than LDL-C or TC(Amarenco and Labreuche 2009). Even in healthy control subjects, a high-fat meal induces oxidative stress and inflammation, resulting in endothelial dysfunction and vasoconstriction(O’Keefe, et al 2008). Perhaps having high levels of plasma triglycerides promotes vascular dysfunction, with the clinical outcome of vasculopathy mainly in the coronary and cerebral arteries in the general population, and with more targeting to the pulmonary vascular bed in SCD and autoimmune diseases.

The mechanisms leading to hypocholesterolemia and hypertriglyceridemia in the plasma or serum of SCD patients are not completely understood. In normal individuals, triglyceride levels are determined to a significant degree by body weight, diet and physical exercise, as well as concurrent diabetes. Diet and physical exercise very likely affect body weight and triglyceride levels in SCD patients also. These findings indicate that standard risk factors for high triglycerides are also relevant to SCD patients. Mechanisms of SCD-specific risk factors for elevated plasma triglycerides are not as clear. RBCs do not have de novo lipid synthesis (Kuypers 2008). In SCD the rate of triglyceride synthesis from glycerol is elevated up to 4-fold in sickled reticulocytes (Lane, et al 1976), but SCD patients have defects in postabsorptive plasma homeostasis of fatty acids (Buchowski, et al 2007). Lipoproteins and albumin in plasma can contribute fatty acids to red blood cells for incorporation into membrane phospholipids (Kuypers 2008), but RBC membranes are not triglyceride-rich, and contributions of RBCs to plasma triglyceride levels have not been described. Interestingly, chronic intermittent or stable hypoxia from exposure to high altitude alone, with no underlying disease, is sufficient to increase triglyceride levels in healthy subjects (Siques, et al 2007). Thus, it has also been suggested that hypoxia in SCD may contribute at least partially to the observed increase in serum triglyceride. Finally, there is a known link between low cholesterol and increased triglycerides in any primate acute phase response, such as infection and inflammation (Khovidhunkit, et al 2004). Perhaps because of their chronic hemolysis, SCD patients have a low-grade acute phase response, which is also consistent with their other inflammatory markers. Further studies are required to elucidate the mechanisms leading to hypocholesterolemia and hypertriglyceridemia in SCD.

Pulmonary hypertension is a disease of the vasculature that shows many similarities with the vascular dysfunction that occurs in coronary atherosclerosis (Kato and Gladwin 2008). Both involve proliferative vascular smooth muscle cells, just in different vascular beds. Both show an impaired nitric oxide axis, increased oxidant stress, and vascular dysfunction. Most importantly, serum triglyceride levels, previously linked to vascular dysfunction, have been shown to correlate with NT-proBNP and TRV and thus with pulmonary hypertension. Moreover, triglyceride levels are predictive of TRV independent of systolic blood pressure, low transferrin or increased lactate dehydrogenase.

PAH in SCD is also characterized by oxidant stress, but in SCD patients plasma total cholesterol (TC) and low density lipoprotein cholesterol (LDL-C) are low. There have been some reports of low HDL cholesterol (HDL-C)17,18 and increased triglyceride in SCD patients, features widely recognized as important contributory factors in cardiovascular disease. These findings, and the therapeutic potential to modulate serum lipids with several commonly used drugs, prompted us to investigate in greater detail the serum lipid profile in patients with sickle hemoglobinopathy (SH) coming to our hospital and its possible relationship to vasculopathic complications such as PAH.


Gender and Caste – The Cry for Identity of Women

INTRODUCTION

‘Bodies are not just biological phenomena but a complex social creation onto which meanings have been variously composed and imposed according to time and space’. These social creations differentiate the two biological persons into Man and Woman, and meanings are imposed on their qualities on the basis of gender, which defines them as He and She.

The question then arises: a woman, who is she? According to me, a woman is one who is empowered, enlightened, enthusiastic and energetic. A woman is all about sharing. She is an exceptional personality who encourages and embraces. If a woman is considered a mark of patience and courage, then why, even today, is there a lack of identity in her personality? She is subordinated to man and often discriminated against on the basis of gender.

The entire life of a woman revolves around patriarchal existence: she is dominated by her father in childhood, by her husband in the next phase of her life, and by her son in the later phase, which leaves no space for her own independence.

The psychological and physical identity of a woman is defined through the role and control of men: the terrible triad of father, husband and son. The boundary of women is always restrained by male dominance. Gender discrimination is not only a historical concept; it still exists in contemporary Indian society.

Indian society, in every part of its existence, experiences ferocious gender conflict, which is projected every day in the daily newspapers, on news channels, and even on the streets. The horror of patriarchal domination exists in every corner of Indian society. The status of Indian women has been declining over the centuries.

Turning the pages of history, in pre-Aryan India God was female and life was represented in the form of mother Earth. People worshipped the mother Goddess through fertility symbols. The Shakti cult of Hinduism regards women as the source and embodiment of cosmic power and energy. Woman power is also shown through the Goddess Durga, who lured her husband Shiva from asceticism.

The religious and social condition changed abruptly when the Aryan Brahmins eliminated the Shakti cult and power was placed in the hands of men. They considered the male deities the husbands of the female goddesses, establishing male dominance. Marriage became an instrument of male control over female sexuality. Even the identity of the mother goddess was dominated by the male gods. As Mrinal Pande writes, ‘to control women, it becomes necessary to control the womb and so Hinduism, Judaism, Islam and Christianity have all stipulated, at one time or another, that the whole area of reproductive activity must be firmly monitored by law and lawmakers’.

The issue of identity crisis for a woman

The identity of a woman is erased as she becomes a mere reproductive machine ruled and dominated by male laws. From the time she is born she is taught that one day she has to get married and go to her husband’s house. She thus belongs neither to her own house nor to her husband’s house, which leaves a mark on her identity. The Vedic period, however, proved to be a boon in the lives of women, as they enjoyed freedom of choice in the matter of husbands and could marry at a mature age. Widows could remarry and women could divorce.

The segregation of women continued to raise the same question of identity: the Chandogya Upanishad, a religious text of the pre-Buddhist era, contains a prayer of spiritual aspirants which says, ‘May I never, ever, enter that reddish, white, toothless, slippery and slimy yoni of the woman’. During this time, control over women included seclusion and exclusion, and they were even denied education. Women and shudras were treated as a minority class in society. Rights and privileges previously given to women were cancelled, and girls were married at a very early age. Caste structure also played a great role, as women were now discriminated against within their own caste on the basis of gender.

According to Liddle, women were controlled in two respects: first, they were disinherited from ancestral property and the economy and were expected to remain within the domestic sphere, known as purdah. The second aspect was the control of men over female sexuality. The death rituals of family members were performed by the sons, and no daughter had the right to light her parents’ funeral pyre.

A stifling patriarchal shadow hangs over the lives of women throughout India. Across all regions, castes and classes of society, women are victims of its oppressive, controlling effects. Those subjected to the heaviest burden of discrimination are from the Dalit or ‘Scheduled Castes’, referred to in less liberal, democratic times as the ‘Untouchables’. The name may have been banned, but pervasive negative attitudes of mind remain, as do the appalling levels of abuse and subjugation experienced by Dalit women. They encounter multiple levels of discrimination and exploitation, much of which is feudal, degrading, horrifyingly violent and utterly callous. The divisive caste system, in operation throughout India, ‘old’ and ‘new’, together with discriminatory gender attitudes, sits at the heart of the colossal human rights abuses experienced by Dalit or ‘outcaste’ women.

The lower castes are segregated from other members of the community: they are prohibited from eating with ‘higher’ castes, from using village wells and ponds, from entering village temples and higher-caste houses, and from wearing shoes or even holding umbrellas in front of higher castes; they are forced to sit alone and use separate crockery in restaurants, banned from riding a bicycle within their village, and made to bury their dead in a separate burial ground. They frequently face eviction from their land by higher ‘dominant’ castes, forcing them to live on the outskirts of villages, often on barren land.

This plethora of prejudice amounts to apartheid, and the time has come, long overdue, for the ‘democratic’ government of India to enforce existing legislation and purge the country of the criminality of caste- and gender-based discrimination and abuse.

The power play of patriarchy saturates every area of Indian society and gives rise to an assortment of discriminatory practices, such as female infanticide, discrimination against girls and dowry-related deaths. It is a major cause of the exploitation and abuse of women, with a great deal of sexual violence being perpetrated by men in positions of power. These range from higher-caste men abusing lower-caste women, particularly Dalits; to policemen abusing women from poor households; and military men abusing Dalit and Adivasi women in insurgency states such as Kashmir, Chhattisgarh, Jharkhand, Orissa and Manipur. Security personnel are protected by the widely condemned Armed Forces Special Powers Act, which grants impunity to police and members of the military carrying out criminal acts of rape and indeed murder; it was proclaimed by the British in 1942 as an emergency measure to suppress the Quit India Movement. It is an unjust law, which needs repealing.

In December 2012 the appalling gang rape and mutilation of a 23-year-old paramedical student in New Delhi, who subsequently died from her wounds, garnered worldwide media attention, putting a brief spotlight on the dangers, oppression and shocking treatment women in India face every day. Rape is endemic in the country. With most cases of rape going unreported and many being dismissed by the police, the true figure could be ten times the official one. The women most at risk of abuse are Dalits: the NCRB estimates that more than four Dalit women are raped every day in India. A UN study reveals that ‘the majority of Dalit women report having faced one or more incidents of verbal abuse (62.4 per cent), physical assault (54.8 per cent), sexual harassment and assault (46.8 per cent), domestic violence (43.0 per cent) and rape (23.2 per cent)’. They are subjected to ‘rape, assault, kidnapping, abduction, homicide, physical and mental torture, immoral trafficking and sexual abuse.’

The UN found that large numbers were deterred from seeking justice: in 17 per cent of incidents of violence (including rape) victims were prevented from reporting the crime by the police; in more than 25 per cent of cases the community stopped women from filing complaints; and in more than 40 per cent women ‘did not attempt to obtain legal or community remedies for the violence, primarily out of fear of the perpetrators or of social dishonour if (sexual) violence was revealed’. In only 1 per cent of recorded cases were the perpetrators convicted. What ‘follows incidents of violence’, the UN found, is ‘a resounding silence’. The effect with regard to Dalit women in particular, though not exclusively, ‘is the creation and maintenance of a culture of violence, silence and impunity’.

Class discrimination faced by women today

The Indian constitution enshrines the "principle of non-discrimination on the basis of caste or gender". It guarantees the "right to life and to security of life". Article 46 specifically "protects Dalits from social injustice and all forms of exploitation". Add to this the important Scheduled Castes and Tribes (Prevention of Atrocities) Act of 1989, and a well-equipped legislative armoury is formed. Nevertheless, because of "low levels of implementation", the UN states, "the provisions that protect women's rights must be regarded as empty of meaning". It is a familiar Indian story: judicial indifference (plus cost, lack of access to legal representation, endless red tape and obstructive staff), police corruption and government complicity, together with media indifference, constitute the principal obstacles to justice and to the observance and enforcement of the law.

Unlike middle-class girls, Dalit rape victims (whose numbers are growing) rarely receive the attention of the caste- and class-conscious, urban-centric media, whose primary concern is to promote a Bollywood-glossy, open-for-business image of the country.

A 20-year-old Dalit woman from the Santali tribal group in West Bengal was gang-raped, reportedly "on the orders of village elders who objected to her relationship (which had been going on in secret for a long time) with a man from a neighbouring village in the Birbhum district". The brutal incident took place when, according to a BBC report, the man visited the woman's home with a proposal of marriage; villagers spotted him and organised a kangaroo court. During the "proceedings" the headman of the woman's village fined the couple 25,000 rupees (400 US dollars; GBP 240) for "the crime of falling in love". The man paid, but the woman's family were unable to pay. Consequently, the "headman" and 12 of his companions repeatedly raped her. Violence, exploitation and exclusion are used to keep Dalit women in a position of subordination and to maintain the patriarchal grip on power throughout Indian society.

The cities are unsafe places for women, but it is in the countryside, where most people live (70 per cent), that the greatest levels of abuse occur. Many in rural areas live in extreme poverty (800 million people in India live on under 2.50 dollars a day), with little or no access to healthcare, poor education and appalling or non-existent sanitation. It is a world apart from democratic Delhi or Westernised Mumbai: water, electricity, democracy and the rule of law have yet to reach the lives of the women in India's villages, which are home, Mahatma Gandhi famously declared, to the soul of the nation.

No surprise, then, that after two decades of economic growth India finds itself languishing 136th (of 186 countries) in the (gender-equality-adjusted) United Nations Human Development Index.

Harsh ideas of gender inequality

Indian society is divided in numerous ways: caste and class, gender, wealth and poverty, and religion. Entrenched patriarchy and gender divisions, which value boys over girls and keep men and women, boys and girls, apart, combine with child marriage to help create a society in which the sexual abuse and exploitation of women, particularly Dalit women, is an accepted part of everyday life.

Sociologically and psychologically conditioned into separation, schoolchildren divide themselves along gender lines; in many areas women sit on one side of buses, men on the other; special women-only carriages have been introduced on the Delhi and Mumbai metros to shield women from sexual harassment, or "eve teasing" as it is colloquially known. Such safety measures, while welcomed by women and women's groups, do not deal with the underlying causes of abuse and may, in a sense, further inflame them.

Rape, sexual violence, assault and harassment are rampant, yet, with the exception perhaps of the Bollywood Mumbai set, sex is a taboo subject. A survey conducted by India Today in 2011 found that 25 per cent of people had no objection to sex before marriage, provided it was not in their own family.

Sociological separation fuels gender divisions, reinforces prejudiced stereotypes and feeds sexual repression, which many women's organisations believe accounts for the high rate of sexual violence. A recent study of Indian men's attitudes towards women, carried out by the International Center for Research on Women, produced some startling statistics: one in four admitted having "used sexual violence (against a partner or against any woman)", and one in five reported using "sexual violence against a stable [female] partner". Half of men do not want to see gender equality, 80 per cent regard changing nappies, feeding and washing children as "women's work", and a mere 16 per cent play any part in household duties. Added to these repressive attitudes of mind, homophobia is the norm, with 92 per cent admitting they would be ashamed to have a gay friend, or even to be in the vicinity of a gay man.

All in all, India is cursed by a catalogue of Victorian gender stereotypes, fuelled by a caste system designed to oppress, which trap both men and women in conditioned cells of separation where destructive ideas of sex are allowed to ferment, resulting in explosions of sexual violence, exploitation and abuse. Studies of caste have begun to engage with issues of rights, resources and recognition/representation, demonstrating the extent to which caste must be recognised as central to the narrative of India's political development. For instance, scholars are becoming increasingly aware of the extent to which radical thinkers such as

Ambedkar, Periyar and Phule demanded the acknowledgement of histories of exploitation, ritual humiliation and political disenfranchisement as constituting the lives of the lower castes, even as such histories also formed the fraught past from which escape was sought.

Scholars have pointed to Mandal as the formative moment in the "new" national politics of caste, especially for having radicalised dalitbahujans in the politically crucial states of the Hindi belt. Hence Mandal may be a convenient, if overdetermined, vantage point from which to analyse the state's contradictory and ineffective investment in the discourse of lower-caste entitlement, throwing open to examination the political practices and ideologies that animate parliamentary democracy in India as a historical formation.

Tharu and Niranjana (1996) have noted the visibility of caste and gender issues in the post-Mandal context and describe it as a contradictory formation. For instance, there were campaigns by upper-caste women to challenge reservations by understanding them as concessions, and the large-scale participation of school-going women in the anti-Mandal agitation in order to claim equal treatment, rather than reservations, in struggles for gender equality. On the other hand, lower-caste male assertion often targeted upper-caste women, creating an unresolved dilemma for upper-caste feminists who had been pro-Mandal. The relationship between caste and gender never seemed more awkward. The demand for reservations for women (and for further reservations for Dalit women and women from the Backward Classes and Other Backward Communities) can also be seen as an outgrowth of a renewed attempt to address caste and gender issues from within the terrain of politics. It may also demonstrate the inadequacy of concentrating exclusively on gender in formulating a quantitative "solution" to the political problem of visibility and representation.

Emerging from the 33 per cent reservations for women in local panchayats, and plainly at odds with the Mandal protests that equated reservations with notions of inferiority, the recent demands for reservations mark a shift away from the historical suspicion of reservations for women. As Mary John has argued, women's vulnerability must be seen in the context of the political displacements that mark the emergence of minorities before the state.

The question of political representation and the formulation of gendered vulnerability are connected issues. As I have argued in my essay included in this volume, such vulnerability is the mark of the gendered subject's singularity. It is that form of injured existence that brings her within the frame of political legibility as different, yet eligible, for general forms of redress. As such, it is central to political discourses of rights and recognition.

Political demands for reservations for women, and for lower-caste women, supplement scholarly efforts to understand the deep cleavages between women of different castes that contemporary events such as Mandal or the Hindutva movement have exposed. In exploring the challenges posed by Mandal to dominant conceptions of secular selfhood, Vivek Dhareshwar pointed to convergences between reading for and recovering the presence of caste as a silenced public discourse in contemporary India, and similar practices by feminists who had explored the unacknowledged weight of gendered identity.

Dhareshwar suggested that theorists of caste and theorists of gender might consider elective affinities in their methods of analysis, and deliberately embrace their stigmatised identities (caste, gender) in order to draw public attention to them as political identities. Dhareshwar argued this would demonstrate the extent to which secularism had been maintained as another form of upper-caste privilege, the luxury of ignoring caste, in contrast to the demands for social justice by dalitbahujans, who were calling for a public acknowledgement of such privilege.

Women and Dalits considered the same

Malik notes in "Untouchability and Dalit Women's Oppression" that "It remains a matter of reflection that those who have been actively involved in organising women experience difficulties that are nowhere addressed in a theoretical literature whose foundational principles are derived from a sprinkling of normative theories of rights, liberal political theory, an ill-informed left politics and, more recently, occasionally, even a well-meaning tradition of 'entitlements.'" Malik in effect asks how we are to understand Dalit women's vulnerability.

Caste relations are embedded in Dalit women's profoundly unequal access to the resources of basic survival, such as water and sanitation facilities, as well as to educational institutions, public places and sites of religious worship. At the same time, the material impoverishment of Dalits and their political disenfranchisement perpetuate the symbolic structures of untouchability, which legitimate upper-caste sexual access to Dalit women. Caste relations are also changing, and new forms of violence in independent India, which target symbols of Dalit emancipation (such as the desecration of statues of Dalit leaders), attempt to thwart Dalits' socio-political advancement by dispossessing them of land or depriving them of their political rights, and are aimed at Dalits' perceived social mobility. These newer forms of violence are frequently supplemented by the sexual harassment and assault of Dalit women, indicating the caste-based and gendered forms of vulnerability that Dalit women experience.

As Gabriele Dietrich notes in her essay "Dalit Movements and Women's Movements", Dalit women have been targets of upper-caste violence. At the same time, Dalit women have also functioned as the "property" of Dalit men. Lower-caste men are likewise engaged in a complex set of fantasies of retribution that involve the sexual violation of upper-caste women in retaliation for their emasculation by caste society. The dangerous positioning of Dalit women as sexual property in both instances overdetermines Dalit women's identity solely in terms of their sexual availability.

Girls: Household Servants

When a boy is born in most developing countries, friends and relatives shout congratulations. A son means security. He will inherit his father's property and get a job to support the family. When a girl is born, the reaction is very different. Some women weep when they discover their baby is a girl because, to them, a daughter is just another expense. Her place is in the home, not in the world of men. In some parts of India, it is customary to greet a family with a newborn girl by saying, "The servant of your household has been born."

A girl cannot help but feel inferior when everything around her tells her that she is worth less than a boy. Her identity is forged as her family and society restrict her opportunities and declare her to be second-rate.

A combination of extreme poverty and deep biases against women creates a relentless cycle of discrimination that keeps girls in developing countries from fulfilling their full potential. It also leaves them vulnerable to severe physical and emotional mistreatment. These "servants of the household" come to accept that life will never be any different.

The Greatest Obstacles Affecting Girls

Discrimination against girls and women in the developing world is a devastating reality. It results in millions of individual tragedies, which add up to lost potential for entire countries. Studies show there is a direct link between a country's attitude toward women and its social and economic progress. The status of women is central to the health of a society: if one part suffers, so does the whole.

Tragically, female children are the most exposed to the trauma of gender discrimination. The following obstacles are stark examples of what girls worldwide face. The good news, however, is that new generations of girls represent the most promising source of change for women, and men, in the developing world today.

Dowry

In developing countries, the birth of a girl causes great upheaval for poor families. When there is barely enough food to survive, any child puts a strain on a family's resources. But the economic drain of a daughter feels far more acute, especially in regions where dowry is practised.

A dowry is the goods and money a bride's family pays to the groom's family. Originally intended to help with marriage expenses, the dowry came to be seen as payment to the groom's family for taking on the burden of another woman. In some countries dowries are extravagant, costing years of wages and often throwing a bride's family into debt. The dowry practice makes the prospect of having a girl even more unwelcome to poor families. It also puts girls in danger: a new bride is at the mercy of her in-laws should they decide her dowry is too small. UNICEF estimates that around 5,000 Indian women are killed in dowry-related incidents every year.

Neglect

The developing world is full of poverty-stricken families who see their daughters as an economic burden. That attitude has resulted in the widespread neglect of baby girls in Africa, Asia, and South America. In many communities it is standard practice to breastfeed girls for a shorter time than boys so that women can try to get pregnant again with a boy as soon as possible. As a result, girls miss out on life-giving nutrition during a crucial window of their development, which stunts their growth and weakens their resistance to disease.

Statistics show that the neglect continues as they grow up. Girls generally receive less food, less healthcare and fewer vaccinations than boys. Not much changes as they become women. Tradition calls for women to eat last, often reduced to picking over the scraps left by the men and boys.

Infanticide and Sex-Selective Abortion

In extreme cases, parents make the terrible decision to end their baby girl's life. One woman named Lakshmi from Tamil Nadu, an impoverished region of India, fed her baby sap from an oleander bush mixed with castor oil until the girl bled from the nose and died. "A daughter is always a liability. How can I raise a second?" said Lakshmi, explaining why she ended her child's life. "Instead of her suffering the way I do, I thought it was better to get rid of her."

Sex-selective abortions are far more common than infanticides in India. They are becoming ever more frequent as technology makes it simple and cheap to determine a foetus's sex. In Jaipur, a western Indian city of 2 million people, 3,500 sex-determined abortions are carried out each year. The sex ratio across India has dropped to an unnatural low of 927 females to 1,000 males as a result of infanticide and sex-selective abortion.

China has its own long legacy of female infanticide. In the last two decades, the government's notorious one-child policy has damaged the country's record even further. By restricting household size to limit the population, the policy gives parents only one chance to produce a coveted son before being forced to pay heavy fines for additional children. In 1997, the World Health Organization declared that "more than 50 million women were estimated to be missing in China because of the institutionalized killing and neglect of girls due to Beijing's population control program." The Chinese government says that sex-selective abortion is one major explanation for the staggering number of Chinese girls who have simply vanished from the population in the last 20 years.

Abuse

Even after infancy, the threat of physical harm follows girls throughout their lives. Women in every society are vulnerable to abuse, but the threat is more severe for girls and women who live in societies where women's rights mean practically nothing. Mothers who lack rights of their own have little protection to offer their daughters, much less themselves, from male relatives and other authority figures. The frequency of rape and violent attacks against women in the developing world is alarming. Forty-five percent of Ethiopian women say that they have been assaulted in their lifetimes. In 1998, 48 percent of Palestinian women admitted to being abused by an intimate partner within the previous year.

In some societies, the physical and mental injury of rape is compounded by an additional stigma. In cultures that maintain strict sexual codes for women, if a woman steps out of line, by choosing her own husband, flirting in public, or seeking divorce from an abusive partner, she has brought dishonour to her family and must be disciplined. Often, discipline means execution. Families commit "honour killings" to salvage a reputation tainted by disobedient women.

Shockingly, this "disobedience" includes being raped. In 1999, a 16-year-old mentally disabled girl in Pakistan who had been raped was brought before her tribe's judicial council. Even though she was the victim and her attacker had been arrested, the council decided she had brought shame to the tribe and ordered her public execution. This case, which received a great deal of publicity at the time, is not unusual. Three women fall victim to honour killings in Pakistan every day, including victims of rape. In areas of Asia, the Middle East, and even Europe, all responsibility for sexual transgression falls, by default, to women.

Work

For the girls who escape these pitfalls and grow up relatively safely, daily life is still incredibly hard. School may be an option for a few years, but most girls are pulled out at age 9 or 10, when they are useful enough to work all day at home. Nine million more girls than boys miss out on school every year, according to UNICEF. While their brothers continue to attend classes or pursue their hobbies and play, the girls join the women in doing the bulk of the housework.

Housework in developing countries consists of continuous, arduous physical labour. A girl is likely to work from before sunrise until the light drains away. She walks barefoot over long distances several times a day carrying heavy buckets of water, most likely contaminated, just to keep her family alive. She cleans, grinds corn, gathers fuel, tends the fields, bathes her younger siblings, and prepares meals until she sits down to her own after all the men in the family have eaten. Most families cannot afford modern appliances, so her tasks must be done by hand: crushing corn into meal with heavy rocks, scrubbing laundry against rough stones, kneading bread and cooking gruel over a scorching open fire. There is no time left in the day to learn to read and write or to play with friends. She collapses exhausted each night, only to get up the next morning to begin another long workday.

All of this work is performed without recognition or reward. UN statistics show that although women produce half of the world's food, they own just 1 percent of its farmland. In most African and Asian countries, women's work is not regarded as real work. Should a woman take a job, she is expected to keep up all of her duties at home in addition to her new ones, with no extra help. Women's work goes unacknowledged, even though it is crucial to the survival of every family.

Sex Trafficking

Some families decide it is more lucrative to send their daughters to a nearby town or city to take jobs that usually involve hard labour and little pay. That desperate need for money leaves girls easy prey to sex traffickers, especially in Southeast Asia, where international tourism feeds the illicit trade. In Thailand, the sex trade has swelled unchecked into a major part of the national economy. Families in small villages along the Chinese border are regularly approached by recruiters, known as "aunties", who ask for their daughters in exchange for a year's wages. Most Thai farmers earn just $150 a year. The offer can be too tempting to refuse.


Would it be moral to legalise Euthanasia in the UK?

The word 'morality' is used in both descriptive and normative senses. More particularly, the term "morality" can be used either (Stanford Encyclopaedia of Philosophy, https://plato.stanford.edu/entries/morality-definition):

1. descriptively: referring to codes of conduct advocated by a society or a sub-group (e.g. a religion or social group), or adopted by an individual to justify their own beliefs,

or

2. normatively: describing codes of conduct that, in specified conditions, should be accepted by all rational members of the group being considered.

Examination of ethical theories applied to Euthanasia

Thomas Aquinas' natural law holds that morally good actions, and the goodness of those actions, are assessed against eternal law as a reference point. Eternal law, in his view, is a higher authority, and the process of reasoning defines the difference between right and wrong. Natural law thinking is not concerned only with narrow aspects of a situation but considers the whole person and their infinite future; Aquinas would have linked this to God's predetermined plan for that individual and to heaven. The morality of Catholic belief is heavily influenced by natural law. The primary precepts should be considered when examining issues involving euthanasia, particularly the key precepts to do good and oppose evil and to preserve life, upholding the sanctity of life. Divine law set out in the Bible states that we are created in God's image and held together by God from our time in the womb. The Catholic Church's teaching maintains that euthanasia is wrong (Pastoral Constitution, Gaudium et Spes no. 27, 1965), as life is sacred and God-given (Declaration on Euthanasia, 1980). This view can be seen to be just as strongly held and applied today in the very recent case of Alfie Evans, where papal intervention was significant and public. Terminating life through euthanasia goes against divine law. Ending a life, and with it the possibility of that life bringing love into the world or of love coming into the world in response to the person euthanised, is wrong. To take a life by euthanasia, according to Catholic belief, is to reject God's plan for that individual to live out their life. Suicide, or intentionally ending life, is a wrong equal to murder and as such is to be considered a rejection of God's loving plan (Declaration on Euthanasia, 1.3, 1980).

The Catholic Church interprets natural law to mean that euthanasia is wrong and that those involved in it are committing a wrongful and sinful act. Whilst the objectives of euthanasia may appear good, in that they seek to ease suffering and pain, they in fact fail to recognise the greater good of the sanctity of life within God's greater plan, which includes people other than the person suffering, and eternal life in heaven.

The conclusions of natural law consider the position of life in general, not just the ending of a single life. For example, if euthanasia were lawful, older people could become fearful of admission to hospital in case they were drawn into euthanasia. It could also lead to people being attracted to euthanasia at times when they were depressed. This can be seen as an attack on the principles of living well together in society, since good people could be hurt. It also lends itself to slippery slope and floodgates arguments about hypothetical situations. Euthanasia therefore clearly undermines some primary precepts.

Catholicism accepts that disproportionately onerous treatment is not appropriate towards the end of a person's life, and that there is no moral obligation to strenuously keep a person alive at all costs. An example would be a terminally ill cancer patient deciding not to accept further chemotherapy or radiotherapy, which could extend their life but at great cost to the quality of the life that remains. Natural law does not seem to prevent them from making these kinds of choices.

There is also the doctrine of double effect: for example, palliative care aimed at relieving pain and distress may have the secondary effect of ending life earlier than if more active treatment options had been pursued. The motivation is not to kill, but to ease pain and distress. An example is an individual doctor's decision to increase an opiate dosage to the point where respiratory arrest becomes almost inevitable, while at all times the intended motivation is the easing of pain and distress. This has on various occasions been upheld as legally and morally acceptable by the courts and by medical watchdogs such as the GMC (General Medical Council).

The Catechism of the Catholic Church accepts this and views such decisions as best made by the patient, if competent and able, and if not, by those legally and professionally entitled to act for the individual concerned.

There are other circumstances in which the person involved might not be the kind of person assumed by natural law, for example someone with severe brain damage who is in a persistent coma or "brain-dead". In these situations they may not possess the defining characteristics of a person, which could form a justification for euthanasia. The doctors or relatives caring for such a patient may suffer conflicts of conscience at being unable to show compassion, thereby prolonging the suffering not only of the patient but of those around them.

In his book Morals and Medicine, published in 1954, Fletcher, the president of the Euthanasia Society of America, argued that there are no absolute standards of morality in medical treatment and that good ethics demand consideration of the patient's condition and the situation surrounding it.

Fletcher's Situation Ethics avoids legalistic consideration of moral decisions. It is anchored only in actual situations, and specifically in unconditional love for the care of others. When euthanasia is considered with this approach, the answer will always "depend upon the situation".

From the viewpoint of an absolutist, morality is innate from birth. It can be argued that natural law does not change as a result of personal opinions; it remains constant. Natural law offers a positive view of morality in that it allows people from a range of backgrounds, classes and situations to have durable moral laws to follow.

Religious believers also follow the principles of Natural Law, since its underlying theology holds that morality remains the same and never changes with an individual's personal opinions or decisions. Christianity has strong support among its believers for the existence of a natural moral law. Christian understanding of this concept derives largely from Thomas Aquinas and his teaching on the close connection between faith and reason as arguments for a natural law of morality.

Natural Law has compelling arguments in its favour, one of which is its all-inclusiveness and fixed character, in contrast to relativist approaches to morality. Natural law is objective and consequently abiding and eternal. It is considered innate, arising from a mixture of faith and reason that together form an intelligent and rational being who is faithful in belief in God. Natural law is part of human nature, present from the beginning of our lives when we gain our sense of right and wrong.

However, natural law also has many disadvantages with regard to resolving moral problems. Its precepts are not always self-evident. We cannot confirm whether there is only one universal purpose for humanity, and even if humanity did have a purpose for its existence, that purpose cannot be regarded as self-evident. Perceptions of natural beings and things change over generations, with the forms of different eras fitting their own cultures. It can therefore be argued that absolute morality is altered by cultural beliefs about right and wrong: things later perceived as wrong suggest that defining what is natural is almost impossible, since moral judgements are ever-changing. The idea that actuality is better than potentiality also does not transfer easily to practical ethics; the future holds many potential outcomes, but some of those outcomes are "wrong". (Hodder Education, 2016)

The claim that natural law is the best way to resolve moral problems is a strong one, but its strictness leaves some confusion about what is right and wrong in certain situations; those judgements are instead formed by society, which does not always follow the natural moral law. Darwin's theory of evolution, put forward in On the Origin of Species in 1859, challenged natural law by proposing that living things strive for survival (survival of the fittest), supporting evolution by natural selection. It can be argued that solving moral problems by natural law may be possible, but it is not necessarily the best solution.

For many years, euthanasia has been a controversial debate across the globe, with people taking opposing sides and arguing in support of their positions. In essence, it is the act of allowing an individual to die in a painless manner, for example by withholding treatment. It is commonly classified into different forms: voluntary, involuntary and non-voluntary. The legal system has been actively involved in this debate. A major concern is that legalising any form of euthanasia may set off the slippery slope principle, which holds that permitting something comparatively harmless today may begin a trend that results in unacceptable practices. Although one popular position argues that voluntary euthanasia is morally acceptable while non-voluntary euthanasia is always wrong, the courts have been split in their decisions in various instances. (Oxford for OCR Religious Studies, 2016)

Voluntary euthanasia is the killing of an individual with their consent, carried out in various ways. The arguments that voluntary euthanasia is morally acceptable are drawn from the expressed wishes of the patient: as long as respecting an individual's decision does not harm other people, it is held to be morally correct. Since individuals have the right to make personal choices about their lives, their decisions about how they should die should also be respected. Most importantly, at times it remains the only means of ensuring the well-being of the patient, especially if they are suffering incessant and severe pain. Despite these claims, several cases have come before the courts, which have continued to refuse to uphold the morality of euthanasia irrespective of a person's consent. One of these is the case of Diane Pretty, who suffered from motor neurone disease. Afraid of dying by choking or aspiration, a common end-of-life event experienced by many motor neurone disease sufferers, she sought legal assurance that her husband would be free from the threat of prosecution if he assisted her to end her life. Her case went through the Court of Appeal, the House of Lords (the Supreme Court in today's system) and the European Court of Human Rights. However, due to concerns raised under the slippery slope principle, the judges denied her request and she lost the case.

There have been many legal and legislative battles attempting to change the law to support voluntary euthanasia in varying circumstances. Between 2002 and 2006 Lord Joel Joffe (a patron of the Dignity in Dying organisation) fought to change the law in the UK to support assisted dying. His first Assisted Dying (Patient) Bill reached a second reading (June 2003) but ran out of time before it could progress to the committee stage. Joffe persisted, and in 2004 renewed his effort with the Assisted Dying for the Terminally Ill Bill, which progressed further than the earlier bill and reached the committee stage in 2006. The committee stated: "In the event that another bill of this nature should be introduced into Parliament, it should, following a formal Second Reading, be sent to a committee of the whole House for examination". Unfortunately, in May 2006 an amendment at the second reading led to the collapse of the bill. This was a surprise to Joffe, since the majority of the select committee had been on board with it. In addition, calls for a statute supporting voluntary euthanasia have increased, as evidenced by the significant numbers of people in recent years travelling to Switzerland, where physician-assisted suicide is legal under permitted circumstances. Lord Joffe expressed these thoughts in an article written for the Dignity in Dying campaign in 2014, a few years before his death in 2017, in support of Lord Falconer's Assisted Dying Bill, which proposed to permit "terminally ill, mentally competent adults to have an assisted death after being approved by doctors" (Falconer's Assisted Dying Bill, Dignity in Dying, 2014). The journey of this bill was followed by the documentary referenced below.

The BBC documentary 'How to Die: Simon's Choice' followed the decline of Simon Binner from motor neurone disease and his subsequent fight for an assisted death. The documentary followed his journey to Switzerland for a legal assisted death and recorded the reactions of his family. During filming, a bill was being debated in Parliament proposing to legalise assisted dying in the United Kingdom. The bill (Lord Falconer's Assisted Dying Bill) would allow a person to request a lethal injection if they had less than six months left to live; this raised a myriad of issues, including how to define precisely whether someone has more or less than six months left to live. The Archbishop of Canterbury, Justin Welby, urged MPs to reject the bill, stating that Britain would be crossing a 'legal and ethical Rubicon' if Parliament were to vote to allow the terminally ill to be actively assisted to die at home in the UK under medical supervision. The leaders of the British Jewish, Muslim, Sikh and Christian religious communities wrote a joint open letter to all members of the British Parliament urging them to oppose the bill to legalise assisted dying. (The Guardian, 2015). Having announced his impending death on LinkedIn, Simon Binner died at an assisted dying clinic in Switzerland. The passing of this bill may have been the only way of helping Simon Binner in his own country, but assisted dying remained unlawful. (Deacon, 2016)

The private member's bill, originally proposed by Rob Marris (a Labour MP from Wolverhampton), ended in defeat, with 330 MPs against and 118 in favour. (The Financial Times, 2015)

The Suicide Act 1961 (Legislation, 1961) decriminalised suicide, but it did not make it morally licit. It provides that a person who aids, abets, counsels or procures the suicide of another, or an attempt by another to commit suicide, is liable to a prison term of up to 14 years. It also provides that where a defendant is on trial on indictment for murder or manslaughter, and it is proved that the accused aided, abetted, counselled or procured the suicide of the person in question, the jury may find them guilty of that offence as an alternative verdict.

Many took the view that the law supports the principle of autonomy, but the Act was used to reinforce the sanctity of life principle by criminalising any form of assisted suicide. Although the Act does not hold the position that all life is equally valuable, there have been cases where allowing a person to die would have been the better outcome.

In the case of non-voluntary euthanasia, patients are incapable of giving their consent for death to be induced. It mostly occurs when a patient is very young, severely mentally impaired, has extreme brain damage, or is in a coma. Opponents argue that human life should be respected, and that this case is even worse because the person's wishes cannot be taken into account when decisions are made to end their life. As a result, it is morally wrong irrespective of the conditions they face; all parties involved should instead wait for a natural death while giving the patient the best palliative care possible. The case of Terri Schiavo, who suffered severe brain damage following complications of bulimia, falls under this argument. The court ruling allowing her husband's request to have her life support withdrawn triggered heated debate, with some arguing that it was wrong while others saw it as a relief, since she had spent many years unresponsive.

I completed primary research to support my findings on whether it would be moral to legalise euthanasia in the UK. With regard to understanding the correct definition of euthanasia, nine out of ten people who took part in the questionnaire selected the correct definition of physician-assisted suicide: "The voluntary termination of one's life by administration of a lethal substance with the direct or indirect assistance of a physician" (Medicanet, 2017). The one person who selected the wrong definition believed it to be "The involuntary termination of one's own life by administration of a lethal substance with the direct or indirect assistance of a physician". The third definition on the questionnaire stated that physician-assisted suicide was "The voluntary termination of one's own life by committing suicide without the help of others"; this was the obviously incorrect answer and no participant selected it.

The views of the young are also worth considering. In my primary research, completed by a selected youth audience, seventy percent agreed that people should have the right to choose when they die. However, only twenty percent of this audience agreed that they would assist a friend or family member in dying. This drop in support may be explained by fear of prosecution and of the possible fourteen-year prison sentence for assisting in a person's death.

The effect of the Debbie Purdy case (2009) was that guidelines were established by the Director of Public Prosecutions in England and Wales (assisted dying is not illegal in Scotland, but there is no legal way to access it medically). According to the Director of Public Prosecutions, these guidelines were established to "clarify what his position is as to the factors that he regards as relevant for and against prosecution" (DID Prosecution Policy, 2010). The policy outlines factors that make prosecution of an assistor "more likely": a history of violent behaviour, not knowing the person, receiving financial gain from the act, or acting in the capacity of a medical professional. Despite these factors, the policy states that police and prosecutors should examine any financial gain with a "common sense" approach, since many people benefit financially from the loss of a loved one; the fact that the assistor was, for example, a close relative relieving a person's pain should weigh more heavily when prosecution is considered.

The argument that voluntary euthanasia is morally right while involuntary euthanasia is wrong remains one of the most controversial issues in modern society. It is all the more significant because the legal system remains split in its rulings in cases such as those cited. Given the slippery slope argument, care should be taken when determining what is morally right and wrong because of the sanctity of human life. Many consider that the law has led to considerable confusion and that one way of improving the present situation is to create a new Act permitting physician-assisted dying, with the proposal stating that there should be a bill to "enable a competent adult who is suffering unbearably as a result of a terminal illness to receive medical assistance to die at his own considered/persistent request… to make provision for a person suffering from a terminal illness to receive pain relief medication" (Assisted Dying for the Terminally Ill Bill, 2004).

There is a major moral objection to voluntary euthanasia based on the "slippery slope" argument: the fear that what begins as a legitimate reason to assist in a person's death will come to permit death in other, illegitimate circumstances.

In a letter to The Times (24/8/04), John Haldane and Alasdair MacIntyre, along with other academics, lawyers and philosophers, warned that supporters of the Bill might shift the qualifying condition from actual unbearable suffering caused by terminal illness to merely the fear, discomfort and loss of dignity which terminal illness might bring. In addition, there is the issue that if quality of life is grounds for euthanasia for those who request it, it must arguably also be open to those who do not or cannot request it, again presenting a slippery slope. The letter also referenced euthanasia in the Netherlands, where it is legal, to suggest that many people have died against their wishes owing to failures of safeguarding. (Hodder Education, 2016)

The slippery slope argument does not help those in particular individual situations, and it must surely be wrong to shy away from making difficult decisions on the grounds that an individual should sustain prolonged suffering in order to protect society from the possible over-use of any legislation. In practice, over the past half century some form of euthanasia has been going on in the UK when doctors give obvious over-dosages of opiates in terminal cases, but they have been shielded from the legal consequences by the almost fictional notion that, as long as the motivation was to ease and control pain, the action was lawful despite the inevitable consequence of respiratory arrest (respiratory suppression is a side effect of morphine-type drugs).

The discredited and now defunct Liverpool Care Pathway for the Dying Patient (LCP) was an administrative tool intended to help UK healthcare professionals manage the care pathway and decide palliative care options for patients at the very end of life. As with many such tick-box exercises, individual discretion was restricted in an attempt to standardise practice nationally (Wales was excluded from the LCP). The biggest problem with the LCP (which attracted much adverse media attention and public concern in 2012) was that most patients or their families were not consulted when patients were placed on the pathway. It had options for withdrawing active treatment whilst actively managing distressing symptoms. However, removing intravenous hydration and feeding by regarding them as active treatment would inevitably lead to death in a relatively short period, making the decision to place a patient on the LCP because they were at the end of life a self-fulfilling prophecy. (Liverpool Care Pathway)

That the cost of providing "just in case" boxes, at approximately £25, is weighed in the last part of this lengthy document as part of deciding what to advise professionals may seem chilling to some. However, there is a moral dimension to the financial implications of unnecessarily prolonging human life. Should the greater good be considered when deciding whether to actively permit formal pathways to euthanasia or to take steps to prohibit it (the crimes of murder or assisting suicide)? In the recent highly publicised case of Alfie Evans, enormous financial resources were used to keep a child with a terminal degenerative neurological disease alive in a paediatric intensive care unit at Alder Hey hospital in Liverpool for around a year. In deciding to do this, those resources were inevitably unavailable to treat others who might have gone on to survive and live a life. Huge sums of money were spent on both medical resources and lawyers. The case became a media circus, resulting in ugly threats against medical staff at the hospital concerned. There was international intervention in the case by the Vatican and by Italy (which granted Italian nationality to the child). Whilst the emotional turmoil of the parents was tragic and the case very sad, was it moral that their own beliefs and lack of understanding of the medical issues involved should lead to such a diversion of resources and such terrible effects on those caring for the boy?

(NICE (National Institute of Clinical Excellence) guidelines, 2015)

The General Medical Council (GMC) governs the licensing and professional conduct of doctors in the UK. It has produced guidance for doctors on the medical role at the end of life, Treatment and care towards the end of life: good practice in decision making. This gives comprehensive advice on some of the fundamental issues in end-of-life treatment and covers matters such as living wills (where requests for withdrawal of treatment can be set out in writing and in advance). These are professionally binding, but as ever there are caveats regarding withdrawal of life-prolonging treatment.

It also sets out presumptions of a duty to prolong life and of a patient's capacity to make decisions, along established legal and ethical lines. In particular it states that "decisions concerning life prolonging treatments must not be motivated by a desire to bring about a patient's death" (Good Medical Practice, GMC Guidance to Doctors, 2014).

Formerly the Hippocratic Oath was sworn by all doctors and set out a sound basis for moral decision-making and professional conduct. In modern translation from the original ancient Greek it states, with regard to medical treatment, that a doctor should never treat "….. with a view to injury and wrong-doing. Neither will [a doctor] administer a poison to anybody when asked to do so, nor will [a doctor] suggest such a course." Doctors in the UK do not swear the oath today, but most of its principles are internationally accepted, except perhaps in the controversial areas surrounding abortion and end-of-life care.

(Hippocratic Oath, Medicanet)

In conclusion, having considered the moral arguments on both sides of the debate, I find that the two ethical frameworks discussed (Natural Law and Situation Ethics) give two opposing answers to the question.

From the viewpoint of a deontologist guided by natural law, duty and obligation, arguably grounded in religion, would lead a society to decide that it would be wrong to legalise euthanasia. However, a situational ethicist, whose judgement changes with the individual situation, could support the case for legalising voluntary euthanasia in the UK under guidelines that account for differing situations.

After completing my primary and secondary research, and considering the many unsuccessful bills put before Parliament to legalise euthanasia and many case studies, including the moving account of Simon Binner's fight to die, my own view rests on the side of the situational ethicist: depending on the individual situation, people should have the right to die in their own country through the legalisation of voluntary euthanasia, rather than being forced to travel abroad to access a legal form of voluntary euthanasia and risk their loved ones being prosecuted on their return to the UK for assisting them.

At the end of the day, much of the management of the end of life of patients is not determined by the stipulations laid out by committees in lengthy documents, but by the individual treatment decisions made by individual doctors and nurses who are almost always acting in the best interests of patients and their families. The methodology of accelerating the inevitable event by medication or withdrawal of treatment is almost impossible to standardise across a hospital or local community care setup, let alone a country. It may be better to continue the practice of centuries and let the morality and conscience of the treating professions determine what happens, keeping the formal moral, religious and legal factors involved in such areas in the shadows.


Has the cost of R & D impacted vaccine development for Covid-19?

Introduction

This report investigates the question: "To what extent have the cost requirements of R&D, the structure of the industry and government subsidy affected firms in the pharmaceutical industry in developing vaccines for Covid-19?" The past two years have been very unpredictable for the pharmaceutical industry owing to the outbreak of the COVID-19 pandemic. Although the pharmaceutical industry has made major contributions to human wellbeing, reducing suffering and ill health for over a century, it remains one of the least trusted industries in public opinion, often compared to the nuclear industry in terms of trustworthiness. Despite it being one of the riskiest industries to invest in, governments have subsidised the production of COVID-19 vaccines with billions of dollars. Regardless of the risks associated with pharmaceuticals, a large part of the public still thinks they should continue to be produced and developed in order to provide treatment to those with existing health issues (Taylor, 2015). These aspects, along with how the cost requirements of R&D, the structure of the industry and government subsidy have affected firms in the pharmaceutical industry in developing the COVID-19 vaccines, will be discussed further in this report.

The Costs of R&D

In 2019, $83 billion was spent on R&D, roughly ten times what the industry spent on R&D in the 1980s. Most of this amount was dedicated to discovering new drugs and to clinical testing of their safety. In 2019 drug companies dedicated a quarter of their annual revenue to R&D, almost double the share in the early 2000s.

(Pharmaceutical R&D Expenditure Shows Significant Growth, 2019)

The amount drug companies spend on R&D for a new drug is usually based on the financial return they expect to make, on any policies influencing the supply of and demand for drugs, and on the cost of developing those drugs.

Most drugs approved recently have been specialty drugs. These typically treat complex, chronic or rare conditions and can require patient monitoring. However, specialty drugs are very expensive to develop, costly for the patient and hard to replicate (Research and Development in the Pharmaceutical Industry, 2021).

Government subsidies for the COVID-19 vaccines

There are two main ways in which a federal government can directly support vaccine development: it can promise in advance to purchase a successful vaccine once the firm has achieved its specified goal, or it can cover costs associated with the vaccine's R&D.

(Which Companies Received The Most Covid-19 Vaccine R&D Funding?, 2021)
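To make the incentive logic behind these two mechanisms concrete, here is a minimal numeric sketch in Python. Every figure in it (the R&D cost, the probability of success and the revenue amounts) is a hypothetical assumption chosen purely for illustration, not data from this report; it simply shows how either form of support, covering R&D costs or guaranteeing an advance purchase, can turn an otherwise unprofitable project into a worthwhile one.

# Illustrative sketch only: hypothetical figures (in $ millions) showing how the two
# subsidy mechanisms described above could change a firm's expected return on vaccine R&D.
def expected_profit(rd_cost, success_prob, revenue_if_sold,
                    rd_cost_covered=0.0, guaranteed_purchase=0.0):
    """Expected profit of a vaccine R&D project."""
    # Revenue is earned only if development succeeds; an advance purchase
    # commitment guarantees at least `guaranteed_purchase` on success.
    expected_revenue = success_prob * max(revenue_if_sold, guaranteed_purchase)
    net_rd_cost = rd_cost - rd_cost_covered  # government covers part of the R&D cost
    return expected_revenue - net_rd_cost

# Hypothetical project: $1,000m R&D cost, 20% chance of success, $3,000m revenue if sold.
print(expected_profit(1000, 0.2, 3000))                            # no support: -400
print(expected_profit(1000, 0.2, 3000, rd_cost_covered=500))       # R&D costs covered: 100
print(expected_profit(1000, 0.2, 3000, guaranteed_purchase=6000))  # advance purchase: 200

Under these assumed numbers the unsupported project has a negative expected return, whereas covering part of the R&D cost or committing to a sufficiently large advance purchase makes it positive, which is the reasoning behind the two forms of support described above.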

In May 2020 the Department of Health and Human Services launched ‘Operation Warp Speed’, a collaborative project in which the FDA, the Department of Defense, the National Institutes of Health and the Centers for Disease Control and Prevention worked together to fund COVID-19 vaccine development. Through ‘Operation Warp Speed’, the federal government provided more than $19 billion to help seven private pharmaceutical manufacturers research and develop COVID-19 vaccines. Five of those seven went on to accept further funding to boost their production capabilities, and a sixth later accepted funding to help boost the production of another company’s vaccine once it received authorization for emergency use. Six of the seven also made advance purchase deals, and two of these companies received additional funding, having sold more doses than expected under the advance purchase agreements, so that they could produce even more vaccines to distribute. Because numerous stages of development that would normally be carried out consecutively were run in parallel, manufacturers were able to reach their end goal and produce vaccines far faster than is normal for vaccines. This was done because of the urgency of finding a solution to the COVID-19 pandemic, which was starting to cause public uproar and panic across nations. Within a year of the first COVID-19 diagnosis in the US, two vaccines had already reached Phase III clinical trials; this is immensely quick, as it would usually take several years of research for a vaccine to reach Phase III. The World Health Organisation reported that there were already over 200 COVID-19 vaccine candidates in development as of February 2021 (Research and Development in the Pharmaceutical Industry, 2021).

(Research and Development in the Pharmaceutical Industry, 2021)

The image above shows which vaccines were at which stage of development over time. It reflects the urgency with which these vaccines were developed and produced to fight the outbreak of the coronavirus. Without these government subsidies, firms would have been nowhere near completing the research and development needed to produce numerous COVID-19 vaccines. This shows the importance of government subsidies to the pharmaceutical industry and to the development of new drugs and vaccines.

Impact of the structure of the pharmaceutical industry on vaccine development

Many different names in the pharmaceutical industry took part in the development of the COVID-19 vaccines. As far as the majority of society is concerned, however, the pharmaceutical industry is just a small group of large multinational corporations such as GlaxoSmithKline, Novartis, AstraZeneca, Pfizer and Roche. These are frowned upon by the public, stereotyped as ‘Big Pharma’, and that stereotype can be misleading. Many people have doubts about these big multinational corporations, especially when they have such an influence on their health and on the drugs they take. It is hard for the public to rely on and trust these companies because, at the end of the day, it is their health they are trusting them with. It is therefore understandable that many people had, and still have, suspicions about the COVID-19 vaccines developed by a handful of these companies. If you were to ask someone whether they have ever heard of companies like Mylan or Teva, they would probably have no clue, even though Teva is the world’s 11th biggest pharmaceutical company and probably produces the medicines that these people take on a regular basis. Because over 90% of pharmaceutical companies are almost invisible to the general public, when people do learn who manufactured a medicine they are considering taking, for example the Pfizer vaccine, they are likely to be careful and suspicious, having possibly never heard of the company before, despite the fact that these companies are responsible for producing the majority of the medicines that everyone takes.

Most new drugs never even make it onto the market, as they are found not to work or to have serious side effects, making them unethical to use on patients. The small percentage of drugs that do make it onto the market are patented, meaning that the original manufacturer holds only temporary exclusive rights to sell the product. Once the patent has expired, the pharmaceutical is free to be sold and manufactured by anyone, meaning it becomes a generic pharmaceutical (Taylor, 2015).

This does not help research pharmaceutical companies either, as their developments, once out of patent, are simply sold by the generic pharmaceutical companies from which everyone buys their medicines. Generic pharmaceutical companies therefore almost never have a failed product, while the research companies struggle to get a successful product onto the market at all. It also means the public often does not know that the majority of drugs they buy originate from these research companies rather than from the generic pharmaceutical company they buy them from.

As seen with the COVID-19 vaccines, this caused a lot of uncertainty and distress among the public, as many people had never even heard of companies like ‘Pfizer’ or ‘AstraZeneca’. This in turn made it more difficult for pharmaceutical companies to manufacture and sell their vaccines successfully, prolonging the whole vaccination process.

The structure of the pharmaceutical industry has therefore greatly affected firms’ ability to manufacture and sell vaccines against COVID-19 successfully and reliably.

Conclusion

Looking at the three factors combined: the cost requirements of R&D, the structure of the industry and government subsidy, it is clear that all have had a great impact on the development of the COVID-19 vaccines. The costs associated with R&D essentially determined how successful the vaccines would be and whether firms would have enough to do the necessary research and then to produce and sell them. Without funding to cover the large costs that go into developing vaccines and other drugs, the COVID-19 vaccines would never have been manufactured and sold. This would have left the world in even more panic and uproar than it was, and could easily have had a ripple effect on economies, social factors and potentially even environmental factors.

One of the biggest impacts on the successful manufacturing and sale of the vaccines came from the structure of the industry. With big research pharmaceutical companies putting in all the work and effort to develop these COVID-19 vaccines, but with most of the general public never having heard of them before, it was very hard for those companies to come across as reliable. People did not trust the vaccines because they had never heard of the companies that developed them, such as Pfizer. This caused debate and protest against the vaccines, making it harder for companies to produce and sell them to the public who needed and demanded them. This stems from a major flaw in the pharmaceutical industry: companies such as Pfizer and AstraZeneca remain largely unknown to the public because their products are sold on by the generic pharmaceutical companies people actually buy from. It also reflects the fact that research pharmaceutical companies specialise in advanced drugs rather than generic drugs, which are more likely to be successful because they are easier to develop. The resulting lack of successful products reflects negatively on these companies, and even the product they do successfully produce can be frowned upon because of their previously non-viable products.

Finally, probably the second or joint most important factor is government subsidies. It is quite clear that without the right government funding and without ‘Operation Warp Speed’ we would still be in the process of trying to develop even the first COVID-19 vaccine, as there would have been nowhere near enough funding for the R&D of the vaccines. This would have caused the death rate from coronavirus infections to spike and would probably have brought the economy to a complete standstill, putting a large number of people out of work. All of this has numerous ripple effects, as the loss of work alone could raise the poverty rate immensely, leaving economies broken. Overall, these three factors have had a huge impact on firms in the pharmaceutical industry in developing the COVID-19 vaccines.

2022-1-5-1641412725

Gender in Design

Gender has always had a dominant place in design. Kirkham and Attfield, in their 1996 book The Gendered Object, set out their view that certain genders seem to be unconsciously attached to some objects as the norm. How gender is viewed in modern-day design is radically different from twenty-plus years ago, in that this normalisation is now recognised. Seeing international companies recognise this change and adapt their brands to this modern approach influences designers like myself to keep up to date and affects my own work.

When designing, there is a gender system that some people tend to follow very strictly; the system is a guide built on values that reveal how gender is formed in society. Within the gender system there are binary oppositions expressed in colour, size, feeling and shape, for example pink/blue, small/large, smooth/rough and organic/geometric. Without even being put into context, these words give off connotations of male or female. Gender is traditionally defined as male or female, but modern-day brands are challenging and pushing these established boundaries; they do not think the categories should be as restrictive or prescriptive as they have been in the past. Kirkham and Attfield challenge this by comparing perceptions in the early twentieth century, illustrating that societal norms were the opposite of what gender norms now lead us to believe. A good example is that the crude binary opposition implicit in ‘pink for a little girl and blue for a boy’ was only established in the 1930s; babies and parents managed perfectly well without such colour coding before then. Today, through marketing and product targeting, these ‘definitions’ are even more widely used in the design and marketing of children’s clothes and objects than they were a few years ago. Importantly, such binary oppositions also influence those who purchase objects and, in this case, facilitate the pleasure many adults take in seeing small humans visibly marked as gendered beings. This is now being further challenged by demands for non-binary identification.

This initial point made by Kirkham and Attfield in 1996 is still valid. Even though designers and brands are in essence guilty of forms of discrimination by falling in line with established gender norms, they do it because it is what their consumers want and how they see business developing and profit being created, because these stereotypical ‘norms’ are seen as normal, acceptable and sub-consciously recognisable. “Thus we sometimes fail to appreciate the effects that particular notions of femininity and masculinity have on the conception, design, advertising, purchase, giving and uses of objects, as well as on their critical and popular reception”. (Kirkham and Attfield. 1996. The Gendered Object, p. 1).

With the help of product language, gendered toys and clothes appear from an early age: products are sorted as being ‘for girls’ and ‘for boys’ in the store, as identified by Ehrnberger, Rasanen and Ilstedt in their 2012 article ‘Visualising Gender Norms in Design’ in the International Journal of Design. Product language is mostly used in the branding aspect of design, in how a product or object is portrayed; it is not only what the written language says. Product language relates to how the object is showcased and portrayed through colours, shapes and patterns. A modern example of this is the branding of the Yorkie chocolate bar, whose slogan was publicly known for being gender-biased towards men: ‘Not for girls’. There is no hiding the fact that the language the company uses is targeted at men, promoting a brand that is strong, chunky and ‘hard’ in an unsophisticated way, all of which has connotations of being ‘male’, arguably even ‘alpha male’, to make it more attractive to men. The chosen colours also suggest this, using navy blue, dark purple, yellow and red, which are bold and form a typically ‘male’ palette. Another example is the advertisement of tissues. Tissues, no matter where you buy them, do exactly the same thing irrespective of gender, so why are some tissues targeted at women and some at men? Could it be that avoiding neutrality through this gender targeting helps sell more tissues?

Product language is very gender specific when it comes to clothing brands and toys for kids. “Girls should wear princess dresses, play with dolls and toy housework products, while boys should wear dark clothes with prints of skulls or dinosaurs, and should play with war toys and construction kits”. (Ehrnberger, Rasanen, Ilstedt, 2012. Visualising Gender Norms in Design. International Journal of Design). When branding things for children, the separation between girl and boy is extremely common; using language like ‘action’, which has male connotations, or ‘princess’, which has female connotations, appeals to consumers because the words are relatable to them and to their children. In modern society most people find it difficult not to identify blue with boys and pink with girls, especially from birth. If you walk into any department store, toy store or other store that caters to children, you will see the separation between genders, whether in clothes, toys or anything in between. The separation is made obvious through the colour branding used. On the girls’ side, pink, yellow and lilac are used: soft, bright, happy colours applied to everything from toy babies and dolls to hats and scarves. Conversely, on the boys’ side, blue, green and black: bold, dark, more primary colours applied to everything from trucks to a pair of trousers.

Some companies have begun to notice how detrimental this separation is becoming and how it could hold back the advancement and opening up of our society; one example is the John Lewis Partnership.

John Lewis is a massive department store that has been in business for over a century. In 2017 it decided to scrap the separate girls’ and boys’ sections for the clothing range in its stores and name the range ‘Childs wear’, a gender-neutral name, allowing it to design clothing that children can wear without being told ‘no, that is a boys’ top, you can’t wear that because you’re a girl’, or vice versa. Caroline Bettis, head of children’s wear at John Lewis, said: “We do not want to reinforce gender stereotypes within our John Lewis collections and instead want to provide greater choice and variety to our customers, so that the parent or child can choose what they would like to wear”. Possibly the only issue with this stance is the price point: John Lewis is typically known for being a higher-priced high street store, which means it is not accessible for everyone to shop there. Campaign group Let Clothes be Clothes commented on this: “Higher-end, independent clothing retailers have been more pro-active at creating gender-neutral collections, but we hope unisex ranges will filter down to all price points. We still see many of the supermarkets, for example, using stereotypical slogans on their clothing,” (http://www.telegraph.co.uk/news/2017/09/02/john-lewis-removes-boys-girls-labels-childrens-clothes/).

Having a very well-known brand make this move should reinforce, encourage and inspire others to join in with the development. This change is a bold use of product language: it is not just for one specific product but covers advertising and marketing as well, meaning it is a whole rebrand of the company. By not using gender-specific words, it takes away the automatic stereotypes you encounter when buying anything for children.

Equality is the state of being equal, be it in status, rights or opportunities, so when it comes to design why does this attribute get forgotten? This is not a feminist rant: gender inequality affects both males and females in the design world, and when designing, everything should be equal and fair to both sexes. “Gender equality and equity in design is often highlighted, but it often results in producing designs that highlight the differences between men and women, although both the needs and characteristics vary more between individuals than between genders” (Hyde 2005). Hyde’s point is still contemporary and relevant: having gender equality in design is very important, but gender isn’t the sole issue. Things can be designed for a specific gender, yet even if you are female you might not relate to the gender-specific clothes for your sex. Design is about making and creating something for someone, not just for a gender. “Post- feminism argues that in an increasingly fragmented and diverse world, defining one’s identity as male or female is irrelevant, and can be detrimental”. (https://www.cl.cam.ac.uk/events/experiencingcriticaltheory/Satchell-WomenArePeople.pdf).

In recent years many up-and-coming independent brands and companies have been launching unisex clothing lines, and most were doing so and pushing the movement well before gender equality in design became a mainstream media issue. One company pushing against gender norms is Toogood London; another is GFW, Gender Free World. Gender Free World was created by a group of people who all think on the same wavelength when it comes to gender equality. In fact, their mission statement sets this out as a core ethos (which, incidentally, is an obvious influence on John Lewis when you look at the transferability of the phraseology): “GFW Clothing was founded in 2015 (part of Gender Free World Ltd) by a consortium of like-minded individuals who passionately believe that what we have in our pants has disproportionately restricted the access to choice of clothing on the high street and online.” https://www.genderfreeworld.com/pages/about-g. Lisa Honan is the co-founder of GFW; her main reason for starting a company like this was ‘sheer frustration’ at the lack of options for her taste and style on the market. She had shopped in both male and female departments but never found anything that fitted, especially if she was going for a male piece of clothing. During an interview with Saner, Honan commented that men’s shirts didn’t fit her because she had a woman’s body, and it got her thinking: ‘why is there a man’s aisle and a woman’s aisle, and why do you have to make that choice?’. She saw that you are rarely able to make a purchase without being forced to define your own gender, which reinforces the separation between genders in fashion. If she feels this way, many others must too, and they do, or there would not be such a big potential business opportunity in it.

In my design practice of Communication Design, gender plays a huge role, from colour choices to the typefaces used. Most of the work communication designers need to create and produce will either represent a brand or brand a company, so when making choices, potential gender stereotyping should be taken into consideration. The points mentioned above, about the gender system, product language, gender norms and equality and equity in design, serve as a caution to graphic designers not to fall into any pitfalls when designing.

Designing doesn’t mean simply male or female; designing means creating and producing ‘something’ for ‘someone’, no matter their identified or chosen gender. If a company produces products targeted specifically at men and, after a robust examination of the design concept, I felt that using blue would enhance their brand and its awareness among their target demographic, then blue would be used, in just the same way as pink would be used for them if it worked for the customer. Put simply, if it works, it works.

To conclude, exploring the key points of gender in the design world only showcases the many issues that remain.

2017-12-11-1513023430

The stigma surrounding mental illness

Mental illness is defined as a health problem resulting from complex interactions between an individual’s mind, body and environment which can significantly affect their behavior, actions and thought processes. A variety of mental illnesses exist, impacting the body and mind differently and affecting the individual’s mental, social and physical wellbeing to varying degrees. A range of psychological treatments have been developed to assist people living with mental illness; however, social stigma can prevent individuals from successfully engaging with these treatments. Social or public stigma is characterized by discriminatory behavior and prejudicial attitudes towards people with mental health problems resulting from the psychiatric label they possess (Link, Cullen, Struening & Shrout, 1989). The stigma surrounding labelling oneself with a mental illness causes individuals to hesitate to seek help and to resist treatment options. Stigma and its effects can vary depending on demographic factors including age, gender, occupation and community. Many strategies are in place to attempt to reduce stigma levels, focusing on educating people and changing their attitudes towards mental health.

Prejudice, discrimination and ignorance surrounding mental illnesses result in a public stigma which has a variety of negative social effects on individuals with mental health problems (Thornicroft et al 2007). An understanding of how stigma forms can be gained through the Attribution Model, which identifies four steps involved in the formation of a stigma (Link & Phelan, 2001). The first step is ‘labelling’, whereby key traits are recognized as portraying a significant difference. The next step is ‘stereotyping’, whereby these differences are defined as undesirable characteristics, followed by ‘separating’, which makes a distinction between ‘normal’ people and the stereotyped group. Stereotypes surrounding mental illnesses have been developing for centuries, with early beliefs being that individuals suffering from mental health problems were possessed by demons or spirits. ‘Explanations’ such as these promoted discrimination within the community, preventing individuals from admitting any mental health problems due to a fear of retribution (Swanson, Holzer, Ganju & Jono, 1990). The final step in the Attribution Model described by Link and Phelan is ‘status loss’, which leads to the devaluing and rejection of individuals in the labelled group (Link & Phelan, 2001). An individual’s desire to avoid the implications of public stigma causes them to avoid or drop out of treatment for fear of being associated with negative stereotypes (Corrigan, Druss and Perlick, 2001). One of the main stereotypes surrounding mental illness, especially depression and Post Traumatic Stress Disorder, is that people with these illnesses are dangerous and unpredictable (Wang & Lai, 2008). Wang and Lai carried out a survey in which 45% of participants considered people with depression dangerous; these results may be subject to some reporting bias, yet a general inference can be made. Another survey found that a large proportion of people also confirmed that they were less likely to employ someone with mental health problems (Reavley & Jorm, 2011). This study highlights how public stigma can affect employment opportunities, consequently creating a greater barrier for anyone who would benefit from seeking treatment.

Certain types of stigma are unique, and consequently more severe, for certain groups within society. Approximately 22 soldiers or veterans commit suicide every day in the United States due to Post Traumatic Stress Disorder (PTSD) and depression. A study surveying soldiers found that of all those who met the criteria for a mental illness, only 38% would be interested in receiving help and only 23-30% actually ended up receiving professional help (Hoge et al, 2004). There is an enormous stigma surrounding mental illness within the military, because it places high value on mental fortitude, strength, endurance and self-sufficiency (Staff, 2004). A soldier who admits to having mental health problems is deemed not to adhere to these values and thus appears weak or dependent, placing greater pressure on the individual to deny or hide any mental illness. Another contributor to soldiers avoiding treatment is a fear of social exclusion, as it is common in military culture for some personnel to socially distance themselves from soldiers with mental health problems (Britt et al, 2007). This exclusion is due to the stereotype that mental health problems make a soldier unreliable, dangerous and unstable. Surprisingly, individuals with mental health problems who seek treatment are deemed more emotionally unstable than those who do not, so the stigma surrounding therapy creates a barrier for individuals to start or continue their treatment (Porath, 2002). Furthermore, soldiers face the fear that seeking treatment will negatively affect their career, both in and out of the military, with 46 percent of employers considering PTSD an obstacle when hiring veterans in a 2010 survey (Ousley, 2012). The stigma associated with mental illness in the military is extremely detrimental to soldiers’ wellbeing, as it prevents them from seeking or successfully engaging in treatment for mental illnesses, which can have tragic consequences.

Adolescents and young adults with mental illness have the lowest rate of seeking professional help and treatment, despite the high occurrence of mental health problems (Rickwood, Deane & Wilson, 2007). Adolescents’ lack of willingness to seek help and treatment for mental health problems is catalyzed by the anticipation of negative responses from family, friends and school staff (Chandra & Minkovitz, 2006). A Queensland study of people aged 15–24 years showed that 39% of the males and 22% of the females reported that they would not request help for emotional or distressing problems (Donald, Dower, Lucke & Raphael, 2000). A 2010 survey of adolescents with mental health problems found that 46% described experiencing feelings of distrust, avoidance, pity and prejudice from family members. This shows how negative family responses and attitudes create a significant barrier to seeking help (Moses, 2010). Similarly, a study on adolescent depression noted that teenagers who felt more stigmatized, particularly within the family, were less likely to seek treatment (Meredith et al., 2009). Furthermore, adolescents with unsupportive parents would struggle to pay for treatment and transportation, further preventing successful treatment of the illness. Unfortunately, the generation of stigma is not unique to family members; adolescents also report having felt discriminated against by peers and even school staff (Moses, 2010). The first step to seeking help and engaging in treatment for mental illness is to acknowledge that there is a problem and to be comfortable enough to disclose this information to another person (Rickwood et al, 2005). However, in another 2010 study of adolescents, many expressed fear of being bullied by peers, subsequently leading to secrecy and shame (Kranke et al., 2010). The role of public stigma in generating this shame and denial is significant and can thus be defined as a factor preventing adolescents from seeking support for their mental health problems. A 2001 study testing the relationship between adherence to medication (in this case, antidepressants) and perceived stigma levels determined that individuals who accepted the antidepressants had lower perceived stigma levels (Sirey et al, 2001). This empirical data clearly illustrates the correlation between public stigma levels and an individual’s engagement in treatment, suggesting that stigma remains a barrier to treatment. Public stigma can therefore be defined as a causative factor in the majority of adolescents not seeking support or treatment for their mental health problems.

One of the main strategies used by society to reduce the public stigma surrounding mental illness is education. Educating people about the common misconceptions of mental health challenges the inaccurate stereotypes and substitutes them with factual information (Corrigan et al., 2012). There is substantial evidence that people who have more information about mental health problems are less stigmatizing than people who are misinformed about them (Corrigan & Penn, 1999). The low cost and far-reaching nature are beneficial aspects of the educational approach. Educational approaches are often aimed at adolescents, as it is believed that by educating children about mental illness, stigma can be prevented from emerging in adulthood (Corrigan et al., 2012). A 2001 study testing the effect of education on 152 students found that levels of stigmatization were lessened following the implementation of the strategy (Corrigan et al, 2001). However, it was also determined that combining a contact-based approach with the educational strategy would yield the highest levels of stigma reduction. Studies have also shown that a short educational program can be effective at reducing individuals’ negative attitudes toward mental illness and increasing their knowledge of the issue (Corrigan & O’Shaughnessy, 2007). The effect of an educational strategy varies depending on what type of information is being communicated. The information provided should deliver realistic descriptions of mental health problems and their causes, as well as emphasizing the benefits of treatment. By delivering accurate information, the negative stereotypes surrounding mental illness can be decreased and the public’s views on the controllability and treatment of psychological problems can be altered (Britt et al, 2007). Educational approaches mainly focus on improving knowledge and attitudes surrounding mental illness and do not focus directly on changing behavior. Therefore, a link cannot clearly be made as to whether educating people actually reduces discrimination. Although this remains a major limitation, educating people at an early age can help ensure that discrimination and stigmatization decrease in the future. Reducing the negative attitudes surrounding mental illness can encourage those suffering from mental health problems to seek help. Providing individuals with correct information regarding the mechanisms and benefits of treatment, such as psychotherapy or drugs like antidepressants, increases their mental health literacy and therefore the likelihood of seeking treatment (Jorm and Korten, 1997). People who are educated about mental health problems are less likely to believe or generate stigma surrounding mental illnesses and therefore contribute to reducing stigma, which in turn will increase levels of successful treatment for themselves and others.

The public stigma surrounding mental health problems is defined by negative attitudes, prejudice and discrimination. This negativity in society is very debilitating for any individual suffering from mental illness and creates a barrier to seeking help and engaging in successful treatment. The negative consequences of public stigma for individuals include being excluded, being passed over for jobs, and having friends and family become socially distant. By educating people about the causes, symptoms and treatment of mental illnesses, stigma can be reduced, as misinformation is usually a key factor in the promotion of harmful stereotypes. An individual is more likely to engage in successful treatment if they accept their illness and if stigma is reduced.

2016-10-9-1475973764

Frederick Douglass, Malcolm X and Ida Wells

Civil rights are “the rights to full legal, social, and economic equality”. Following the American Civil War, slavery was officially abolished in the United States of America (US) on December 6th, 1865. The Fourteenth and Fifteenth Amendments established a legal framework for political equality for African Americans; many thought that this would lead to equality between whites and blacks, however this was not the case. Despite slavery’s abolition, Jim Crow racial segregation in the South meant that blacks were denied political rights and freedoms and continued to live in poverty and inequality. It took nearly 100 years of campaigning until the Civil Rights and Voting Rights Acts were passed, making it illegal to discriminate based on race, colour, religion, sex or national origin and ensuring minority voting rights. Martin Luther King was prominent in the Modern Civil Rights Movement (CRM), playing a key role in legislative and social change. His assassination in 1968 marked the end of a distinguished life spent helping millions of African Americans across the US. The contributions of black activists including the politician Frederick Douglass, the militant Malcolm X and the journalist Ida Wells throughout the period will be examined from a political, social and economic perspective. When comparing their significance to that of King, consideration must be given to the time in which each activist was operating and to prevailing social attitudes. Although King was undeniably significant, it was the combined efforts of all the black activists and the mass protest movement in the mid-20th century that eventually led to African Americans gaining civil rights.

The significance of King’s role is explored in Clayborne Carson’s ‘The Papers of Martin Luther King’ (Appendix 1). Carson, a historian at Stanford University, suggests that “the black movement would probably have achieved its major legislative victory without King’s leadership”. Carson does not believe King was pivotal in gaining civil rights, but that he quickened the process. The mass public support shown in the March on Washington, 1963, suggests that Carson is correct in arguing that the movement would have continued its course without King. However, it was King’s oratory skill in his ‘I Have a Dream’ speech that was most significant. Carson suggests key events would still have taken place without King: “King did not initiate…” the Montgomery bus boycott; rather, Rosa Parks did. His analysis of the idea of a ‘mass movement’ furthers his argument that King’s role was less significant. Carson suggests that ‘mass activism’ in the South resulted from socio-political forces rather than ‘the actions of a single leader’. King’s leadership was not vital to the movement gaining support, and legislative change would have occurred regardless. The source’s tone is critical of King’s significance but passive in its dismissal of his role; phrases such as “without King” are used to diminish him in a less aggressive manner. Carson, a civil rights historian with a PhD from UCLA, has written books and documentaries including ‘Eyes on the Prize’ and so is qualified to judge. The source was published in 1992 in conjunction with King’s wife, Coretta, who took over as head of the CRM after King’s assassination and extended its role to include women’s rights and LGBT rights. Although this may make him subjective, he attacks King’s role, suggesting he presents a balanced view. Carson produced his work two decades after the movement and three decades before the ‘Black Lives Matter’ marches of the 21st century, and so was less politically motivated in his interpretation. The purpose of his work was to edit and publish the papers of King on behalf of The King Institute, to show King’s life and the CRM he inspired. Overall, Carson argues that King had significance in quickening the process of gaining civil rights, but he believes that without his leadership the campaigning would have taken a similar course and that US mass activism was the main driving force.

In his book ‘Martin Luther King Jr.’ (Appendix 2), the historian Peter Ling argues, like Carson, that King was not essential to the movement, but differs in suggesting that it was other activists, not mass activism, who brought success. Ling believes that ‘without the activities of the movement’ King might just have been another ‘Baptist preacher who spoke well.’ It can be inferred that Ling believes King was not vital to the CRM and was simply a good orator.

Ling’s reference to the activist Ella Baker (1903-86), who ‘complained that “the movement made Martin, not Martin the Movement”’, suggests that King’s political career was of more importance to him than the goal of civil rights. Baker told King she disapproved of his being hero-worshipped, and others argued that he was ‘taking too many bows and enjoying them’. Baker promoted activists working together, as seen through her influence in the Student Nonviolent Coordinating Committee (SNCC). Clearly many believed King was not the only individual to have an impact on the movement, and so Ling’s argument that multiple activists were significant is further highlighted.

Finally, Ling argues that ‘others besides King set the pace for the Civil Rights Movement’, which explicitly shows how other activists working for the movement were the true heroes: they orchestrated events and activities, yet it was King who benefitted. However, King himself suggested that he was willing to use successful tactics suggested by others. The work of activists such as Philip Randolph, who organised the 1963 March, highlights how other individuals played a greater role in moving the CRM forward than King. The tone attacks King, using words such as ‘criticisms’ to diminish his role, while Ling says that he has ‘sympathy’ for Miss Baker, showing his positive tone towards other activists.

Ling was born in the UK and studied History at Royal Holloway College before taking an MA in American Studies at the Institute of United States Studies, London. This gives Ling an international perspective, making him less subjective as he has no political motivations; nevertheless, it also limits his interpretation in that he has no first-hand knowledge of civil rights in the US. The book was published in 2002, which gives Ling hindsight, making his judgment more accurate and less subjective as he is no longer affected by King’s influence. Similarly, his knowledge of American history and the CRM makes his work accurate. Unlike Carson, who was a black activist and attended the 1963 March, Ling, who is white, was born in 1956 and was not involved with the CRM, and so will have a less accurate interpretation. A further limitation is his selectivity; he gives no attention to the successes of King, including his inspiring ‘I Have a Dream’ speech. As a result, it is not a balanced interpretation and thus its value is limited.

Overall, although weaker than Carson’s interpretation, Ling does give an argument that is of value in understanding King’s significance. Both revisionists, the two historians agree that King was not the most significant factor in gaining civil rights, but they differ on what they see as more important: Carson argues that mass activism was vital to success, whereas Ling believes it was other activists.

A popular pastor in the Baptist Church, King was the leader of the CRM when it achieved its black rights successes in the 1960s. He demonstrated the power of the church and the NAACP in the pursuit of civil rights. His oratory skills ensured many blacks and whites attended the protests and increased support. He understood the power of the media in getting his message to a wide audience and in putting pressure on the US government. The Birmingham campaign of 1963, where peaceful protestors including children were violently attacked by police, and the inspirational ‘Letter from Birmingham Jail’ that King wrote were heavily publicised, and US society gradually sympathised with the black ‘victims’. Winning the Nobel Peace Prize gained the movement further international recognition. King’s leadership was instrumental in the political achievements of the CRM, inspiring the grassroots activism needed to apply enough pressure on government, which behind-the-scenes activists like Baker had worked tirelessly to build. Nevertheless, there had been a generation of activists who played their parts, often through the church, publicising the movement, achieving early legislative victories and helping to kick-start the modern CRM and the idea of nonviolent civil disobedience. King’s significance is that he was the figurehead of the movement at the time when civil rights were eventually won.

The pioneering activist Frederick Douglass (1818-95) had political significance for the CRM, holding federal positions which enabled him to influence government and Presidents throughout the Reconstruction era. He is often called the ‘father of the civil rights movement’. Douglass held several prominent roles, including US Marshal for DC. He was the first black man to hold high office in government and, in 1872, the first African American nominated for US Vice President, which was particularly significant as blacks’ involvement in politics was severely restricted at the time. Like King he was a brilliant orator, lecturing on civil rights in the US and abroad. Compared to King, Douglass was significant in the CRM: he promoted equality for blacks and whites, and although, unlike King, he did not ultimately achieve black civil rights, this was because he was constrained by the era in which he lived.

The contribution of W.E.B. Du Bois (1868-1963) was significant as he laid the foundations for future black activists, including King, to build on. In 1909 he established the National Association for the Advancement of Coloured People (NAACP), the most important 20th-century black organisation other than the church. King became a member of the NAACP and used it to organise the bus boycott and other mass protests. The importance of Du Bois to the CRM is therefore that King’s success depended on the NAACP, making Du Bois of similar significance to King, if not more, in pursuing black civil rights.

Ray Stannard Baker’s 1908 article for The American Magazine speaks of Du Bois’ enthusiastic attitude to the CRM, his intelligence and his knowledge of African Americans (Appendix 3). The quotation of Du Bois at the end of the extract reads “Do not submit! agitate, object, fight,” showing he was not passive but preaching messages of rebellion. The article describes him with vocabulary such as “critical” and “impatient”, showing his radical, passionate side. Baker also sets out Du Bois’ contrasting opinions compared to Booker T. Washington, one of his contemporary black activists; this is evident when it says “his answer was the exact reverse of Washington’s”, demonstrating how he differed from the passive, ‘education for all’ Washington. Du Bois valued education, but believed in educating an elite few, the ‘talented tenth’, who could strive for rapid political change. The tone is positive towards Du Bois, praising him as a ferocious character dedicated to achieving civil rights; through phrases such as “his struggles and his aspirations” this dedicated and praising tone is developed. The American Magazine, founded in 1906, was an investigative US paper. Many contributors to the magazine were ‘muckraking’ journalists, meaning that they were reformists who attacked societal views and traditions. As a result, the magazine would be subjective, favouring the radical Du Bois, challenging the Jim Crow South and appealing to its radical target audience. The purpose of the source was to confront racism in the US, and so it would be politically motivated, making it subjective regarding civil rights. However, some evidence suggests that Du Bois was not radical: his Paris Exposition of 1900 showed the world real African Americans. Socially he made a major contribution to black pride, contributing to the black unity felt during the Harlem Renaissance. The Renaissance popularised black culture and so was a turning point in the movement; in the years after, the CRM grew in popularity and became a national issue. Finally, the source refers to his intelligence and educational prowess; he carried out economic studies for the US Government and was educated at Harvard and abroad. It can therefore be inferred that Du Bois rose to prominence and made a significant contribution to the movement because of his intelligence and his understanding of US society and African American culture. As one of the founders of the NAACP, his significance in attracting grassroots activists and uniting black people was vital. The NAACP leader Roy Wilkins, speaking at the March on Washington the day after Du Bois’ death, highlighted his contribution, saying, “his was the voice that was calling you to gather here today in this cause.” Wilkins is suggesting that Du Bois had started the process which led to the March.

Rosa Parks (1913-2005) and Charles Houston (1895-1950) were NAACP activists who benefitted from the work of Du Bois and achieved significant political success in the CRM. Parks, the “Mother of the Freedom Movement”, was the spark that ignited the modern CRM by protesting on a segregated bus. Following her refusal to move to the black area she was arrested, and Parks, King and NAACP members staged a year-long bus boycott in Montgomery. Had it not been for Parks, King might never have had the opportunity to rise to prominence or to gain mass support for the movement, so her activism was key in shaping King. The lawyer Houston helped defend black Americans, breaking down the deep-rooted discrimination and segregation laws of the South. It was his ground-breaking use of sociological theories that formed the basis of Brown v. Board of Education (1954), which ended segregation in schools. Although Houston is less prominent than King, his work was significant in reducing discrimination against blacks, earning him the nickname ‘The man who killed Jim Crow’. Nonetheless, had Du Bois’ NAACP not existed, Parks and Houston would never have had an organisation to support them in their fight; likewise, King would never have gained mass support for civil rights.

The trade unionist Philip Randolph (1890-1979) brought about important political changes. His pioneering use of nonviolent confrontation had a significant impact on the CRM and was widely used throughout the 1950s and 60s. Randolph had become a prominent civil rights spokesman after organising the Brotherhood of Sleeping Car Porters in 1925, the first black-majority union. Mass unemployment after the US Depression led to civil rights becoming a political issue; US trade unions supported equal rights and black membership grew. Randolph was striving for political change that would bring equality. Aware of his influence, in 1941 he threatened a protest march which pressured President Roosevelt into issuing Executive Order 8802, an important early employment civil rights victory. There was then a shift in the movement’s focus towards the military, because after the Second World War black soldiers felt disenfranchised and became the ‘foot soldiers of the CRM’, fighting for equality in these mass protests. Randolph led peaceful protests which resulted in President Truman issuing Executive Order 9981, desegregating the Armed Forces, showing his key political significance. Significantly, this legislation was a catalyst for further desegregation laws. His contribution to the CRM, support of King’s leadership and masterminding of the 1963 March made his significance equal to King’s.

King realised that US society needed to change and, inspired by Gandhi, he too used non-violent mass protest to bring about change, including the Greensboro sit-ins to desegregate lunch counters. Similarly, the activist Booker T. Washington (1856-1915) significantly improved the lives of thousands of southern blacks who were poorly educated and trapped in poverty following Reconstruction, through his pioneering work in black education. He founded the Tuskegee Institute. In his book ‘Up from Slavery: An Autobiography’ (Appendix 4) he suggests that gaining civil rights would be difficult and slow, but that all blacks should work on improving themselves through education and hard work to peacefully push the movement forward. He says that “the according of the full exercise of political rights” will not be an “overnight gourdvine affair” and that a black man should “deport himself modestly in regard of political claim”. This implies that Washington wanted peaceful protest and acknowledged the time it would take to gain equality, making his philosophy like King’s. Washington’s belief in using education to gain the skills to improve lives and fight for equality is evident through the Tuskegee Institute, which educated 2000 blacks a year.

The tone of the source is peaceful, calling for justice in the South. Washington uses words such as “modestly” in an appeal for peace and “exact justice” to show how he believes in equal political rights for all. The reliability of the source is mixed: Washington is subjective, as he wants his autobiography to be read, understood and supported. The intended audience would have been anyone in the US, particularly blacks, whom Washington wanted to inspire to protest, and white politicians who could advance civil rights. The source is accurate in that it was written in 1901, during the era of the Jim Crow South. Washington would have been politically motivated in his autobiography, demanding legislative change to give blacks civil rights. There would also have been an educational factor contributing to his writing, with his Tuskegee Institute and educational philosophy having a deep impact on his autobiography.

The source shows how and why the unequal South should no longer be segregated. Undoubtedly significant, as his reputation grew Washington became an important public speaker and, like King, is considered to have been a leading spokesman for black people and their issues. An excellent role model, a former slave who influenced statesmen, he was the first black man to dine with the President (Roosevelt) at the White House, showing blacks they could achieve anything. The activist Du Bois described him as “the one recognised spokesman of his 10 million fellows … the most striking thing in the history of the American Negro”. Although not as decisive in gaining civil rights as King, Washington was important in preparing blacks for urban and working life and in empowering the next generation of activists.

Inspired by Washington, the charismatic Jamaican radical activist Marcus Garvey (1880-1940) arrived in the US in 1916. Garvey had a social significance for the movement, striving to better the lives of US blacks. He rose to prominence during the ‘Great Migration’, when poor southern blacks were moving to the industrial North, turning Southern race problems into national ones. He founded the Universal Negro Improvement Association (UNIA), which had over 2,000,000 members in 1920. He appealed to discontented First World War black soldiers who had returned home to violent racial discrimination; the First World War was paramount in enabling Garvey to gain the vast support he did in the 1920s. Garvey published a newspaper, the Negro World, which spread his ideas about education and Pan-Africanism, the political union of all people of African descent. Like King, Garvey gained a greater audience for the CRM: in 1920 he led an international convention in Liberty Hall and a 50,000-strong parade through Harlem. Garvey inspired later activists such as King.

2018-7-12-1531405547

Reflective essay on use of learning theories in the classroom

Over recent years teaching theories have become more common in the classroom, all in the hope of supporting students and being able to further their knowledge by understanding their abilities and what they need to develop. As a teacher it is important to embed teaching and learning theories in the classroom so that we can teach students according to their individual needs.

Throughout my research I will be looking into the key differences between two theories used in classrooms today. I will also critically analyse the role of the teacher in the lifelong learning sector by examining the professional and legislative frameworks, as well as seeking a deeper understanding of classroom management: why it is used, how to manage different classroom environments, and how inclusion is managed and supported through different methods.

Overall, I will be linking this to my own teaching at A Mind Apart (A Mind Apart, 2019). Furthermore, I will gain an understanding of interaction within the classroom and why communication between fellow teachers and students is important.

The teacher has traditionally been seen as the forefront of knowledge. This suggests that the role of the teacher is to pass their knowledge on to their students, known as a ‘chalk and talk’ approach, although this approach is outdated and there are now various other ways of teaching in the classroom. Walker believes that ‘the modern teacher is facilitator: a person who assists students to learn for themselves’ (Reece & Walker 2002). I cannot say I fully believe in this approach, as all students have individual learning needs, and some may need more help than others. As the teacher, it is important to know the full capability of your learners so that lessons can be structured to the learners’ needs. It is important for lessons to involve active learning and discussion, as these help keep students engaged and motivated during class. Furthermore, it is important not only to know what you want the students to be learning, but also to know what you are teaching: it is important to be prepared and fully involved in your own lesson before you go into any class. As a teacher I make my students my priority and leave any personal issues outside the door, so I am able to give my students the best learning environment they could possibly have. It is also important to keep your subject specialism up to date; I double-check my knowledge of my subject regularly, and I find that by following this structure my lessons will normally run at a smooth pace.

Taking into consideration that the students I teach are vulnerable, there may be minor interruptions. It is important not only that you as the teacher leave your issues at the door, but also that the room is free from distractions; many young adults have situations which they find hard to deal with, which means you as the teacher are there not only to educate but to make the environment safe and relaxing for your students to enjoy learning in. As teachers we not only have the responsibility of making sure the teaching takes place, but we also have responsibilities around exams, qualifications and Ofsted; and as a teacher in the lifelong learning sector it is also vital that you evaluate not only your learners’ knowledge but also yourself as a teacher, so that you are able to improve your teaching strategies and keep up to date.

When assessing yourself and your students it is important not to wait until the end of a term, but to evaluate throughout the whole term. Small assessments are a good way of doing this; it does not always have to be a paper examination. You can equally do a quiz, ask questions, use various fun games, or even use online games such as Kahoot to help your students retain their knowledge. This will not only help you as a teacher understand your students’ abilities, but will also help your students know what they need to work on next term.

Alongside the roles and responsibilities already listed for a teacher in the lifelong learning sector, Ann Gravells explains that,

‘Your main role as a teacher should be to teach your students in a way that actively involves and engages your students during every session’ (Gravells, 2011, p.9.)

Gravells' passion is helping new teachers gain the knowledge and information they need to become successful in the lifelong learning sector. She has achieved this by writing various textbooks on the sector. In her book 'Preparing to Teach in the Lifelong Learning Sector' (Gravells, 2011), she sets out the importance of 13 pieces of legislation. Although I find each of them equally important, I am going to mention the ones I am most likely to draw on during my teacher training with A Mind Apart.

Safeguarding Vulnerable Groups Act (2006) – Working with vulnerable young adults, I find this is the act I am most likely to draw on during my time with A Mind Apart. In summary, the Act explains the following: 'The ISA will make all decisions about who should be barred from working with children and vulnerable adults.' (Southglos.gov.uk, 2019)
The Equality Act (2010) – As I will be working with people of different sexes, races and disabilities in any teaching job I encounter, I believe the Equality Act (2010) is fundamental to mention. The Equality Act 2010 brings discrimination law together under one piece of legislation.
Code of Professional Practice (2008) – This code covers all aspects of the activities we as teachers in the lifelong learning sector may encounter. It is based around seven behaviours, including professional practice, professional integrity, respect, reasonable care, criminal offence disclosure, and responsibility during Institute investigations.

(Gravells, 2011)

Although all the acts are equally important, those are the few I would find myself using most regularly. I have listed the others below:

Children Act (2004)
Copyright, Designs and Patents Act (1988)
Data Protection Act (1998)
Education and Skills Act (2008)
Freedom of Information Act (2000)
Health and Safety at Work Act (1974)
Human Rights Act (1998)
Protection of Children Act (POCA) (1999)
The Further Education Teachers' Qualifications Regulations (2007)

(Gravells, 2011)

Teaching theories are now commonly applied in classrooms. There are three main teaching theories which we as teachers are known for using daily and which are generally found to work best: behaviourism, cognitive constructivism and social constructivism. Taking these into consideration, I will compare Skinner's behaviourist theory with Maslow's 'Hierarchy of Needs' (Maslow, 1987), first introduced in 1954, and consider how I could use these theories in my teaching as a drama teacher in the lifelong learning sector.

Firstly, behaviourism is mostly described as the teacher questioning and the student responding in the way you want them to. Behaviourism is a theory which, used to its full advantage, can shape how the student acts and behaves. Keith Pritchard (Language and Learning, 2019) describes behaviourism as 'A theory of learning focusing on observable behaviours and discounting any mental activity. Learning is defined simply as the acquisition of a new behaviour.' (E-Learning and the Science of Instruction, 2019).

An example of how behaviourism works is best demonstrated through the work of Ivan Pavlov (Encyclopaedia Britannica, 2019). Pavlov was a physiologist at the start of the twentieth century who used a method called 'conditioning' (Encyclopaedia Britannica, 2019), which closely resembles the behaviourist theory. During his experiment, Pavlov 'conditioned' dogs to salivate when they heard a bell ring: as soon as the dogs heard the bell, they associated it with being fed. As a result, the dogs behaved exactly as Pavlov wanted them to, and so had successfully been 'conditioned'. (Encyclopaedia Britannica, 2019)

Pavlov's classical conditioning involves four main stages:

Acquisition, which is the initial learning;
Extinction, meaning the dogs in Pavlov's experiment may stop responding if no food is presented to them;
Generalisation, where, after learning a response, the dog may respond to other stimuli with no further training. For example, if a child falls off a bike and injures themselves, they may be frightened to get back on the bike again. And lastly,
Discrimination, which is the opposite of generalisation: the dog will not respond in the same way to another stimulus as it did to the first one.

Pritchard states that conditioning 'involves reinforcing a behaviour by rewarding it', which is what Pavlov's dog experiment does. Although rewarding behaviour can be positive, reinforcement can also be negative; for example, bad behaviour can be discouraged by punishment. The key aspects of conditioning are as follows: reinforcement, positive reinforcement, negative reinforcement and shaping. (Encyclopaedia Britannica, 2019)

Behaviourism is one of the learning theories I use in my teaching today. Working at A Mind Apart (A Mind Apart, 2019), a performing arts organisation aimed especially at vulnerable and challenging young people and at helping to better their lives, I work with challenging young people; used well, the behaviourist approach can genuinely inspire them to do better. Behaviourism rests on the principle of stimulus and response: it is driven by the teacher, who decides how the student should behave and how that behaviour is achieved. The theory emerged in the early twentieth century from the study of how individuals behave. In my work as a trainee performing arts teacher at A Mind Apart I can relate to behaviourism a great deal: every Thursday, when my two-hour class finishes, I take five minutes to award a 'Star of the Week'. It is an excellent way to encourage students to keep behaving as they have been and to give them something to strive towards. I have also found that this approach works well in any subject specialism, not just performing arts. The behaviourist theory is straightforward, as it relies only on observable behaviour and describes several general laws of behaviour, and its positive and negative reinforcement strategies can be very effective. The students we teach at A Mind Apart often come to us with mental health issues, which is why many of them find it hard to focus or to learn in a school environment. We are there to provide an inclusive learning environment and to use the time we have with them so that they can move forward at their own pace, build their academic and social skills, and go on to college or employment when they leave us, meeting new people and gaining useful knowledge along the way. Although it is not always easy to shape how someone thinks or behaves, with time and persistence I have found that this theory can work. It is known that…

'Positive reinforcement or rewards can include verbal feedback such as 'That's great, you've produced that document without any errors' or 'You're certainly getting on well with that task' through to more tangible rewards such as a certificate at the end'… (online teaching resource, 2019)

Gagne (Mindtools.com, 2019) was an American educational psychologist best known for his nine levels of learning. I have done some in-depth research into a couple of these nine levels (Mindtools.com, 2019) so that I can understand them and see how his theory links to behaviourism.

Create an attention-grabbing introduction.
Inform learner about the objectives.
Stimulate recall of prior knowledge.
Create goal-centred eLearning content.
Provide online guidance.
Practice makes perfect.
Offer timely feedback.
Assess early and often.
Enhance transfer of knowledge by tying it into real world situations and applications.

(Mindtools.com, 2019)

Informing the learner of the objectives is the level I relate to most during my lessons. I find it important, for many reasons, that you as the teacher let your students know what they are going to be learning in that specific lesson. This gives them a better understanding throughout the lesson and engages them from the very start. Linking it to behaviourism, during my lessons I tell my students what I want from them that lesson and what I expect them, with their individual needs, to be learning or to have learnt by the end. If I believe learning has taken place, I reward them with a game of their choice at the end of the lesson. In their minds they understand they must do as the teacher asks, or the reward of playing a game at the end of the lesson will be forfeited. Pavlov's dog experiment (E-Learning and the Science of Instruction, 2019) suggests this approach does work, though it can take a lot of effort. I have built a great relationship with my students, and most of the time they are willing to work to the best of their ability.

Although Skinner's behaviourist theory (E-Learning and the Science of Instruction, 2019) is based around shaping behaviour, Maslow's 'Hierarchy of Needs' (Verywell Mind, 2019) holds that behaviour and the way people act are based upon childhood events, so it is not always easy to mould someone to think the way you do, as they may have had a completely different upbringing which determines how they act. Maslow (Verywell Mind, 2019) suggests that if you remove the obstacles that stop a person from achieving, they will have a better chance of reaching their goals; he argues there are five different levels of need which must be met in order to achieve this. The highest level is self-actualisation, which means the person takes full responsibility for themselves, and Maslow believes people can progress to the highest levels if they are in an education that supports growth. Below is the table of Maslow's 'Hierarchy of Needs' (Verywell Mind, 2019).

Table: Maslow's Hierarchy of Needs (Verywell Mind, 2019)

In short, the table sets out your learners' needs at different levels during their time in your learning environment. Learners may all be at different levels, but each should be able to progress to the next one when they feel comfortable doing so. There may be knockbacks which your learners face as individuals, but it is these needs that motivate learning. You may also find that not all learners want to progress through the levels at a given moment; for example, if a learner is happy with the progress they have achieved so far and is content with life, they may want to stay at a certain level.

It is important to use the levels to encourage your learners by working up the table.

Stage 1 of the table is physiological needs: are your learners comfortable in the environment you are providing? Are they hungry or thirsty? Your learners may even be tired. Any of these factors may stop learning from taking place, so it is important to meet all your learners' physiological needs.

Moving up the table to safety and security: make your learners feel safe, in an environment where they can relax and feel at ease. Are your learners worried about anything in particular? If so, can you help them overcome their worries?

Recognition – do your learners feel part of the group? It is important to help those who do not feel they belong to bond with others; help your learners feel welcome. Once recognition is in place, your learners will start to build their self-esteem: are they learning something useful? However strong your subject specialism, it is important that your passion and drive shine through your teaching. This leads to the highest level, self-actualisation: are your learners achieving what they want to achieve? Make the sessions interesting and your learners will remember more about the subject in question. (Verywell Mind, 2019)

Furthermore, classroom management comes into play with any learning theory you use whilst teaching. Classroom management is made up of the various techniques and skills that we as teachers utilise, and most of today's classroom management systems are highly effective in increasing student success. As a trainee teacher, I understand that classroom management can be difficult at times, so I am always researching different methods of managing my class. I do not believe it comes entirely from methods, though: if your pupils respect you as a teacher and understand what you expect of them whilst in your class, you should be able to manage the class well. Relating this to my placement at A Mind Apart, my students know what I expect of them, and as a result my classroom management is normally good. Following this, there are a few classroom management techniques I tend to use:

Demonstrating the behaviour you want to see – eye contact whilst talking, phones away in bags or coats, listening when being spoken to and being respectful of each other; these are all good codes of conduct to follow, and they are my main rules whilst in the classroom.
Celebrating hard work or achievements – when I think a student has done well, we as a group celebrate their achievement, whether it be in education or outside it; a celebration always helps with classroom management.
Making your sessions engaging and motivating – this is something all of us trainee teachers find difficult in our first year. As I have found personally over the first couple of months, you have to get to know your learners, understand what they like to do, and learn which activities keep them engaged.
Building strong relationships – I believe having a good relationship with your students is one of the key factors in managing a classroom. It is important to build trust with your students, make them feel safe and let them know they are in a friendly environment.

When it comes to being in a classroom environment, not all students will fit the same structure; some may require a different kind of structure to feel included. A key example is students with physical disabilities: you may need to adjust or move tables, or rearrange the seating so a student can see more clearly. If a student has hearing problems, you might write more down on the board or give them a sheet at the start of the lesson setting out what you will be discussing and any further information they need. It is not only physical disabilities that need to be taken into consideration; it is also important to cater for those who have behavioural difficulties and to adjust the space so that your students feel safe whilst in your lesson.

Managing your class also means that you may sometimes have to adjust your teaching methods to suit everyone in your class, and understand that it is important to incorporate cultural values. Whilst in the classroom, or even when setting homework, you may need to take into consideration that some students, especially those with learning difficulties, may take longer to do the work or may need additional help.

Conclusion

Research has given me a new insight into how many learning theories, teaching strategies and classroom management strategies there are; there are books and websites to help you achieve everything you need to be able to do in your classroom. Looking back over this essay, I have examined the two learning theories that I am most likely to use.


Synchronous and asynchronous remote learning during the Covid-19 pandemic

Student’s Motivation and Engagement

Motivation plays an important role in student engagement. Saeed and Zyngier (2012) contend that in order to assess student motivation, researchers must also examine engagement in, and as part of, learning. This shows that there is a relationship between student motivation and engagement. In support of this relationship, Hufton, Elliot, and Illushin (2002) believe that high levels of engagement indicate high levels of motivation: in other words, when students' levels of motivation are high, their levels of engagement are also high.

Moreover, Dörnyei (2020) suggests that the concept of motivation is closely associated with engagement, and asserts that motivation must be ensured in order to achieve student engagement. He further proposes that any instructional design should aim to keep students engaged, regardless of the learning context, whether traditional or e-learning. In addition, Lewis et al. (2014) reveal that within the online educational environment, students can be motivated by consistently delivering an engaging, student-centered experience.

In the context of the Student-Teacher Dialectical Framework embedded with Self-Determination Theory, Reeve (2012) identifies three newly discovered functions of student engagement. First, engagement bridges students' motivation to highly valued outcomes. Second, student engagement affects the future quality of the learning environment, especially the flow of instruction, its external events, and the teacher's motivating style. Third, student engagement changes motivation, meaning that engagement causes changes in motivation in the future. This highlights that student motivation is both a cause and a consequence. The assertion that engagement can cause changes in motivation rests on the idea that students can take actions to meet their own psychological needs and enhance the quality of their motivation. Further, Reeve (2012) asserts that students can be, and are, architects of their own motivation, at least to the extent that they can be architects of their own course-related behavioral, emotional, cognitive, and agentic engagement.

Synchronous and Asynchronous Learning

The COVID-19 pandemic brought great disruption to education systems around the world. Schools struggled as the situation forced the cessation of classes for an extended period of time, and other restrictive measures later impeded the continuation of face-to-face classes. In consequence, there has been a massive change in educational systems around the world as institutions strive and put forth their best efforts to resolve the situation. Many schools addressed the risks and challenges of continuing education amid the crisis by shifting conventional or traditional learning to distance learning. Distance learning is a form of education, supported by technology, that is conducted beyond physical space and time (Papadopulou, 2020). It is a form of online education that provides opportunities for educational advancement and learning development to learners worldwide. In order to sustain the educational goals of our country, distance learning is a new way of providing quality education, as far as possible, in public and private institutions, especially for those pursuing higher education. Instructional delivery in distance education can be through a synchronous or asynchronous mode of learning, in which students can engage and continue to attain quality education despite the pandemic.

Based on the definition given by the Easy LMS Company (2020), synchronous learning refers to a learning event in which a group of participants is engaged in learning at the same time (e.g., a Zoom meeting, web conference, or real-time class), while asynchronous learning refers to the opposite, in which the instructor, the learner, and the other participants are not engaged in the learning process at the same time, so there is no real-time interaction (e.g., pre-recorded discussions, self-paced learning, discussion boards). According to an article issued by the University of Waterloo (2020), synchronous learning is a form of learning delivered as a live presentation that allows students to ask questions, while asynchronous learning can be a recorded presentation that allows students time to reflect before asking questions. Synchronous learning is a typical meeting of students in a virtual setting, with a class discussion in which everybody can participate actively. Asynchronous learning is the use of a learning platform or portal where teachers or instructors can post and update lessons or activities and students can work at their own pace. These types of class instruction are commonly observed at present, and students have their own preferences as to what works best for them.

In comparing the two types of learning, it is valuable to know their advantages and disadvantages in order to see their real impact on students. Wintemute (2021) notes that synchronous learning has greater engagement and direct communication, but requires a strong internet connection. On the other hand, asynchronous learning has the advantages of schedule flexibility and greater accessibility, yet it is less immersive and faces challenges of procrastination, reduced socialization and distraction. Students in synchronous learning tend to adapt to learning alongside classmates in a virtual setting, while asynchronous learning introduces a setting in which students can choose when to study.

In the middle of the crisis, asynchronous learning can be more favorable than synchronous learning because most people are struggling in this pandemic. One of the principal advantages of asynchronous online learning is that it offers more flexibility, allowing learners to set their own schedule and work at their own pace (Anthony and Thomas, 2020). In contrast, synchronous learning allows students to feel connected in a virtual world and can give them the assurance of not being isolated while studying, because they can have live interaction and exchange ideas and other valuable input, with the help of teachers, so that the class understands the lessons well. The main advantages of synchronous learning are that instructors can explain specific concepts when students are struggling and students can get immediate answers to their concerns in the process of learning (Hughes, 2014). In the article by Delgado (2020), these advantages and disadvantages will not be realised without a pedagogical methodology that takes the technology and its optimization into account. Furthermore, the quality of learning depends on good planning and design, achieved by reviewing and evaluating each type of learning modality.

Synthesis

Motivating students has been a key challenge facing instructors in the context of online learning (Zhao et al., 2016). Motivation is one of the foundations of a student doing well in their studies: when students are motivated, the outcome is a good mark. In short, motivation pushes them to study more and earn higher grades. According to Zhao (2016), research on motivation in an online learning environment reveals differences in learning motivation among students from different cultural backgrounds. Motivation is described as "the degree of people's choices and the degree of effort they will put forth" (Keller, 1983). Learning is closely linked to motivation because it is an active process that necessitates intentional and deliberate effort. Educators must build a learning atmosphere in which students are highly encouraged to participate both actively and productively in learning activities if they want students to get the most out of school (Stipek, 2002). John Keller (1987) revealed in his study that attention and motivation will not be maintained unless the learner believes the teaching and learning are relevant. According to Zhao (2016), a strong interest in a topic will lead to mastery goals and intrinsic motivation.

Engagement can be perceived in the interaction between students and teachers in online classes. Student engagement, according to Fredericks et al. (2004), is a meta-construct that includes behavioral, affective, and cognitive involvement. Although there is a broad body of literature on behavioral engagement (i.e., time on task), emotional engagement (i.e., interest and value), and cognitive engagement (i.e., self-regulation and learning strategies), what sets engagement apart is its capacity as a multidimensional or "meta"-construct that encompasses all three dimensions.

In summary, motivation and engagement are closely intertwined: Saeed and Zyngier (2012) argue that engagement must be examined as part of assessing motivation, Lewis et al. (2014) show that an engaging, student-centered online experience can sustain motivation, and Reeve (2012) describes engagement as bridging motivation to valued outcomes, shaping the future quality of the learning environment, and in turn changing motivation itself. In the pandemic context, distance learning delivered synchronously or asynchronously has become the main way for students to continue attaining quality education. Synchronous learning offers live presentation, active class discussion and immediate interaction but depends on a strong internet connection (University of Waterloo, 2020; Wintemute, 2021), while asynchronous learning, through platforms where students work at their own pace, offers flexibility and accessibility at the cost of immediacy, with risks of procrastination and distraction (Wintemute, 2021). Amid the crisis, asynchronous learning can be more favorable for struggling students because of its flexibility (Anthony and Thomas, 2020), whereas synchronous learning helps students feel connected rather than isolated; which mode works best ultimately depends on students' own preferences and circumstances.


‘Peak Oil’ – what are the solutions?

The ability to harness energy sources and put them towards a productive use has played a crucial role in economic development worldwide. Easily accessible oil helped to fuel continued expansion in the 20th century. Agricultural production was transformed by motorised farm equipment and petroleum-based fertilisers and pesticides. Cars, trucks and airplanes powered by oil products revolutionised the transportation of people and goods. Oil provides fuel for home heating, electricity production, and to power industrial and agricultural equipment. It also provides the source material for the construction of plastics, many fertilisers and pesticides and many industrial chemicals and materials. It is now difficult to find any product that does not require the use of oil at some point in the production process.

Oil has several advantages over other fossil fuels: it is easily transportable and energy-dense, and when refined it is suitable for a wide variety of uses. Considering the important role that oil plays in our economy, if persistent shortages were to emerge, the economic implications could be enormous. However, there is no consensus as to how seriously the threat of oil resource depletion should be taken. Some warn of a colossal societal collapse in the not-too-distant future, while others argue that technological progress will allow us to shift away from oil before resource depletion becomes an issue.

How much of a problem oil depletion poses depends on the amount of oil that remains accessible at reasonable cost, and on how quickly the development of alternatives allows the demand for oil to be reduced. This is what the term 'peak oil' refers to: the point at which the demand for oil outstrips its availability. Demand and supply each evolve over time following patterns grounded in historical data, while supply is also constrained by resource availability. There is no mechanism for the market on its own to address concerns about climate change; however, if policies are put in place to build the costs of climate change into the price of fossil fuel consumption, this should trigger market incentives that lead efficiently to the desired emission reductions.

A while ago the media was filled with stories about peak oil; it even featured in an episode of The Simpsons. Peak oil, in basic terms, means the point at which we have used up all the easily extracted oil and are left only with oil that is hard to reach and, in turn, expensive to extract and refine. There is still a huge amount of debate amongst geologists and petroleum-industry experts about how much oil is left in the ground. Since then, however, the idea of a near-term peak in world oil supplies has been discredited. The term now used is peak oil demand: the idea that, because of the proliferation of electric cars and other sources of energy, demand for oil will reach a maximum and start to decline, and indeed consumption levels in some parts of the world have already begun to stagnate.

The other theory that has been put forward is that, with supply beginning to exceed demand, not enough investment is going into future oil exploration and development. Without this investment, production will decline; but on this view production is not declining due to supply problems, rather we are moving into an age of oil abundance and any decline in oil production is due to other factors. There has been an explosion of popular literature recently predicting that oil production will peak soon and that oil shortages will force us into major lifestyle changes in the near future; a good example of this is Heinberg (2003). The point at which oil production reaches a peak and begins to decline permanently has been referred to as 'peak oil'. Predictions for when this will occur range from 2007 to 2025 (Hirsch, 2005).

The Hirsch Report of 2005 concluded that it would take a modern industrial nation such as the UK or the United States at least a full decade to prepare for peak oil. Since 2005 there has been some movement towards solar and wind power together with more electric cars but nothing that deals with the scale of the problem. This has been compounded by Trump coming to power in the United States and deciding to throw the energy transition into reverse, discouraging alternative energy and expanding subsidies for fossil fuels.

What is happening now

Many factors are reported in the news as causing changes in oil prices: supply disruptions from wars and other political events, from hurricanes or from other random events; changes in demand expectations based on economic reports, financial market events or even weather in areas where heating oil is used; changes in the value of the dollar; reports of inventory levels; and so on. These factors all affect the supply of and demand for oil, but they often influence the price of oil before they have any direct impact on the current supply or demand for crude oil. Last year, the main forces pushing the oil market higher were the agreement by OPEC and its partners to lower production and the growth of global demand. This year, an array of factors is pressuring the oil markets: US sanctions that threaten to cut Iranian oil production, and falling output from Venezuela. Moreover, there are supply disruptions in Libya, the Canadian tar sands, Norway and Nigeria that add to the uncertainties, as does erratic policymaking in Washington, complete with threats to sell off part of the US strategic reserve, and a weaker dollar. Goldman Sachs continues to expect that Brent crude prices could retest $80 a barrel this year, but probably only late in 2018: "Production disruptions and large supply shifts driven by US political decisions are the drivers of this new volatility, with demand remaining robust so far." Brent crude is expected to trade in the $70–$80 a barrel range in the immediate future.

OPEC

Saudi Arabia and Russia had started to raise production even before the 22 June 2018 meeting with OPEC that sought to address the shrinking global oil supply and rising prices. OPEC had been over-complying with the cuts agreed at the November 2016 meeting, thanks to additional cuts from Saudi Arabia and Venezuela. The 22 June 2018 meeting decided to increase production to reflect the production cut agreement more closely. After the meeting, Saudi Arabia pledged a "measurable" supply boost but gave no specific numbers. Tehran's oil minister warned his Saudi Arabian counterpart that the 22 June revision to the OPEC supply pact does not give member countries the right to raise oil production above their targets. The Saudis, Russia and several of the Gulf Arab states increased production in June but seem reluctant to expand much further. During the summer months, the Saudis always need to burn more crude in their power stations to combat the very high summer temperatures.

US Shale oil production

According to the EIA's latest Drilling Productivity Report, US unconventional oil production is projected to rise by 143,000 b/d in August to 7.470 million b/d. The Permian Basin is seen as far outdistancing other shale basins in monthly growth in August, up 73,000 b/d to 3.406 million b/d. However, drilled but uncompleted (DUC) wells in the Permian rose by 164 in June to 3,368, one of the largest builds in recent months; total US DUCs rose by 193 to 7,943 in June. US energy companies last week cut the number of oil rigs by the most in a week since March, as the rate of growth has slowed over the past month or so with recent declines in crude prices. Alongside otherwise optimistic forecasts for US shale oil came the caveat that the DUC production figures are sketchy, as current information is difficult for the EIA to obtain, with little specific data being provided to Washington by E&Ps or midstream operators. Given all the publicity surrounding constraints on moving oil from the Permian to market, the EIA admits that it "may overestimate production due to constraints."

The Middle East and North Africa

Iran

Iran's supreme leader, Ayatollah Ali Khamenei, called on state bodies to support the government of President Hassan Rouhani in fighting US economic sanctions. The likely return of US economic sanctions has triggered a rapid fall in Iran's currency, protests by bazaar traders usually loyal to the Islamist rulers, and a public outcry over alleged price gouging and profiteering. The speech to members of Rouhani's cabinet is clearly aimed at the conservative elements in the government who have been critical of the president and his policies of cooperation with the West, and is a call for unity at a time that seems likely to be one of great economic hardship. Unrest has spread to more than 80 Iranian cities and towns; at least 25 people died, in the most significant expression of public discontent, and the protests took on a rare political dimension, with growing numbers of people calling on supreme leader Khamenei to step down. Although there is much debate over the effectiveness of the impending US sanctions, some analysts say that Iran's oil exports could fall by as much as two-thirds by the end of the year, putting oil markets under massive strain amid supply outages elsewhere in the world. Some of the worst-case scenarios forecast a drop to only 700,000 b/d, with most of Tehran's exports going to China and smaller shares going to India, Turkey and other buyers with waivers. China, the biggest importer of Iranian oil at 650,000 b/d according to Reuters trade flow data, is likely to ignore US sanctions.

Iraq

Iraq's future is again in doubt as protests erupt across the country. These protests began in southern Iraq after the government was accused of doing nothing to alleviate a deepening unemployment crisis, water and electricity shortages and rampant corruption. The demonstrations are spreading to major population centers including Najaf and Amarah, and discontent is now stirring in Baghdad. The government has been quick to promise more funding and investment in the development of chronically underdeveloped cities, but this has done little to quell public anger: Iraqis have heard these promises countless times before, and with a water and energy crisis striking in the middle of scorching summer heat, people are less inclined to believe what their government says. The civil unrest has begun to diminish in southern Iraq, leaving the country's oil sector shaken but secure, though protesters have vowed to return. Operations at several oil fields have been affected as international oil companies and service companies have temporarily withdrawn staff from some areas that saw protests. The government claims that the production and export of oil remained steady during the protests. With Iran refusing to provide for Iraq's electricity needs, Baghdad has now also turned to Saudi Arabia to see if its southern Arab neighbor can help alleviate the crises it faces.

Saudi Arabia

The planned IPO of the state oil company Saudi Aramco has been touted for the past two years as the centerpiece of an ambitious economic reform program driven by Crown Prince Mohammed bin Salman to diversify the Saudi economy beyond oil. Saudi Arabia expects its crude exports to drop by roughly 100,000 b/d in August as the kingdom tries to ensure it does not push oil into the market beyond its customers' needs.

Libya

Libya reopened its eastern oil ports and started to ramp up production from 650,000 b/d to 700,000 b/d; output is expected to rise further now that shipments have resumed at the eastern ports, which re-opened after a political standoff.

China

China's economy expanded by 6.7 percent, its slowest pace since 2016. The pace of annual expansion announced is still above the government's target of "about 6.5 percent" growth for the year, but the slowdown comes as Beijing's trade war with the US adds to headwinds from slowing domestic demand. Gross domestic product had grown at 6.8 percent in the previous three quarters. Higher oil prices play a role in the slowing of demand, but the main factor is higher taxes on independent Chinese refiners, which are already cutting into the refining margins and profits of the 'teapots', who have grown over the past three years to account for around a fifth of China's total crude imports. Under the stricter tax regulations and reporting mechanisms effective 1 March, however, the teapots can no longer avoid paying a consumption tax on refined oil product sales, as they did in the past three years, and their refining operations are becoming less profitable.

Russia

Russian oil production rose by around 100,000 b/d from May. From 1–15 July the country's average oil output was 11.215 million b/d, an increase of 245,000 b/d over May's production. Amid growing speculation that President Trump will attempt to weaken US sanctions on Russia's oil sector, US congressional leaders are pushing legislation to strengthen sanctions on Russian export pipelines and on joint ventures with Russian oil and natural gas companies. Ukraine and Russia said they would hold further European Union-mediated talks on supplying Europe with Russian gas, a key first step towards renewing Ukraine's gas transit contract, which expires at the end of next year.

Venezuela

Venezuela's oil minister Manuel Quevedo has been talking about plans to raise the country's crude oil production in the second half of the year. However, no one else thinks or claims that Venezuela could soon reverse its steep production decline, which has seen it lose more than 40,000 b/d of oil production every month for several months now. According to OPEC's secondary sources in the latest Monthly Oil Market Report, Venezuela's crude oil production dropped by 47,500 b/d from May, to average 1.340 million b/d in June. Amid a collapsing regime, widespread hunger and medical shortages, President Nicolas Maduro continues to grant generous oil subsidies to Cuba. It is believed that Venezuela continues to supply Cuba with around 55,000 barrels of oil per day, costing the nation around $1.2 billion per year.

Alternatives to Oil

In its search for secure, sustainable and affordable supplies of energy, the world is turning its attention to unconventional energy resources. Shale gas is one of them. It has turned upside down the North-American gas markets and is making significant strides in other regions. The emergence of shale gas as a potentially major energy source can have serious strategic implications for geopolitics and the energy industry.

Uranium and Nuclear

The nuclear industry has a relatively short history: the first nuclear reactor was commissioned in 1942. Uranium is the main source of fuel for nuclear reactors. Worldwide output of uranium has recently been on the rise after a long period of declining production. Uranium resources have grown by 12.5% since 2008 and are sufficient for over 100 years of supply based on current requirements.

Total nuclear electricity production grew during the past two decades and reached an annual output of about 2,600 TWh by the mid-2000s, although the three major nuclear accidents have slowed or even reversed its growth in some countries. The nuclear share of total global electricity production reached its peak of 17% in the late 1980s, but has since fallen, dropping to 13.5% in 2012. In absolute terms, nuclear output remains broadly at the same level as before, but its relative share in power generation has decreased, mainly due to the Fukushima nuclear accident.

Japan used to be one of the countries with a high share of nuclear power (30%) in its electricity mix and high production volumes. Today, Japan has only two of its 54 reactors in operation. The rising costs of nuclear installations and the lengthy approval times required for new construction have had an impact on the nuclear industry. The slowdown has not been global: new countries, primarily among the rapidly developing economies of the Middle East and Asia, are going ahead with their plans to establish a nuclear industry.

Hydro Power

Hydro power provides a significant amount of energy throughout the world and is present in more than 100 countries, contributing approximately 15% of global electricity production. The five largest markets for hydro power in terms of capacity are Brazil, Canada, China, Russia and the United States of America; China significantly exceeds the others, representing 24% of global installed capacity. In several other countries, hydro power accounts for over 50% of all electricity generation, including Iceland, Nepal and Mozambique. During 2012, an estimated 27–30 GW of new hydro power capacity and 2–3 GW of pumped storage capacity was commissioned.

In many cases, the growth in hydro power has been facilitated by lavish renewable energy support policies and CO2 penalties. Over the past two decades the total global installed hydro power capacity has increased by 55%, while actual generation has increased by 21%. Since the last survey, global installed hydro power capacity has increased by 8%, but the total electricity produced has dropped by 14%, mainly due to water shortages.

Solar PV

Solar energy is the most abundant energy resource, and it is available for use in both direct (solar radiation) and indirect (wind, biomass, hydro, ocean, etc.) forms. About 60% of the solar radiation arriving at the top of the Earth's atmosphere reaches the Earth's surface. Even if only 0.1% of this energy could be converted at an efficiency of 10%, it would be four times larger than the world's total electricity generating capacity of about 5,000 GW. The statistics on solar PV installations are patchy and inconsistent; the table below presents values for 2011, but comparable values for 1993 are not available.

The use of solar energy is growing strongly around the world, in part due to rapidly declining solar panel manufacturing costs. For instance, between 2008 and 2011, PV capacity increased in the USA from 1,168 MW to 5,171 MW, and in Germany from 5,877 MW to 25,039 MW. The anticipated changes in national and regional legislation regarding support for renewables are likely to moderate this growth.

Conclusion

The rapid consumption of fossil fuels has contributed to environmental damage. The use of these fuels, including oil, releases chemicals that contribute to smog, acid rain and mercury contamination, and carbon dioxide emissions from fossil fuel consumption are the main driver of climate change, the effects of which are likely to become more and more severe as temperatures rise. The depletion of oil and other fossil resources leaves less available to future generations and increases the likelihood of price spikes if demand outpaces supply.

One of the most intriguing conclusions from this idea is that a new "age of abundance" could alter the behavior of oil producers. In the past, some countries (notably OPEC members) restrained output, husbanding resources for the future and betting that scarcity would increase the value of their holdings over time. However, if a peak in demand looms just over the horizon, oil producers could rush to maximize their production in order to get as much value from their reserves while they can. Saudi oil minister Sheikh Ahmed Zaki Yamani was famously quoted as saying, "the Stone Age didn't end for lack of stone, and the oil age will end long before the world runs out of oil." This quote reflects the view that the development of new technologies will lead to a shift away from oil consumption before oil resources are fully depleted. It is worth remembering, though, that nine of the ten recessions between 1946 and 2005 were preceded by spikes in oil prices, and the latest recession followed the same pattern.

Extending the life of oil fields, let alone investing in new ones, will require large volumes of capital, but that capital might be met with skepticism from wary investors once demand begins to peak. It will be difficult to attract investment to a shrinking industry, particularly if margins continue to be squeezed. Peak demand should be an alarming prospect for OPEC, Russia and the other major oil-producing countries: all oil producers will find themselves fighting more aggressively for a shrinking market.

The precise date at which oil demand hits a high point and then enters decline has been the subject of much debate, and a topic that has attracted a lot of interest in just the last few years. Consumption levels in some parts of the world have already begun to stagnate, and more and more automakers have begun to ratchet up their plans for electric vehicles. But fixating on the exact date the world will hit peak demand misses the whole point. The focus should not be on the date at which oil demand peaks, but on the fact that the peak is coming. In other words, oil will become less important in fueling the global transportation system, which will have far-reaching consequences for oil producers and consumers alike. The implications of a looming peak in oil consumption are massive. Without an economic transformation, or at least serious diversification, oil-producing nations that depend on oil revenues for both economic growth and public spending face an uncertain future.


Water purification and addition of nutrients as disaster relief

1. Introduction

1.1 Natural Disasters

Natural disasters are naturally occurring events that threaten human lives and cause damage to property. Examples include hurricanes, tsunamis, earthquakes, volcanic eruptions, typhoons, droughts, tropical cyclones and floods (Pask, R., et al., 2013). They are inevitable and can often have calamitous implications, such as water contamination and malnutrition, especially for developing countries like the Philippines, which is particularly prone to typhoons and earthquakes. (Figure 1)

Figure 1 The global distribution of natural disaster risk (The United Nations University World Risk Index 2014)

1.1.1 Impacts of Natural Disaster

The globe faces impacts of natural disasters on human lives and economies on an astronomical scale. According to a 2014 report by the United Nations, since 1994, 4.4 billion people have been affected by disasters, which have claimed 1.3 million lives and cost US$2 trillion in economic losses. Developing countries are likely to suffer a greater impact from natural disasters than developed countries, as disasters push more people below the poverty line, increasing their numbers by more than 50 percent in some cases. Moreover, it is expected that by 2030, up to 325 million extremely poor people will live in the 49 most hazard-prone countries (Child Fund International, 2013). Hence there is a need for disaster relief to save the lives of those affected, especially in developing countries such as the Philippines.

1.1.2 Lack of access to clean water

After a natural disaster strikes, severe implications such as water contamination occur.

Natural disasters know no national borders or socioeconomic status (Malam, 2012). For example, Hurricane Katrina, which struck New Orleans, a developed city, destroyed 1,200 water systems, and 50% of existing treatment plants needed rebuilding afterwards (Copeland, 2005). This left the citizens of New Orleans with a shortage of drinking water. Furthermore, after the 7.0-magnitude earthquake that struck Haiti, a developing country, in 2010, there was no plumbing left underneath Port-au-Prince, and many of the water tanks and toilets were destroyed (Valcárcel, 2010). These are just some of the many scenarios that can bring about water scarcity.

The lack of preparedness to prevent the destruction caused by a natural disaster and the lack of readiness to respond are claimed to be the two major reasons for the catastrophic results of natural disasters (Malam, 2012). Hence, the aftermath of destroyed water systems and a lack of water affects all geographical locations regardless of socioeconomic status.

1.2 Disaster relief

Disaster relief organisations such as The American Red Cross help countries that are recovering from natural disasters by providing these countries with the basic necessities.

After a disaster, the Red Cross works with community partners to provide hot meals, snacks and water to shelters or from Red Cross emergency response vehicles in affected neighborhoods. (Disaster Relief Services | Disaster Assistance | Red Cross.)

The International Committee of the Red Cross/Red Crescent (ICRC) reported that its staff had set up mobile water treatment units. These were used to distribute water to around 28,000 people in towns along the southern and eastern coasts of the island of Samar, and to other badly-hit areas including Basey, Marabut and Guiuan. (Pardon Our Interruption. (n.d.))

Figure 2: Children seeking help after a disaster(Pardon Our Interruption. (n.d.))

Figure 3: Massive Coastal Destruction from Typhoon Haiyan (Pardon Our Interruption. (n.d.))

1.3 Target audience: Tacloban, Leyte, The Philippines

As seen in Figures 4 and 5, Tacloban is the provincial capital of Leyte, a province in the Visayas region of the Philippines. It is the most populous city in the Eastern Visayas region, with a total population of 242,089 people as of August 2015 (Census of Population, 2015).

Figure 4: Location of Tacloban in the Philippines (Google Maps)

Figure 5: Location of Tacloban in the Eastern Visayas region (Google Maps)

Owing to its location in the western Pacific and on the Pacific Ring of Fire (Figure 6), the Philippines experiences more than 20 typhoons each year (Lowe, 2016).

Figure 6: The Philippines’ position on the Pacific Ring of Fire (Mindoro Resources Ltd., 2004)

In 2013, Tacloban was struck by Super Typhoon Haiyan, locally known as 'Yolanda'. The Philippine Star, a local digital news organisation, reported more than 30,000 deaths from that disaster alone (Avila, 2014). Tacloban was left in shambles after Typhoon Haiyan and requires substantial aid to restore the affected area, especially with a death toll running to five figures.

1.4 Existing measures and their gaps

Initially, the government's response to the disaster was slow. For the first three days after the typhoon hit, there was no running water, and dead bodies were found in wells. In desperation for drinking water, some people even smashed the pipes of the Leyte Metropolitan Water District. However, even when drinking water was restored, it was contaminated with coliform; many people became ill and one baby died of diarrhoea (Dizon, 2014).

The gaps were therefore the government's long response time (Gap 1) and the further consequences brought by the restored but contaminated water supply (Gap 2). People's productivity was affected, and hence there is an urgent need for a better solution to the problem of the late restoration of clean water.

1.5 Reasons for Choice of Topic

The problem is severe, since ingestion of contaminated water is the leading cause of infant mortality and illness in children (International Action, n.d.) and more than 50% of the population is undernourished (World Food Programme, 2016). Much support and humanitarian aid has been given by organisations such as the World Food Programme and The Water Project, yet more effort is needed to lower death rates, which shows the persistence of the problem. It is also an urgent issue, as malnourishment often leads to death and children's lives are threatened.

Furthermore, 8 out of 10 of the world's cities most at risk from natural disasters are in the Philippines (Figure _). The magnitude of the problem is thus huge, given the high frequency of natural disasters: while people are still recovering from one disaster, another hits them, worsening an already severe situation.

Figure _: Top 5 Countries of World Risk Index of Natural Disasters 2016 (Source: UN)

WWF CEO Jose Maria Lorenzo Tan said that “on-site desalination or purification” would be a cheaper and better solution to the lack of water than shipping in bottled water over a long period. (Dizon, 2014) Producing water locally, rather than relying on external humanitarian aid that may add to the country's debt, can also cushion the high expense of rebuilding. Hence, there is a need for a water purification plant that provides potable water immediately when a natural disaster strikes and continues to provide cheap, affordable water until water systems are restored to normal.

Living and growing up in Singapore, we have never experienced a natural disaster first-hand; we can only imagine the catastrophic destruction and suffering that accompanies one. With the “Epione Solar Still” (named after the Greek goddess of the soothing of pain), we hope to help many Filipinos access clean, drinkable water, especially children, who clearly do not deserve to experience such tragedy and suffering.

1.6 Case study: Disaster relief in Japan

Located on the Pacific Ring of Fire, Japan is vulnerable to natural disasters such as earthquakes, tsunamis, volcanic eruptions, typhoons, floods and mudslides due to its geographical location and natural conditions. (Japan Times, 2016)

In 2011, a massive 9.0-magnitude earthquake struck off Japan's northeast coast, causing a tsunami that devastated the coastline and killed about 19,000 people. It was the strongest earthquake recorded in Japan's history. It damaged the Fukushima plant and caused nuclear leakage, leading to contaminated water that now exceeds 760,000 tonnes. (The Telegraph, 2016) The earthquake and tsunami caused the nuclear power plant to fail, leaking radiation into the ocean and the atmosphere. Many evacuees have still not returned to their homes, and, as of January 2014, the Fukushima nuclear plant still posed a threat, according to status reports by the International Atomic Energy Agency. (Natural Disasters & Pollution | Education – Seattle PI. (n.d.))

Disaster Relief

In the case of major disasters, the Japan International Cooperation Agency (JICA) deploys Japan Disaster Relief (JDR) teams, consisting of rescue, medical, expert and infectious-disease response teams as well as the Self-Defence Force (SDF), to provide relief aid to affected countries. It provides emergency relief supplies such as blankets, tents and water purifiers, some of which are stockpiled as reserve supplies close to disaster-prone areas in case disasters strike there and emergency relief is needed. (JICA)

For example, during the Kumamoto earthquake in 2016, 1,600 soldiers joined the relief and rescue efforts. Troops delivered blankets and adult diapers to those in shelters. With water service cut off in some areas, residents hauled water from local offices to their homes to flush toilets. (Japan hit by 7.3-magnitude earthquake | World news | The Guardian. (2016, April 16))

Solution to Fukushima water contamination

Facilities are used to treat the contaminated water. The main one is the Multi-nuclide Removal Facility (ALPS) (Figure _), which can remove most radioactive materials except tritium. (TEPCO, n.d)

Figure _: Structure of Multi-nuclide Removal Facility (ALPS) (TEPCO, n.d)

1.7 Impacts of Case Study

The treatment of contaminated water has been very effective: by April 2015, more than 80% of the contaminated water stored in tanks had been treated, and more than 90% of the radioactive materials had been removed during decontamination. (METI, 2014)

1.8 Lessons Learnt

Destruction caused by natural disasters results in a lack of access to clean and drinkable water (L1)

Advancements in water purification technology can help provide potable water for the masses. (L2)

Natural disasters weaken immune systems, leaving people more vulnerable to disease. (L3)

1.9 Source of inspiration

Sunny Clean Water's solar still is made with cheap material alternatives, which helps provide more affordable water for underprivileged countries.

A fibre-rich paper is coated with carbon black (a cheap powder left over after the incomplete combustion of oil or tar) and layered over each section of a block of polystyrene foam cut into 25 equal sections. The foam floats on the untreated water, acting as an insulating barrier that prevents sunlight from heating up too much of the water below. The paper then wicks water upward, wetting the entire top surface of each section. A clear acrylic housing sits atop the styrofoam. (Figure _)

Figure _: How fibre-rich paper coated with carbon black is adapted into the solar still. (Sunlight-powered purifier could clean water for the impoverished | Science | AAAS. (2017, February 2))

It is estimated that the materials needed to build it cost roughly $1.60 per square meter, compared with $200 per square meter for commercially available systems that rely on expensive lenses to concentrate the sun's rays and speed up evaporation.
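To put these material costs in perspective, the short sketch below works through the comparison for an illustrative collection area; the 10 m² area is an assumption for illustration only, not a design value from the source.

```python
# Material-cost comparison using the per-square-metre figures quoted above.
# The 10 m^2 collection area is a hypothetical example, not a design value.
low_cost_per_m2 = 1.60      # USD per m^2, carbon-black-coated paper still
commercial_per_m2 = 200.00  # USD per m^2, lens-based commercial system

area_m2 = 10
print(f"Low-cost still materials: ${low_cost_per_m2 * area_m2:,.2f}")
print(f"Commercial system:        ${commercial_per_m2 * area_m2:,.2f}")
print(f"Roughly {commercial_per_m2 / low_cost_per_m2:.0f} times cheaper per square metre")
```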

1.10 Application of Lessons Learnt

Gaps in current measures, learning points, applications to project, and key features in proposal:

Gap 1: Developing countries lack the technology / resources to treat their water and provide basic necessities to their people. Learning point: Advanced technology can provide potable water readily. (L2) Application to project: Need for technology to purify contaminated water. Key feature in proposal: Solar distillation plant.

Gap 2: Even with purification of water, the problem of malnutrition, which is worsened by natural disasters, is still unsolved. Learning point: Solution to provide vitamins to young children to boost immunity and lower vulnerability to diseases and illnesses. (L3) Application to project: Need for nutrient-rich water. Key feature in proposal: Nutrients infused into water using the concept of osmosis.

Gap 3: Even with the help of external organisations, less than 50% of households have access to safe water. Learning point: Clean water is still inaccessible to some people. (L1) Application to project: Increase accessibility to water. Key feature in proposal: Evaporate seawater (abundant around the Philippines) in the solar still (short-term solution).

Figure _: Table of application of lessons learnt

2. Project Aim and Objectives

2.1 Aim

Taking into account the gaps in current measures to improve water purification and to reduce water pollution and malnutrition in Tacloban, Leyte, our project proposes a solution that provides Filipinos with clean water through an ingenious product, the Epione Solar Still. The product makes use of a natural process (the evaporation of water) and adapts and incorporates the technology and mechanism behind the kidney dialysis machine to provide Filipinos with nutrient-enriched water without polluting their environment. The product will be located near bodies of water where seawater is abundant, to act as a source of clean water for Filipinos.

2.2 Objectives of Project

To operationalise our aim, our objectives are to:

Design “Epione Solar Still”

Conduct interviews with:

Masoud Arfand, from the Department of Mechanical Engineering, Najafabad Branch, Islamic Azad University, to determine the projected percentage of water that the Epione Solar Still can produce and the number of people it can provide for.

Qiaoqiang Gan, an electrical engineer from Sunny Clean Water (whose team developed the technique of coating fibre-rich paper with carbon black to make solar-still water purification faster and more cost-friendly), to determine the amount of time the Epione Solar Still needs to produce sufficient water to support Filipinos in Tacloban, Leyte, given that the Epione Solar Still is a short-term disaster relief solution.

Dr Nathan Feldman, co-founder of HopeGel (EB Performance, LLC), to determine the impact of nutrient-infused water in boosting the immunity of victims of natural disasters. (Project Medishare, n.d)

Review the mechanism and efficiency of using a solar still to source clean and nutrient-rich water for Filipinos.

3. Project Proposal

We propose investment in the purification of contaminated water as a form of disaster relief, which can provide Filipinos with nutrients to boost their immunity in times of disaster and limit the number of deaths caused by the consumption of contaminated water during a crisis.

3.1 Overview of Project

Our group proposes to build a solar distillation plant (Figure _) within a safe, semi-underground bunker. The bunker will contain a generator to power certain parts of the plant. Seawater will be fed into the still via underground pipes from the sea surrounding the southern part of Tacloban. The purified water produced by the distillation process will be infused with nutrients to boost the immunity of disaster victims once consumed. Hence, not only will our distillation plant produce potable water, but the water will also be nutritious, boosting victims' immunity in times of natural calamity. Potable water will then be distributed in drums and shared among Filipinos.

Figure _: Mechanism of our solar distillation plant, Epione Solar Still

3.2 Phase 1: Water Purification System

3.2.1 Water extraction from the sea

The still is located near the sea, where seawater is abundant. Seawater is extracted from the low-flow open sea (Figure _) and then pumped into our solar still.

Figure _: Intake structure of seawater (Seven Seas Water Corporation, n.d.)

3.2.2 Purification of Seawater

Solar energy heats up the water in the solar still. The water evaporates and condenses on the cooler glass surface of the ceiling of the still. Pure droplets of water slide down the glass into the collecting basin, where nutrients diffuse into the water.

Figure 6: Mechanism of Epione Solar Still
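For a sense of how much water the mechanism above could yield, the following back-of-envelope sketch estimates daily output per square metre of still. The insolation, efficiency and latent-heat values are assumptions for illustration; none of them comes from this proposal, and real output depends on weather and still design.

```python
# Back-of-envelope daily yield of a simple solar still.
# All inputs are assumed values for illustration, not figures from this proposal.
insolation_kwh_per_m2_day = 5.0   # assumed average tropical solar insolation
still_efficiency = 0.35           # assumed fraction of solar energy driving evaporation
latent_heat_kwh_per_kg = 0.63     # ~2.26 MJ/kg latent heat of vaporisation of water

# One kilogram of distillate is roughly one litre.
yield_litres_per_m2_day = insolation_kwh_per_m2_day * still_efficiency / latent_heat_kwh_per_kg
print(f"Estimated yield: {yield_litres_per_m2_day:.1f} L per m2 per day")  # roughly 2.8 L
```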

3.3 Phase 2: Nutrient Infuser

Borrowing the semi-permeable membrane used in reverse osmosis systems (Figure _), the nutrient infuser separates a nutrient concentrate from the newly purified water and lets the vitamins and minerals diffuse across the membrane into the condensed water. The nutrient-infused water provides nourishment, making victims of natural disasters less vulnerable and susceptible to illnesses and diseases thanks to a stronger immune system. This will help the Filipinos in Tacloban, Leyte get back on their feet quickly after a natural disaster and minimise the death toll as much as possible.

Figure _: How does reverse osmosis work (Water Filter System Guide, n.d.)

Nutrient / mineral, function, and upper tolerable limit (the highest amount that can be consumed without health risks):

Vitamin A. Function: helps to form and maintain healthy teeth, bones, soft tissue, mucous membranes and skin. Upper tolerable limit: 10,000 IU/day.

Vitamin B3 (niacin). Function: helps maintain healthy skin and nerves; has cholesterol-lowering effects. Upper tolerable limit: 35 mg/day.

Vitamin C (ascorbic acid, an antioxidant). Function: promotes healthy teeth and gums; helps the body absorb iron and maintain healthy tissue; promotes wound healing. Upper tolerable limit: 2,000 mg/day.

Vitamin D (the “sunshine vitamin”, made by the body after being in the sun). Function: helps the body absorb calcium; helps maintain proper blood levels of calcium and phosphorus. Upper tolerable limit: 1,000 micrograms/day (4,000 IU).

Vitamin E (tocopherol, an antioxidant). Function: plays a role in the formation of red blood cells. Upper tolerable limit: 1,500 IU/day.

Figure _: Table of functions and amount of nutrients that will be diffused into our Epione water. (WebMD, LLC, 2016)
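A practical check on the nutrient infuser is that dosing must respect the upper tolerable limits in the table above. The minimal sketch below assumes the 2 litres per person per day intake figure used later for drum sizing and computes the highest safe concentration per litre for each nutrient; the units follow the table.

```python
# Maximum nutrient concentration per litre so that drinking 2 L/day stays below
# the upper tolerable limits listed in the table above (units follow the table).
daily_intake_litres = 2.0  # the intake figure this proposal uses for drum sizing

upper_limit_per_day = {
    "Vitamin A (IU)": 10_000,
    "Vitamin B3 (mg)": 35,
    "Vitamin C (mg)": 2_000,
    "Vitamin D (IU)": 4_000,
    "Vitamin E (IU)": 1_500,
}

for nutrient, limit in upper_limit_per_day.items():
    print(f"{nutrient}: at most {limit / daily_intake_litres:g} per litre")
```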

3.4 Phase 3: Distribution of water to households in Tacloban, Leyte

Potable water will be collected into drums (Figure _) of 100 litres each; one drum can supply 50 people for a day, since average intake is about 2 litres per person per day. These drums will then be distributed to the tent cities in Tacloban, Leyte, our targeted area, should a natural disaster strike. Locals will thus have potable water within reach, which is crucial for their survival in times of natural calamity.
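The drum arithmetic follows directly from figures already quoted in this proposal; the sketch below works it through for Tacloban's 2015 census population, purely as an order-of-magnitude estimate.

```python
import math

# Figures quoted in this proposal: 100 L drums, 2 L per person per day,
# and Tacloban's population of 242,089 (2015 census).
population = 242_089
litres_per_person_per_day = 2
drum_capacity_litres = 100

people_per_drum_per_day = drum_capacity_litres // litres_per_person_per_day
drums_needed_per_day = math.ceil(population * litres_per_person_per_day / drum_capacity_litres)

print(people_per_drum_per_day)   # 50 people served by one drum for one day
print(drums_needed_per_day)      # about 4,842 drums per day for the whole city
```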

Figure _: Rain barrels will be used to store the purified and nutrient-infused water (Your Easy Garden, n.d.)

3.5 Stakeholders

3.5.1 The HopeGel Project

HopeGel is a nutrient- and calorie-dense protein gel designed to aid children suffering from malnutrition caused by severe food insecurity brought on by droughts (Glenroy Inc., 2014). HopeGel has been distributed in Haiti, where malnutrition is the number one cause of death among children under five, largely because frequent natural disasters have devastated the already impoverished country. (Figure _) Implementing the Epione Solar Still with this organisation helps it achieve its objective of addressing the global issue of severe acute malnutrition in children, as most victims of natural disasters lack the nourishment they need (HopeGel, n.d.)

Figure _: HopeGel, a packaged nutrient and calorie-dense protein gel (Butschli, HopeGel, n.d.)

3.5.2 Action Against Hunger (AAH)

Action Against Hunger is a relief organisation that develops and carries out programmes in nutrition, health, water and food security for countries in need (Action Against Hunger, n.d) (Figure _). AAH also runs disaster-preparedness programmes that aim to anticipate and prevent humanitarian crises (GlobalCorps, n.d.). With 40 years of expertise and 14.9 million people helped across more than 45 countries, AAH is no stranger to humanitarian crises. Implementing the Epione Solar Still helps it achieve its aim of saving lives by extending help, through the purification of seawater and the infusion of nutrients into it, to Filipinos in Tacloban, Leyte who are deprived of a basic need by disaster-related water contamination.

Figure _: Aims and Missions of Action Against Hunger (AAH, n.d.)


Analyse the use of ICTs in a humanitarian emergency

INTRODUCTION

The intention of this essay is to analyse the use of ICTs in a humanitarian emergency. The specific case study discussed is “Multi-level functionality of social media in the aftermath of the Great East Japan Earthquake” by Jung, J., and Moro, M. (2014). That report emphasises that social media networks such as Twitter and Facebook can be used to spread and gather important information in emergency situations rather than serving solely as social networking platforms. ICTs have changed the way people gather information during disasters, and social media, especially Twitter, became an important source of information in these disasters.

Literature Review

Case studies of the use of ICTs in a humanitarian emergency can take either a technically rational perspective or a socially embedded perspective. A technically rational perspective concerns what to do and how to achieve a given purpose; it is a prescription for design and action. A socially embedded perspective focuses on the particular case, where the process of work is shaped by culture, place and human nature. In this essay, we examine different humanitarian disaster cases in which ICTs played a vital role, to see whether the authors adopt a technically rational or a socially embedded perspective.

In the article “Learning from crisis: Lessons in human and information infrastructure from the World Trade Center response” (Dawes, Cresswell et al. 2004), the authors adopt a technical/rational perspective. 9/11 was a very large incident and no one was ready to deal with an attack of that size, but as soon as it happened procedures started changing rapidly. Government, NGOs and disaster response units started learning and produced new prescriptions that can be used universally and in disasters of any size. For example, the main communication infrastructure, supplied by Verizon, was damaged; although different communication suppliers offered their services, they all relied on the physical infrastructure supplied by Verizon, so VoIP was used for communication between government officials and in the EOC building. Three main areas were identified where problems were found and new procedures adopted in response to the disaster: technology, information, and the inter-layered relationships between NGOs, government and the private sector. (Dawes, Cresswell et al. 2004)

In the article “Challenges in humanitarian information management and exchange: Evidence from Haiti” (Altay, Labonte 2014), the authors adopt a socially embedded perspective. The Haiti earthquake was one of the biggest disasters, killing some 500,000 people and displacing at least 2 million. Around 2,000 organisations went in to help, but there was no coordination between NGOs and the government over the humanitarian response. Organisations did not consider local knowledge; they assumed that no data was available. All the organisations had different standards and ways of working, so no one followed any common prescription. The technical aspect of HIME (humanitarian information management and exchange) was not working because the members of the humanitarian relief effort were not sharing humanitarian information. (Altay, Labonte 2014)

In the article “Information systems innovation in the humanitarian sector,” Information Technologies and International Development (Tusiime, Byrne 2011), the authors adopt a socially embedded perspective. Local staff were hired who had no prior experience or knowledge of working with such technology, which slowed down the implementation of the new system. Staff wanted to learn and use the new system, but the changes were made at such a pace that staff became overworked and stressed and lost interest in the innovation. Management decided to use COMPAS as the new system without realising that it was not completely functional and still had many issues, but went ahead with it anyway. When staff started using it, found the problems and received insufficient technical support, they had no choice but to go back to the old way of doing things (Tusiime, Byrne 2011). The whole process was affected by how work is done in that particular place and by people's behaviour.

In the article “Multi-level functionality of social media in the aftermath of the Great East Japan Earthquake” (Jung, Moro 2014), the authors adopt a technically rational perspective. In any future humanitarian disaster, social media can be used as an effective communication channel in conjunction with mass media. After the disaster, Twitter was used more as a means of spreading and gathering information than as a social media platform.

In the article “Information flow impediments in disaster relief supply chains,” Journal of the Association for Information Systems, 10(8), pp. 637-660 (Day, Junglas et al. 2009), the authors propose the development of IS for information sharing, based on Hurricane Katrina. They adopt a technically rational perspective because the development of IS for information flow within and outside an organisation is essential. Such an IS would help manage complex supply chains: supply chain management in a disaster situation is challenging compared with traditional supply chain management, and a supply chain IS should be able to cater for all types of dynamic information, as Day, Junglas and Silva (2009) suggest.

Case study Description:

On 11 March 2011, an earthquake of magnitude 9.0 hit the north-eastern part of Japan, followed by a tsunami. Thousands of people lost their lives and the infrastructure in that area was completely destroyed (Jung, Moro 2014). The tsunami wiped two towns off the map and the coastal maps had to be redrawn (Acar, Muraki 2011). On the same day, the cooling system of nuclear reactor No. 1 at Fukushima failed, and because of that nuclear accident the Japanese government declared a nuclear emergency. On the evening of the earthquake the government issued an evacuation order for a 3 km area around the reactor (Jung, Moro 2014). On March 12 a hydrogen explosion occurred in the reactor because of the failed cooling system, followed by another explosion two days later on March 14. The evacuation area, initially 3 km, was increased to 20 km to avoid exposure to nuclear radiation.

This was one of the biggest nuclear disasters the country had faced, so it was hard for the government to assess the scale of the disaster. Government officials had not come across this kind of situation before and could not estimate the damage caused by the incident; their unreliable information added to the public's confusion. They declared the accident level 5 on the international nuclear scale but later changed it to 7, the highest level on that scale. Media reporting was also confusing the public, and the combination of contradictory information from government and media increased the level of confusion.

In a disaster, mass media is normally the main source of information: broadcasters suspend their normal programming and devote most of their airtime to the disaster to keep people updated on the situation. Mass media usually provides very reliable information in a humanitarian disaster, but in the case of the Japan disaster media outlets contradicted one another; international media contradicted local media as well as the local government, so people started losing faith in the mass media and began relying on other sources of information. A second reason was that mass media is a traditional way of gathering information, and with changes in technology people had started using mobile phones and the internet. A third reason was that the broadcasting infrastructure was damaged and many people could not access television services, so they turned to video streaming sites such as Ustream and YouTube. People began using Twitter on a large scale to spread and gather news: the number of Twitter users increased by 30 percent within the first week of the disaster, and 60 percent of Twitter users found it useful for gathering or spreading information.

Case Study Analysis:

Twitter is a social media platform and micro-blogging website on which a single tweet may contain at most 140 characters. It differs from other social media platforms in that anyone can follow you without needing your authorisation; only registered members can tweet, but registration is not required to read messages. The authors of “Multi-level functionality of social media in the aftermath of the Great East Japan Earthquake” (Jung, Moro 2014) discuss five functionalities of Twitter with the help of a conceptual model of multi-level social media. The following figure describes the model's five primary functions.

Fig No 1 Source: (Jung, Moro 2014)

The five functionalities were derived from a survey and a review of selected Twitter timelines.

The first function is tweeting between individuals, also known as interpersonal communication. This is the micro level of the conceptual model: people inside and outside the country connected with people in the affected area. Most of these tweets were to check that others were safe after the disaster, to tell loved ones that you were in the affected area and needed help, or to let people know that you were safe. In the first three days, a high percentage of tweets came through this micro-level communication channel.

The second function is a communication channel for local organisations, local government and local media, the meso level of the conceptual model. In this channel, local governments opened new accounts and reactivated accounts that had not been used for a while to keep their residents informed, and the number of followers of these accounts grew very quickly. People understood the importance and benefits of social media after the disaster: even with the infrastructure damaged and electricity cut off, they were still able to get information about the disaster and tsunami warnings. Local government and local media used Twitter accounts to issue alerts and news; for example, the tsunami alert was issued on Twitter, and after the tsunami the damage reports were released on Twitter. Local media opened new Twitter channels and kept people informed about the situation. Other organisations, such as the embassies of different countries, used Twitter to keep their nationals informed about the disaster; this was the best channel between embassies and their nationals, who could also let their embassy know if they were stuck in an affected area and needed help, since being away from one's own country leaves a person particularly vulnerable.

The third function is communication by the mass media, known as the macro level. Mass media used social platforms to broadcast their news because the infrastructure was damaged and people in the affected area could not access their broadcasts. People outside the country also could not access local mass media news on television, so they watched the news on video streaming websites; as demand increased, most mass media outlets opened accounts on social media to meet it, and began broadcasting their news on video streaming websites such as YouTube and Ustream. Mass media also posted news updates several times a day on Twitter, and many readers retweeted them, so information spread at very high speed.

The fourth function is information sharing and gathering, known as a cross-level function. Individuals used social media to get information about the earthquake, tsunami and nuclear accident. When someone searched for information, they came across tweets from the micro, meso and macro levels. This level is of great use when you are looking for help and want to know what other people would have done in that situation. Research on the Twitter timelines shows that on the day of the earthquake people were tweeting about the shelters available and about transport information (Jung, Moro 2014).

The fifth function is direct channels between individuals and the mass media, government and the public, also considered a cross-level function. Through this level, individuals could inform government and mass media about the situation in affected areas; because of the disaster there were places that government and mass media could not reach, so they did not know the situation there. The mayor of Minami-soma, a city 25 miles from Fukushima, used YouTube to tell the government about the radiation threat to his city; the video went viral and the Japanese government came under international pressure to evacuate the city. (Jung, Moro 2014)

Reflection:

There was a gradual change in the use of social media, from a social platform to a communication tool, in the event of a disaster. Multi-level functionality is one of its important characteristics and connects it very well with existing media. This amounts to a complete prescription that can be used during and after any kind of disaster: social media can be used alongside other media as an effective communication method to prepare for emergencies in any future disaster situation.

Twitter played a big role in communication during the disaster in Japan. It was used to spread and gather information about the earthquake, tsunami and nuclear reactor accident, to request help, to issue warnings, and also to express condolences. Twitter has many benefits, but it also has drawbacks that need to be rectified. The biggest issue is the unreliability of tweets: anyone can tweet any information and there is no check on it; only the person who tweets is responsible for its accuracy. There is no control over false information, and it spreads so fast that contradictory information can create anxiety. For example, if false information about the extent of the radiation had been released by one individual and retweeted by others with no knowledge of the effects of radiation and nuclear accidents, it could have caused panic. In a disaster, it is very important that reliable and correct information is released.

Information systems can play a vital role in humanitarian disasters in all respects. They can be used for better communication and to improve the efficiency and accountability of an organisation. Data becomes widely available within the organisation, so finances can be monitored, and different operations, such as transport, supply chain management, logistics, finance and monitoring, can be coordinated.

Social media has played a significant role in communicating, disseminating and storing data related to disasters. There is a need to control the information spread over social media, since not all of it is authentic or verified.

IS-based tools need to be developed for disaster management in order to get the best results from the varied range of data extracted from social media and to take the necessary action for the wellbeing of people in a disaster area.

The outcome of using a purpose-built IS would support decision making and the development of a strategy to deal with the situation. A disaster management team would be able to analyse the data in order to train the team for a disaster situation.


Renewable energy in the UK

The 2014 IPCC report stated that anthropogenic emissions of greenhouse gases have led to unprecedented levels of carbon dioxide, methane and nitrous oxide in the environment. The report also stated that the effect of greenhouse gases is extremely likely to have caused the global warming we have witnessed since the 20th century.

The 2018 IPCC report set new targets, aiming to limit climate change to a maximum of 1.5°C. To reach this, we will need zero CO₂ emissions by the year 2050. Previous IPCC targets of 2°C change allowed us until roughly 2070 to reach zero emissions. This means government policies will have to be reassessed and current progress reviewed in order to confirm whether or not the UK is capable of reaching zero emissions by 2050 on our current plan.

Electricity Generation

Fossil fuels are natural fuels formed from the remains of prehistoric plant and animal life. Fossil fuels (coal, oil and gas) are crucial in any look at climate change as when burned they release both carbon dioxide (a greenhouse gas) and energy. Hence, in order to reach the IPCC targets the UK needs to drastically reduce its usage of fossil fuels, either through improving efficiency or by using other methods of energy generation.

Whilst coal is a cheap energy source used to generate approximately 40% of the world's electricity, it is arguably the most damaging to the environment, as coal releases more carbon dioxide per unit of energy produced than any other fuel source. Coal power stations generate electricity by burning coal in a combustion chamber and using the heat energy to transform water into steam, which turns the propeller-like blades within the turbine. A generator (consisting of tightly wound metal coils) is mounted at one end of the turbine and, when rotated at high velocity through a magnetic field, generates electricity. However, the UK has pledged to fully eradicate the use of coal in electricity generation by 2025. These claims are well substantiated by the UK's rapid decline in coal use: in 2015 coal accounted for 22% of electricity generated in the UK, by the second quarter of 2017 this was down to only 2%, and in April 2018 the UK even managed to go 72 hours without using coal power.

Natural gas became a staple of British electricity generation in the 1990s, when the Conservative government privatised the electricity supply industry. The “dash for gas” was triggered by legal changes within the UK and EU allowing greater freedom to use gas in electricity generation.

Whilst natural gas emits less CO₂ than coal, it emits far more methane. Methane doesn’t remain in the atmosphere as long but it traps heat to a far greater extent. According to the World Energy Council methane emissions trap 25 times more heat than CO₂ over a 100 year timeframe.
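As a simple illustration of that 100-year factor, methane emissions are often converted to a CO₂-equivalent figure by multiplying the emitted mass by the global warming potential; the tonnage used below is purely illustrative.

```python
# CO2-equivalent of a methane release using the 100-year factor of 25 quoted above.
# The 1,000-tonne emission figure is purely illustrative.
methane_tonnes = 1_000
gwp_100yr_methane = 25
print(methane_tonnes * gwp_100yr_methane, "tonnes CO2-equivalent")  # 25000
```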

Natural gas produces electrical energy in a gas turbine: the gas is mixed with compressed air and burned in a combustor. The hot gas then pushes the turbine blades and, as in a coal plant, the turbine is attached to a generator, creating electricity. Gas turbines are hugely popular as they are a cheap source of energy generation and can quickly be powered up to respond to surges in electrical demand.

Combined Cycle Gas Turbines (CCGT) are an even better source of electrical generation. Whilst traditional gas turbines are cheap and fast-reacting, they only have an efficiency of approximately 30%. Combined cycle turbines, however, are gas turbines used in combination with steam turbines giving an efficiency of between 50 and 60%. The hot exhaust from the gas turbine is used to create steam which rotates turbine blades and a generator in a steam turbine. This allows for greater thermal efficiency.
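A common way to see where the 50-60% figure comes from is to treat the steam cycle as recovering the heat the gas turbine rejects. The sketch below uses assumed stage efficiencies chosen only to illustrate the combination; they are not measured values, and losses in the heat-recovery boiler are ignored.

```python
# Combined-cycle efficiency: the steam cycle recovers heat rejected by the gas turbine.
#   eta_combined = eta_gas + (1 - eta_gas) * eta_steam   (heat-recovery losses ignored)
# The stage efficiencies below are assumptions chosen to land in the quoted 50-60% range.
eta_gas = 0.35    # assumed simple-cycle gas turbine efficiency
eta_steam = 0.35  # assumed efficiency of the steam cycle on the recovered heat

eta_combined = eta_gas + (1 - eta_gas) * eta_steam
print(f"Combined-cycle efficiency: {eta_combined:.0%}")  # about 58%
```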

Nuclear energy is a potential way forward, as no CO₂ is emitted by nuclear power plants. Nuclear plants aim to capture the energy released by atoms undergoing nuclear fission. In nuclear fission, a nucleus absorbs a neutron as they collide, making the nucleus unstable; the unstable nucleus then splits into fission products of smaller mass and emits two or three high-speed neutrons, which can collide with more nuclei, making them unstable and creating a chain reaction. The heat energy produced by splitting the atom is used to produce steam, which drives a turbine generator to produce electricity.

Currently, 21% of electricity generated in the UK comes from nuclear energy. In the 1990s, 25% of electricity came from nuclear energy, but old plants have gradually been retired, and by 2025 UK nuclear capacity could halve. This is due to a multitude of reasons. Firstly, nuclear power is expensive in comparison to gas and coal. Secondly, nuclear waste is extremely radioactive and so must be dealt with properly. Also, in light of tragedies such as Chernobyl and Fukushima, much of the British public has expressed concerns about nuclear energy, with the Scottish government refusing to open more plants.

In order to lower our CO₂ emissions it is crucial we also utilise renewable energy. The UK currently gets very little of its energy from renewable sources but almost all future plans place a huge emphasis on renewables.

The UK has great wind energy potential, as it is the windiest country in the EU, receiving 40% of the total wind that blows across the EU.

Wind turbines are straightforward machinery: the wind turns the turbine blades around a rotor, which is connected to the main shaft, which spins a generator, creating electricity. In 2017, onshore wind generated enough energy to power 7.25 million homes a year and produced 9% of the UK's electricity. However, despite the clear benefits of clean, renewable energy, wind energy is not without its problems. Firstly, it is an intermittent supply: the turbine will not generate energy when there is no wind. It has also been opposed by members of the public for affecting the look of the countryside and for bird fatalities. These problems are magnified by the current Conservative government's stance on wind energy, which seeks to limit onshore wind farm development despite public opposition to this “ban”.

Heating and Transport

Currently it is estimated that a third of carbon dioxide (CO2) emissions in the UK come from the heating sector. 50% of all heat emissions in the UK arise from domestic use, making it the main source of CO2 emissions in the heating sector; around 98% of domestic heating is used for space and water heating. The government has sought to reduce emissions from domestic heating by issuing a series of regulations on new boilers: as of 1 April 2005, all new installations and replacements of boilers are required to be condensing boilers. As well as producing much lower CO2 emissions, condensing boilers are around 15-30% more efficient than older gas boilers. Reducing heat demand has also been an approach taken to reduce emissions. For instance, building standards in the UK have set higher levels of required thermal insulation for both domestic and non-domestic buildings when refurbishing and carrying out new projects. These policies are key to ensuring that both homes and industrial buildings are as efficient as possible at conserving heat.

Although progress is being made in improving current CO2-reducing systems, the potential for significant CO2 reductions relies upon low-carbon technologies. Highly efficient technologies such as residential heat pumps and biomass boilers have the potential to be carbon-neutral sources of heat and in doing so could massively reduce CO2 emissions for domestic use. However, finding the best route to a decarbonised future in the heating industry relies upon more than just which technology has the lowest carbon footprint. For instance, intermittent technologies such as solar thermal collectors cannot provide a sufficient level of heat in winter and require a back-up source of heat, making them a less desirable option. Cost is also a major factor in consumer preference: for most consumers, a boiler is the cheapest option for heating. This poses a problem for low-carbon technologies, which tend to have significantly higher upfront costs. In response, the government has introduced policies such as the Renewable Heat Incentive, which aims to offset the expense by paying consumers for each unit of heat produced by low-carbon technologies. Around 30% of the heating sector is accounted for by industrial use, making it the second-largest source of CO2 in this sector. Currently, combined heat and power (CHP) is the main process used to make industrial heat use more efficient and has shown CO2 reductions of up to 30%. Although this is a substantial reduction, alternative technology has the potential to deliver even greater reductions: the process of carbon capture and storage (CCS), for example, could reduce CO2 emissions by up to 90%. However, CCS is a complex procedure that would require a substantial amount of funding and as a result is not currently implemented for industrial use in the UK.

Although heating is a significant contributor to CO2 emissions in the UK, much progress is also needed elsewhere. In 2017 it was estimated that 34% of all carbon dioxide (CO2) emissions in the UK were caused by transport, widely thought to be the sector in which the least progress is being made, with only a 2% reduction in CO2 emissions since 1990. Road transport contributes the highest proportion of emissions, more specifically petrol and diesel cars. Despite the average CO2 emissions of new vehicles declining, the carbon footprint of the transport industry continues to increase due to the larger number of vehicles in the UK.

In terms of progress, the CO2 emissions of new cars in 2017 were estimated to be 33.1% lower than in the early 2000s. Although efficiencies are improving, more must be done if we are to meet the targets set by the Climate Change Act 2008. A combination of decarbonising transport and implementing government legislation is vital if these demands are to be met. New technology such as battery electric vehicles (BEVs) has the potential to deliver significant reductions in the transport sector; a report from the Committee on Climate Change suggests that 60% of all sales of new cars and vans should be ultra-low emission by 2030. However, the likelihood of achieving this is hindered by the constraints of new technologies: low-emission vehicles tend to have significantly higher costs and suffer from a lack of consumer awareness. This reinforces the need for government support in promoting new technologies and cleaner fuels. To support the development and uptake of low-carbon vehicles, the government has committed £32 million to fund BEV charging infrastructure from 2015-2020, and a further £140 million has been allocated to the Low Carbon Vehicle Innovation Platform, which supports the development and research of low-emission vehicles. Progress has also been made in making these vehicles more cost-competitive through exemption from taxes such as Vehicle Excise Duty and incentives such as plug-in grants of up to £3,500. Aside from passenger cars, improvements are also being made to the emissions of public transport: the average low-emission bus in London can reduce CO2 emissions by up to 26 tonnes per year, which has earned such buses support in England through the government's Green Bus Fund.

Conclusion

In 2017, renewables accounted for a record 29.3% of the UK's electricity generation. This is a vast improvement on previous years and suggests the UK is on track to meet the new IPCC targets, although a lot of work still needs to be done. However, government policies do need to be reassessed in light of the new targets. Scotland should reassess its nuclear policy, as this might be a necessary stepping stone to reduced emissions until renewables are able to fully power the nation, and the UK government needs to reassess its allocation of funding, as investment in clean energy is currently on a downward trajectory.

Although progress has been made in reducing CO2 emissions in the heat and transport sectors, emissions throughout the UK remain much higher than desired. The Committee on Climate Change report to Parliament (2015) calls for the widespread electrification of heating and transport by 2030 to help prevent a 1.5-degree rise in global temperature. This is likely to pose a major challenge and will require a significant increase in electricity generation capacity in conjunction with greater policy intervention to encourage the uptake of low-carbon technologies. Although the likelihood of all consumers switching to alternative technologies is slim, if the government continues to tighten regulations on fossil-fuelled technologies while the heat and transport industries continue to make old and new systems more efficient, significant CO2 reductions should follow in the future.


Is Nuclear Power a viable source of energy?

6th Form Economics project:

Nuclear power, the energy of the future of the 1950s, is now starting to feel like the past. Around 450 nuclear reactors worldwide currently generate 11% of the world's electricity, or approximately 2,500 TWh a year, just under the total nuclear power generated globally in 2001 and only 500 TWh more than in 1991. The number of operating reactors worldwide has seen the same stagnation, with an increase of only 31 since 1989, an annual growth of only 0.23% compared with 12.9% from 1959 to 1989. Most reactors, especially in Europe and North America, were built before the 90s, and the average age of reactors worldwide is just over 28 years. Large-scale nuclear accidents such as Chernobyl in 1986 or, much more recently, Fukushima in 2011 have negatively impacted public support for nuclear power and helped cause this decline, but the weight of evidence increasingly suggests that nuclear is safer than most other energy sources and has an incredibly low carbon footprint, shifting the argument against nuclear from concerns about safety and the environment to questions about its economic viability. The crucial question that remains is therefore how well nuclear power can compete against renewables to produce the low-carbon energy required to tackle global warming.

The costs of most renewable energy sources have been falling rapidly, and renewables are increasingly able to outcompete nuclear power as a low-carbon option, and even fossil fuels in some places; photovoltaic panels, for example, halved in price from 2008 to 2014. Worse still for nuclear power, while the costs of renewable energy have been falling, plans for new nuclear plants have been plagued by delays and additional costs. In the UK, Hinkley Point C power station is set to cost £20.3bn, making it the world's most expensive power station, and significant issues in the design have raised questions as to whether the plant will be completed by 2025, its current goal. In France, the Flamanville 3 reactor is now predicted to cost three times its original budget, and several delays have pushed the start-up date, originally set for 2012, to 2020. The story is the same in the US, where delays and extra costs have plagued the construction of the Vogtle 3 and 4 reactors, now due to be completed by 2020-21, four years over their original target. Nuclear power seemingly cannot deliver the cheap, carbon-free energy it promised and is being outperformed by renewable energy sources such as solar and wind.

The crucial and recurring issue with nuclear power is that it requires huge upfront costs, especially when plants are built individually, and can only provide revenue years after the start of construction. Investment in nuclear is therefore risky, long term and cannot be done well on a small scale, though new technologies such as SMRs (Small Modular Reactors) may change this in the coming decades; this makes it a much bigger gamble. Improvements in other technologies over the period in which a nuclear plant is built mean that it is often better for private firms, which are less likely to be able to afford large-scale programmes that enable significant cost reductions or a lower debt-to-equity ratio in their capital structure, to invest in more easily scalable and shorter-term energy sources, especially with subsidies favouring renewables in many developed countries. All of this points to the fundamental flaw of nuclear: that it requires going all the way. Small-scale nuclear programmes that are funded mostly with debt, that face high discount rates and that run at low capacity factors because they are switched off frequently will invariably have a very high levelised cost of energy (LCOE), because nuclear is so capital intensive.
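A simplified LCOE sketch illustrates why the discount rate matters so much for a capital-intensive plant. Every figure below (capital cost, build time, output, opex, lifetime) is an assumption chosen for illustration; none comes from this essay or its sources.

```python
# Simplified LCOE: discounted lifetime costs divided by discounted lifetime output.
# Every number below is an assumption for illustration, not data from this essay.
def lcoe_per_mwh(capex, annual_opex, annual_mwh, life_years, rate, build_years=7):
    # Construction cost spread evenly over the build period, then opex and output
    # discounted over the operating lifetime.
    costs = sum((capex / build_years) / (1 + rate) ** t for t in range(build_years))
    costs += sum(annual_opex / (1 + rate) ** t
                 for t in range(build_years, build_years + life_years))
    energy = sum(annual_mwh / (1 + rate) ** t
                 for t in range(build_years, build_years + life_years))
    return costs / energy

annual_mwh = 1_000 * 8_760 * 0.9          # assumed 1 GW plant at 90% capacity factor
for rate in (0.03, 0.07, 0.10):
    cost = lcoe_per_mwh(6e9, 1.2e8, annual_mwh, 60, rate)
    print(f"{rate:.0%} discount rate: ~${cost:.0f}/MWh")
# The result rises steeply with the discount rate (roughly $46, $87 and $129/MWh here),
# which is the capital-intensity effect described above.
```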

That said, the reverse is true as well. Nuclear plants have very low operating costs and almost no external costs, and the cost of decommissioning a plant is only a small portion of the initial capital cost, even with a low discount rate such as 3%, due to the long lifespan of a nuclear plant and the fact that many can be extended. Operating costs include fuel costs, which are extremely low for nuclear, at only 0.0049 USD per kWh, and non-fuel operation and maintenance costs, which are barely higher at 0.0137 USD per kWh. This includes waste disposal, a frequently cited political issue that has not been a technical obstacle for decades, as waste can be reused relatively well and stored on site safely at very low cost, simply because the quantity of fuel used, and therefore of waste produced, is so small. The fuel, uranium, is abundant, and technology enabling uranium to be extracted from seawater would give access to a 60,000-year supply at present rates of consumption, so costs from ‘resource depletion’ are also small. Finally, external costs represent a very small proportion of running costs: the highest estimates for health costs and potential accidents are 5€/MWh and 4€/MWh respectively, though some estimates fall to only 0.3€/MWh for potential accidents when past records are adjusted to factor in improvements in safety standards; these estimates vary significantly because the total number of reactors is very small.

Nuclear power therefore remains one of the cheapest ways to produce electricity in the right circumstances, and many LCOE (levelised cost of energy) estimates, which are designed to factor in all costs over the lifetime of a unit to give a more accurate representation of the costs of different types of energy (though they usually omit system costs), point to nuclear as a cheaper energy source than almost all renewables and most fossil fuels at low discount rates.

LCOE costs taken from ‘Projected Costs of Generating Electricity 2015 Edition’ and system costs taken from ‘Nuclear Energy and Renewables (NEA, 2012)’ have been combined by the World Nuclear Association to give LCOE figures for four countries and compare the costs of nuclear with other energy sources. A discount rate of 7% is used, the study applies a $30/t CO2 price on fossil fuel use, and 2013 US$ values and exchange rates are used. It is important to bear in mind that LCOE estimates vary widely, as they assume different circumstances and are very difficult to calculate, but it is clear from the graph that nuclear power is more than viable, being the cheapest source in three of the four countries and the third cheapest in the fourth, behind onshore wind and gas.


Decision making during the Fukushima disaster

Introduction

On March 11, 2011, a tsunami struck the east coast of Japan, which resulted in a disaster at the Fukushima Daiichi nuclear power plant. In the days following the natural disaster, many decisions were made with regard to managing the crisis. This paper will examine those decisions. The Governmental Politics Model, a model designed by Allison and Zelikow (1999), will be adopted to analyse the events. The research question of this paper is therefore: to what extent does the Governmental Politics Model explain the decisions made during the Fukushima disaster?

First, this paper will lay the theoretical basis for the analysis: the Governmental Politics Model and all crucial concepts within it are discussed. A description of the Fukushima case will follow; since the reader is expected to have general knowledge of the Fukushima nuclear disaster, the case description will be brief. The theoretical framework and case study together lay the basis for the analysis, which will look into the decisions that government and Tokyo Electric Power Company (TEPCO) officials made during the crisis.

Theory

Allison and Zelikow designed three models to understand the outcomes of bureaucracies and decision making, developed in their analysis of the 1962 Cuban Missile Crisis. The first to be designed was the Rational Actor Model, which focuses on the ‘logic of consequences’ and rests on the basic assumption of rational action by a unitary actor. The second is the Organizational Behavioural Model, which focuses on the ‘logic of appropriateness’ and assumes loosely connected allied organizations (Broekema, 2019).

The third model devised by Allison and Zelikow is the Governmental Politics Model (GPM), which centres on the importance of power in decision making. According to the GPM, decision making is not about rational, unitary actors or organizational output but about a bargaining game. Governments thus make decisions in other ways; the GPM identifies four aspects of this: the choices of one, the results of minor games, the results of central games, and foul-ups (Allison & Zelikow, 1999).

The following concepts are essential in the GPM. First, power in government is shared: different institutions have independent bases, and therefore power is shared. Second, persuasion is an important factor; the power to persuade differentiates power from authority. Third, bargaining according to the process is identified, meaning there is a structure to the bargaining processes. Fourth, ‘power equals impact on outcome’ is mentioned in Essence of Decision: there is a difference between what can be done and what is actually done, and what is actually done depends on the power involved in the process. Lastly, intranational and international relations are of great importance to the GPM; these relations are intertwined and involve a vast set of international and domestic actors (Allison & Zelikow, 1999).

The five previous concepts are not the only ones relevant to the GPM. The GPM is inherently based on group decisions, and in this type of decision making Allison and Zelikow identify seven factors. The first is positive: group decisions, when certain requirements are met, produce better decisions. The second is the agency problem, which includes information asymmetry and the fact that actors compete over different goals. Third, it is important to identify the actors in the ‘game’, that is, to find out who participates in the bargaining process. Fourth, problems with different types of decisions are outlined. Fifth, framing issues and agenda setting are important factors in the GPM. Sixth, group decisions are not necessarily positive; they can easily lead to groupthink, a negative consequence in which no other opinions are considered. Last, Allison and Zelikow outline the difficulties of collective action, which arise because the GPM considers not unitary actors but different organizations (Allison & Zelikow, 1999).

Besides the concepts mentioned above, the GPM comes with a concise paradigm, which is essential for the analysis of the Fukushima case. The paradigm consists of six main points. The first is that decisions are the result of politics; this is the GPM's core claim that decisions are the result of bargaining. Second, as said before, it is important to identify the players of the political ‘game’, their preferences and goals, and the kind of impact they can have on the final decision. Once this is analysed, one has to look at the actual game that is played: the action channels and the rules of the game can then be determined. Third, the ‘dominant inference pattern’ again goes back to decisions being the result of bargaining, but makes clear that differences and misunderstandings have to be taken into account. Fourth, Allison and Zelikow identify ‘general propositions’, a term that covers all the concepts examined in the second paragraph of the theory section of this paper. Fifth, ‘specific propositions’ apply to decisions on the use of force and military action. Last is the importance of evidence: when examining crisis decision making, documented timelines and, for example, minutes or other accounts are of great importance (Allison & Zelikow, 1999).

Case

By the definition of Prins and Van den Berg (2018), the Fukushima Daiichi disaster can be regarded as a safety case, because it was an unintentional event that caused harm to humans.

The crisis was initiated by an earthquake of magnitude 9.0 on the Richter scale, followed by a tsunami whose waves reached a height of 10 meters. Due to the earthquake, all external power lines, which are needed for cooling the fuel rods, were disconnected. Countermeasures for this issue were in place; however, the seawalls were unable to protect the nuclear plant from flooding, which made the countermeasures, the diesel generators, inadequate (Kushida, 2016).

Due to the lack of electricity, the nuclear fuel rods were not cooled, and a ‘race for electricity’ started. Eventually the essential decision to inject seawater was made. Moreover, the situation inside the reactors was unknown; meltdowns had already occurred in reactors 1 and 2. Because of the risk of explosions, the decision to vent the reactors was made. Nevertheless, hydrogen explosions occurred in reactors 1, 2 and 4, which in turn exposed the environment to radiation. To counter the spread of radiation, the decision was made to inject seawater into the reactors (Kushida, 2016).

Analysis

This analysis will look into the decision, or decisions, to inject seawater into the damaged reactors. First, a timeline of the decisions will be outlined to build on the case study above. Then the events and decisions will be set against the GPM paradigm and its six main points as described in the theory section.

The need to inject seawater arose after the first stages described in the case study had passed. According to Kushida, government officials and political leaders began voicing the necessity of injecting the water at 6:00 p.m. on March 12, the day after the earthquake. According to these officials it would have one very positive outcome, namely the cooling of the reactors and the fuel pool. However, the use of seawater might have negative consequences too: it would ruin the reactors because of the salt in the water, and it would produce vast amounts of contaminated water that would be hard to contain (Kushida, 2016). TEPCO experienced many difficulties in cooling the reactors, as described in the case study, because of the lack of electricity. Even so, it was averse to injecting seawater into the reactors, since this would ruin them. Still, after the first hydrogen explosion occurred in reactor 1, TEPCO plant workers started injecting seawater into that reactor (Holt et al., 2012). A day later, on March 13, seawater injection started in reactor 3, and on March 14 in reactor 2 (Holt et al., 2012).

When looking at the decisions made by the government or by TEPCO plant workers, it is crucial to consider the chain of decision making within TEPCO leadership too. TEPCO leadership was at first not in favour of injecting seawater because of the disadvantages mentioned earlier: the plant would become unusable in the future and vast amounts of contaminated water would be created. Therefore, the government had to issue an order to TEPCO to start injecting seawater, which it did at 8:00 p.m. on March 12. However, Yoshida, the Fukushima Daiichi plant manager, had already started injecting seawater at 7:00 p.m. (Kushida, 2016).

As one can already see, different interests were at play, and the eventual decision can well be regarded as a political resultant. Therefore, it is crucial to examine the chain of decisions through the GPM paradigm. The first factor of this paradigm concerns decisions as a result of bargaining, which can clearly be seen in the decision to inject seawater: TEPCO leadership was initially not a proponent of this method, but after government officials ordered them to execute the injection they had no choice. Second, according to the theory, it is important to identify the players of the ‘game’ and their goals. In this instance these divisions are easily identifiable, and three different players can be pointed out: the government, TEPCO leadership, and Yoshida, the plant manager. The government’s goal was to keep its citizens safe during the crisis, TEPCO wanted to maintain the reactors for as long as possible, whereas Yoshida wanted to contain the crisis. This shows that the players had conflicting goals.

To further apply the GPM to the decision to inject seawater, one can review the comprehensive ‘general propositions’. Here, miscommunication is a very relevant factor, and it was certainly a big issue in the decision to inject seawater. As said before, Yoshida had already started injecting seawater before he received approval from his superiors. One might even wonder whether TEPCO leadership misunderstood the crisis, given that they hesitated to inject the seawater necessary to cool the reactors. It can be argued that this hesitation reflects a great deal of misunderstanding of the crisis, since there was no plant left to be saved at the time the decision was made.

The fifth and sixth aspects of the GPM paradigm are less relevant to the decisions made. The ‘specific propositions’ refer to the use of force, which was not an option in dealing with the Fukushima crisis; the Japanese Self-Defence Forces were dispatched to the plant, but only to provide electricity (Kushida, 2016). Furthermore, the sixth aspect, evidence, is not as important in this case since many scholars, researchers and investigators have written extensively about what happened during the Fukushima crisis, so more than sufficient information is available.

The political and bargaining game in the decision to inject seawater into the reactors is clearly visible. The different actors in the game had different goals, but eventually the government won this game and the decision to inject seawater was made. Even before that, the plant manager had already started to inject seawater because the situation was too dire.

Conclusion

This essay reviewed decision making during the Fukushima Daiichi Nuclear Power Plant disaster of 11 March 2011. More specifically, the decision to inject seawater into the reactors to cool them was scrutinized using the Governmental Politics Model. The decision to inject seawater into the reactors was the result of a bargaining game in which different actors with different objectives played the decision-making ‘game’.

2019-3-18-1552918037

Tackling misinformation on social media

As the world of social media expands, the amount of miscommunication rises as more organisations hop on the bandwagon of using the digital realm to their advantage. Twitter, Facebook, Instagram, online forums and other websites have become the main source of news gathering for many individuals. Information is easily accessible to people from all walks of life, meaning that people are becoming more informed about real-life issues. Consumers absorb and take in information more easily than ever before, which proves to be equally advantageous and disadvantageous. There is, however, an evident boundary between misleading and truthful information that is hard to cross without research on the topic. The accuracy of public information is highly questionable, which can easily lead to problems. Although source credibility is debated on every platform, there are ways to tackle the issue through “expertise/competence (i. e., the degree to which a perceiver believes a sender to know the truth), trustworthiness (i. e., the degree to which a perceiver believes a sender will tell the truth as he or she knows it), and goodwill” (Cronkhite & Liska, 1976). This is why it has become critical for information to be accurate, ethical and reliable for consumers. Verifying information is important regardless of the type of social media outlet. This essay will highlight why information needs to meet these criteria.

Putting out credible information prevents and reduces misconceptions, convoluted meanings and inconsistent facts, which lowers the likelihood of issues surfacing. This in turn saves time for the consumer and the producer. The presence of risk raises the issue of how much of this information should be consumed by the public. The perception of source credibility becomes an important concept to analyse within social media, especially in times of crisis, when rationality declines and people often take the first thing they see at face value. With the increasing amount of information available through newer channels, the responsibility for judging information shifts away from professional producers and onto consumers (Haas & Wearden, 2003). Much of the public is unaware that this information is prone to bias and selective sharing, which can present the actual facts very differently. One such example is the incident at Tokyo Electric Power Co.’s Fukushima No.1 nuclear power plant in 2011, where the plant experienced triple meltdowns. A misconception has circulated that food exported from Fukushima is so contaminated with radioactive substances that it is unhealthy and unfit to eat. In truth, strict screening shows that the contamination is below the government standard needed to pose a threat. (arkansa.gov.au) Since then, products shipped from Fukushima have dropped considerably in price and have not recovered since 2011, forcing retailers into bankruptcy. (japantimes.co.jp) But thanks to the use of social media and organisations releasing information to the public, Fukushima was able to raise funds and receive help from other countries, for example the U.S. sending $100,000 and China sending emergency supplies as assistance. (theguardian.com) This would have been impossible to achieve without the sharing of credible, reliable and ethical information about the country and the social media support spotlighting the incident.

Accurate, ethical and reliable information opens the pathway for producers to secure a relationship with consumers, which can be used to strengthen their own businesses and expand their industries further while gaining support from the public. The idea is to have a healthy relationship, free of uneasiness, in which monetary gains and social standing increase, with social media playing a pivotal role in deciding which route the relationship takes. When this is done incorrectly, organisations can become unsuccessful because they know little to nothing about the changed dynamics of consumer behaviour in the digital landscape. Consumer informedness means that consumers are well informed about the products or services available, which influences their willingness to make decisions; this increase in consumer informedness can instigate change in consumer behaviour. (uni-osnabrueck.de) In the absence of accurate, ethical and reliable information, people and organisations will make terrible decisions without hesitation, which leads to losses and steps backwards. As Saul Eslake (Saul-Eslake.com) says, “they will be unable to help or persuade others to make better decisions; and no-one will be able to ascertain whether the decisions made by particular individuals or organisations were the best ones that could have been made at the time”. Recently, a YouTuber named Shane Dawson made a video that sparked controversy for the company Chuck E. Cheese over pizza slices that do not appear to belong to the same pizza. He put forward a theory that parts of the pizzas may have been reheated or recycled from other tables. Chuck E. Cheese responded in multiple media outlets to debunk the theory: “These claims are unequivocally false. We prep the dough daily for our made to order pizzas, which means they’re not always perfectly round, but they are still great tasting.” (https://twitter.com/chuckecheeses) It is worth noting that nothing other than pictures backs up the claim that the pizza was reused. The company has also gone so far as to create a video showing its pizza preparation, and ex-employees spoke up and shared their own side of the story to debunk the theory further. It was these quick responses that prevented what could have been a small downturn in sales for the Chuck E. Cheese company. (washintonpost.com) This event highlights how the release of information can work in favour of whoever uses it correctly, and how effective credible information can be, especially when it has the support of others, whether online or in real life. An assumption or guess made when there is no information available to draw on is called a ‘heuristic value’, which is associated with information that has no credibility.

Mass media have long been a dominant source of information (Murch, 1971). They are generally assumed to provide credible, valuable, and ethical information open to the public (Heath, Liao, & Douglas, 1995). However, alongside traditional forms of media, newer media are increasingly available for information seeking and reporting. According to PNAS (www.pnas.org), “The emergence of social media as a key source of news content has created a new ecosystem for the spreading of misinformation. This is illustrated by the recent rise of an old form of misinformation: blatantly false news stories that are presented as if they are legitimate. So-called “fake news” rose to prominence as a major issue during the 2016 US presidential election and continues to draw significant attention.” This affects how we as social beings perceive and analyse information we see online compared to real life. Beyond reducing any intervention’s effectiveness, failing to distinguish real stories from false ones increases belief in false content, leading to biased and misleading material that fools the audience. One such incident is Michael Jackson’s death in June 2009, when he died from acute propofol and benzodiazepine intoxication administered by his doctor, Dr. Murray. (nytimes.com) Much of the public concluded that Michael Jackson was murdered on purpose, but the court convicted Dr. Murray of involuntary manslaughter, as the doctor maintained that Jackson begged him to give more – a fact that was overlooked by the general public due to bias. This underlines how information is selectively picked up by the public and how not all information is revealed, in order to sway the audience. A study conducted online by Jason and his team (JCMC [CQU]) revealed that Facebook users tended to believe their friends almost instantly, even without a link or proper citation to a website to back up their claim: “Using a person who has frequent social media interactions with the participant was intended to increase the external validity of the manipulation.” In other words, whether information online is taken as truth is left to the perception of the viewer, which links to the idea that information online is not fully credible unless it comes straight from the source – underscoring the importance of releasing credible information.

Information has the power to inform, explain and expand on topics and concepts. But it also has the power to create inaccuracies and confusion, which hurts the public and damages the reputation of companies. The goal is to move forward, not backwards. Many companies have gotten themselves into disputes because of incorrect information, which could easily have been avoided by releasing accurate, ethical and reliable information from the beginning. False information can start disputes, and true information can provide resolution. The public has become less attentive to mainstream news altogether, which raises the problem of what can be trusted. Companies and organisations need their information to be as accurate and reliable as possible to counter and reduce this issue. Increased negativity and incivility exacerbate the media’s credibility problem: “People of all political persuasions are growing more dissatisfied with the news, as levels of media trust decline.” (JCMC [CQU]) In 2010, Dannon’s Activia yogurt was promoted in online statements and false advertisements as having “special bacterial ingredients.” A consumer named Trish Wiener lodged a complaint against Dannon. The yogurts were being marketed as “clinically” and “scientifically” proven to boost the immune system and help regulate digestion. However, the judge saw these statements as unproven, as with many other products in their line that carried the same claims. “This landed the company a $45 million class action settlement.” (businessinsider.com) It did not help that Dannon’s prices for their yogurt were inflated compared to other yogurts on the market. “The lawsuit claims Dannon has spent ‘far more than $100 million’ to convey deceptive messages to U.S. consumers while charging 30 percent more than other yogurt products.” (reuters.com) This highlights how inaccurate information can cost millions of dollars to settle and resolve. However, it also shows how the public can readily hold irresponsible producers to account for their actions and give leeway to justice.

2019-5-2-1556794982

Socio-political significance of Turkey’s emergent neo-Ottoman cultural phenomenon

Over the last decade, Turkey’s cultural sphere has witnessed a wave of Ottomania—a term describing the recent cultural fervor for everything Ottoman. Although this neo-Ottoman cultural phenomenon is not entirely new, since it had a previous cycle back in the 1980s and 1990s during the heyday of Turkey’s political Islam, it now has a rather novel characteristic and a distinct pattern of operation. This revived Ottoman craze is discernable in what I call the neo-Ottoman cultural ensemble—referring to a growing array of Ottoman-themed cultural productions and sites that evoke Turkey’s Ottoman-Islamic cultural heritage. For example, the celebration of the 1453 Istanbul conquest no longer merely takes place as an annual public commemoration by the Islamists,[1] but has been widely promulgated, reproduced, and consumed in various forms of popular culture such as: the Panorama 1453 History Museum; a fun ride called the Conqueror’s Dream (Fatih’in Rüyası) at the Vialand theme park; the highly publicized, high-grossing blockbuster The Conquest 1453 (Fetih 1453); and the primetime television costume drama The Conqueror (Fatih). It is the “banal,” or “mundane,” ways of everyday practice of society itself, rather than the government or state institutions, that distinguish this emergent form of neo-Ottomanism from its earlier phases.[2]

This is the context in which the concept of neo-Ottomanism has acquired its cultural dimension and analytical currency for comprehending the proliferating neo-Ottoman cultural phenomenon. However, when the concept is employed in contemporary cultural debates, it generally follows two trajectories that are common in the literature of Turkish domestic and foreign politics. These trajectories conceptualize neo-Ottomanism as an Islamist political ideology and/or a doctrine of Turkey’s foreign policy in the post-Cold War era. This essay argues that these two conventional conceptions tend to overlook the complexity and hybridity of Turkey’s latest phase of neo-Ottomanism. As a result, they tend to understand the emergent neo-Ottoman cultural ensemble as merely a representational apparatus of the neoconservative Justice and Development Party’s (AKP; Adalet ve Kalkınma Partisi) ideology and diplomatic strategy.

This essay hence aims to reassess the analytical concept of neo-Ottomanism and the emergent neo-Ottoman cultural ensemble by undertaking three tasks. First, through a brief critique of the concept of neo-Ottomanism, I will discuss its common trajectories and limitations for comprehending the latest phase of the neo-Ottoman cultural phenomenon. My second task is to propose a conceptual move from neo-Ottomanism to Ottomentality by incorporating the Foucauldian perspective of governmentality. Ottomentality is an alternative concept that I deploy here to underscore the overlapping relationship between neoliberal and neo-Ottoman rationalities in the AKP’s government of culture and diversity. I contend that neoliberalism and neo-Ottomanism are inseparable governing rationalities of the AKP and that their convergence has engendered new modes of governing the cultural field as well as regulating inter-ethnic and inter-religious relations in Turkey. Finally, I will reassess the neo-Ottoman cultural ensemble through the analytical lens of Ottomentality. I contend that the convergence of neoliberal and neo-Ottoman rationalities has significantly transformed the relationships of state, culture, and the social. As the cases of the television historical drama Magnificent Century (Muhteşem Yüzyıl) and the film The Conquest 1453 (Fetih 1453) shall illustrate, the neo-Ottoman cultural ensemble plays a significant role as a governing technique that constitutes a new regime of truth based on market mentality and religious truth. It also produces a new subject of citizenry, who is responsible for enacting its right to freedom through participation in the culture market, complying with religious norms and traditional values, and maintaining a difference-blind and discriminatory model of multiculturalism.

A critique of neo-Ottomanism as an analytical concept

Although the concept of neo-Ottomanism has been commonly used in Turkish Studies, it has become a loose term referring to anything associated with the Islamist political ideology, nostalgia for the Ottoman past, and imperialist ambition of reasserting Turkey’s economic and political influence within the region and beyond. Some scholars have recently indicated that the concept of neo-Ottomanism is running out of steam as it lacks meaningful definition and explanatory power in studies of Turkish politics and foreign policy.[3] The concept’s ambiguity and weak analytical and explanatory value are mainly due to the divergent, competing interpretations and a lack of critical evaluation within the literature.[4] Nonetheless, despite the concept being equivocally defined, it is most commonly understood in two identifiable trajectories. First, it is conceptualized as an Islamist ideology, responding to the secularist notions of modernity and nationhood and aiming to reconstruct Turkish identity by evoking Ottoman-Islamic heritage as an essential component of Turkish culture. Although neo-Ottomanism was initially formulated by a collaborative group of secular, liberal, and conservative intellectuals and political actors in the 1980s, it is closely linked to the consolidated socio-economic and political power of the conservative middle class. This trajectory considers neo-Ottomanism as primarily a form of identity politics and a result of political struggle in opposition to the republic’s founding ideology of Kemalism. Second, it is understood as an established foreign policy framework reflecting the AKP government’s renewed diplomatic strategy in the Balkans, Central Asia, and the Middle East, wherein Turkey plays an active role. This trajectory regards neo-Ottomanism as a political doctrine (often referring to Ahmet Davutoglu’s Strategic Depth, which serves as the guidebook for Turkey’s diplomatic strategy in the 21st century), which sees Turkey as a “legitimate heir of the Ottoman Empire”[5] and seeks to reaffirm Turkey’s position in the changing world order in the post-Cold War era.[6]

As a result of a lack of critical evaluation of the conventional conceptions of neo-Ottomanism, contemporary cultural analyses have largely followed the “ideology” and “foreign policy” trajectories as explanatory guidance when assessing the emergent neo-Ottoman cultural phenomenon. I contend that the neo-Ottoman cultural phenomenon is more complex than what these two trajectories offer to explain. Analyses that adopt these two approaches tend to run a few risks. First, they tend to perceive neo-Ottomanism as a monolithic imposition upon society. They presume that this ideology, when inscribed onto domestic and foreign policies, somehow has a direct impact on how society renews its national interest and identity.[7] And they tend to understand the neo-Ottoman cultural ensemble as merely a representational device of the neo-Ottomanist ideology. For instance, Şeyda Barlas Bozkuş, in her analyses of the Miniatürk theme park and the 1453 Panorama History Museum, argues that these two sites represent the AKP’s “ideological emphasis on neo-Ottomanism” and “[create] a new class of citizens with a new relationship to Turkish-Ottoman national identity.”[8] Second, contemporary cultural debates tend to overlook the complex and hybrid nature of the latest phase of neo-Ottomanism, which rarely operates on its own, but more often relies on and converges with other political rationalities, projects, and programs. As this essay shall illustrate, when closely examined, the current configuration of neo-Ottomanism is more likely to reveal internal inconsistencies as well as a combination of multiple and intersecting political forces.

Moreover, as a consequence of the two risks mentioned above, contemporary cultural debates may have overlooked some of the symptomatic clues and hence underestimated the socio-political significance of the latest phase of neo-Ottomanism. A major symptomatic clue that is often missed in cultural debates on the subject is culture itself. Insufficient attention has been paid to the AKP’s rationale of reconceptualizing culture as an administrative matter—a matter that concerns how culture is to be perceived and managed, by what culture the social should be governed, and how individuals might govern themselves with culture. At the core of the AKP government’s politics of culture and neoliberal reform of the cultural field is the question of the social.[9] Its reform policies, projects, and programs are a means of constituting a social reality and directing social actions. When culture is aligned with neoliberal governing rationality, it redefines a new administrative culture and new rules and responsibilities of citizens in cultural practices. Culture has become not only a means to advance Turkey in global competition,[10] but also a technology of managing the diversifying culture resulting from the process of globalization. As Brian Silverstein notes, “[culture] is among other things and increasingly to be seen as a major target of administration and government in a liberalizing polity, and less a phenomenon in its own right.”[11] While many studies acknowledge the AKP government’s neoliberal reform of the cultural field, they tend to regard neo-Ottomanism as primarily an Islamist political agenda operating outside of the neoliberal reform. It is my conviction that neoliberalism and neo-Ottomanism are inseparable political processes and rationalities, which have merged and engendered new modalities of governing every aspect of cultural life in society, including minority cultural rights, freedom of expression, individuals’ lifestyles, and so on. Hence, by overlooking the “centrality of culture”[12] in relation to the question of the social, contemporary cultural debates tend to oversimplify the emergent neo-Ottoman cultural ensemble as nothing more than an ideological machinery of the neoconservative elite.

From neo-Ottomanism to Ottomentality

In order to more adequately assess the socio-political significance of Turkey’s emergent neo-Ottoman cultural phenomenon, I propose a conceptual shift from neo-Ottomanism to Ottomentality. This shift involves not only rethinking neo-Ottomanism as a form of governmentality, but also thinking neoliberal and neo-Ottoman rationalities in collaborative terms. Neo-Ottomanism is understood here as Turkey’s current form of neoconservatism, a prevalent political rationality whose governmental practices are not solely based on Islamic values, but also draw from and produce a new political culture that considers Ottoman-Islamic toleration and pluralism as the foundation of modern liberal multiculturalism in Turkey. Neoliberalism, in the same vein, far from a totalizing concept describing an established political ideology or economic policy, is conceived here as a historically and locally specific form of governmentality that must be analyzed by taking into account the multiple political forces that have given it its unique shape in Turkey.[13] My claim is that when these two rationalities merge in the cultural domain, they engender a new art of government, which I call the government of culture and diversity.

This approach is therefore less concerned with a particular political ideology or the question of “how to govern” than with the “different styles of thought, their conditions of formation, the principles and knowledges that they borrow from and generate, the practices they consist of, how they are carried out, their contestations and alliances with other arts of governing.”[14] In light of this view, and for a practical purpose, Ottomentality is an alternative concept that I attempt to develop here to avoid the ambiguous meanings and analytical limitations of neo-Ottomanism. This concept underscores the convergence of neoliberal and neo-Ottoman rationalities as well as the interrelated discourses, projects, policies, and strategies that are developed around them for regulating cultural activities and directing inter-ethnic and inter-religious relations in Turkey. It pays attention to the techniques and practices that have significant effects on the relationships of state, culture, and the social. It is concerned with the production of knowledge, or truth, based on which a new social reality of ‘freedom,’ ‘tolerance,’ and ‘multiculturalism’ in Turkey is constituted. Furthermore, it helps to identify the type of political subject whose demand for cultural rights and participatory democracy is reduced to market terms and a narrow understanding of multiculturalism, and whose criticism of this new social reality is increasingly subjected to judicial exclusion and discipline.

I shall note that Ottomentality is an authoritarian type of governmentality—a specific type of illiberal rule operating within the structure of modern liberal democracy. As Mitchell Dean notes, although the literature on governmentality has focused mainly on liberal democratic rule that is practiced through the individual subjects’ active role (as citizens) and exercise of freedom, there are also “non-liberal and explicitly authoritarian types of rule that seek to operate through obedient rather than free subjects, or, at a minimum, endeavor to neutralize any opposition to authority.”[15] He suggests that a useful way to approach this type of governmentality would be to identify the practices and rationalities which “divide” or “exclude” those who are to be governed.[16] According to Foucault’s notion of “dividing practices,” “[t]he subject is either divided inside himself or divided from others. This process objectivizes him. Examples are the mad and the sane, the sick and the healthy, the criminals and the ‘good boys’.”[17] Turkey’s growing neo-Ottoman cultural ensemble can be considered as such exclusionary practices, which seek to regulate the diversifying culture by dividing the subjects into categorical, if not polarized, segments based on their cultural differences. For instance, mundane practices such as going to museums and watching television shows may produce subject positions which divide subjects into such categories as the pious and the secular, the moral and the degenerate, and the Sunni-Muslim-Turk and the ethno-religious minorities.

Reassessing the neo-Ottoman cultural ensemble through the lens of Ottomentality

In this final section, I propose a reassessment of the emergent neo-Ottoman cultural ensemble by looking beyond the conventional conceptions of neo-Ottomanism as “ideology” and “foreign policy.” Using the analytical concept of Ottomentality, I aim to examine the state’s changing role and governing rationality in culture, the discursive processes of knowledge production for rationalizing certain practices of government, and the techniques of constituting a particular type of citizenry who acts upon themselves in accordance with the established knowledge/truth. Nonetheless, before proceeding to an analysis of the government of culture and diversity, a brief overview of the larger context in which the AKP’s Ottomentality took shape would be helpful.

Context

Since the establishment of the Turkish republic, the state has played a major role in maintaining a homogeneous national identity by suppressing public claims of ethnic and religious differences through militaristic intervention. The state’s strict control of cultural life in society, in particular its assertive secularist approach to religion and its ethnic conception of Turkish citizenship, resulted in unsettling tensions between ethno-religious groups in the 1980s and 1990s, e.g. the Kurdish question and the 1997 “soft coup.” These social tensions indicated the limits of state-led modernization and secularization projects in accommodating ethnic and pious segments of society.[18] This was also a time when Turkey began to witness the declining authority of the founding ideology of Kemalism as an effect of economic and political liberalization. When the AKP came to power in 2002, one of the most urgent political questions was thus “the limits of what the state can—or ought for its own good—reasonably demand of citizens […] to continue to make everyone internalize an ethnic conception of Turkishness.”[19] At this political juncture, it was clear that a more inclusive socio-political framework was necessary in order to mitigate the growing tension resulting from identity claims.

Apart from domestic affairs, a few vital transnational initiatives also played a part in the AKP’s formulation of neoliberal and neo-Ottoman rationalities. First, in the aftermath of the attacks in New York on September 11 (9/11) in 2001, the Middle East and Muslim communities around the world became the target of intensified political debates. In the midst of anti-Muslim and anti-terror propaganda, Turkey felt a need to rebuild its image by aligning with the United Nations’ (UN) resolution of “The Alliance of Civilizations,” which called for cross-cultural dialogue between countries through cultural exchange programs and transnational business partnerships.[20] Turkey took on the leading role in this resolution and launched extensive developmental plans designed to rebuild Turkey’s image as a civilization of tolerance and peaceful co-existence.[21] The Ottoman-Islamic civilization, known for its legacy of cosmopolitanism and ethno-religious toleration, hence became an ideal trademark of Turkey for the project of “alliance of civilizations.”[22]

Second, Turkey’s accelerated EU negotiation between the late 1990s and mid 2000s provided a timely opportunity for the newly elected AKP government to launch “liberal-democratic reform,”[23] which would significantly transform the way culture was to be administered. Culture, among the prioritized areas of administrative reform, was now reorganized to comply with the EU integration plan. By incorporating the EU’s aspect of culture as a way of enhancing “freedom, democracy, solidarity and respect for diversity,”[24] the AKP-led national cultural policy would shift away from the state-centered, protectionist model of the Kemalist establishment towards one that highlights “principles of mutual tolerance, cultural variety, equality and opposition to discrimination.”[25]

Finally, the selection of Istanbul as the 2010 European Capital of Culture (ECoC) is particularly worth noting, as this event enabled local authorities to put into practice the neoliberal and neo-Ottoman governing rationalities through extensive urban projects and branding techniques. By sponsoring and showcasing different European cities each year, the ECoC program aims at promoting a multicultural European identity beyond national borders.[26] The 2010 Istanbul ECoC was an important opportunity for Turkey not only to promote its EU candidacy, but also for the local governments to pursue urban developmental projects.[27] Some of the newly formed Ottoman-themed cultural sites and productions were a part of the ECoC projects for branding Istanbul as a cultural hub where the East and West meet. It is in this context that the interplay between the neoliberal and neo-Ottoman rationalities can be vividly observed in the form of the neo-Ottoman cultural ensemble.

Strong state, culture, and the social

Given the contextual background mentioned above, one could argue that the AKP’s neoliberal and neo-Ottoman rationalities arose as critiques of the republican state’s excessive intervention in society’s cultural life. The transnational initiatives that required Turkey to adopt a liberal democratic paradigm have therefore given way to the formulation and convergence of these two forms of governmentality, which would significantly challenge the state-centered approach to culture as a means of governing the social. However, it would be inaccurate to claim that the AKP’s prioritization of private initiatives in cultural governance has effectively decentralized or democratized the cultural domain from the state’s authoritarian intervention and narrow definition of Turkish culture. Deregulation of culture entails sophisticated legislation concerning the roles of the state and civil society in cultural governance. Hence, for instance, the law on the promotion of culture, the law on media censorship, and the new national cultural policy prepared by the Ministry of Culture and Tourism explicitly indicate not only a new vision of national culture, but also the roles of the state and civil society in promoting and preserving national culture. It shall be noted that culture as a governing technology is not an invention of the AKP government. Culture has always been a major area of administrative concern throughout the history of the Turkish republic. As Murat Katoğlu illustrates, during the early republic, culture was conceptualized as part of the state-led “public service” aimed at informing and educating the citizens.[28] Arts and culture were essential means for modernizing the nation; for instance, the state-run cultural institutions, i.e. state ballet, theater, museum, radio and television, “[indicate] the type of modern life style that the government was trying to advocate.”[29] Nonetheless, the role of the state, the status of culture, and the techniques of managing it have been transformed as Turkey undergoes neoliberal reform. In addition, Aksoy suggests that what distinguishes the AKP’s neoliberal mode of cultural governance from that of the early republican modernization project is that market mentality has become the administrative norm.[30] Culture is now reconceptualized as an asset for advancing Turkey in global competition and a site for exercising individual freedom rather than a mechanism of social engineering. And Turkey’s heritage of Ottoman-Islamic civilization in particular is utilized as a nation branding technique to enhance Turkey’s economy, rather than a corrupt past to be forgotten. To achieve the aim of efficient, hence good, governance, the AKP’s cultural governance has relied heavily on privatization as a means to limit state intervention. Thus, privatization has not only transformed culture into an integral part of the free market, but also redefined the state’s role as a facilitator of the culture market, rather than the main provider of cultural services to the public.

The state’s withdrawal from cultural service and its prioritization of civil society to take on the initiatives of preserving and promoting Turkish “cultural values and traditional arts”[31] have the immediate effect of diminishing the authority of the Kemalist cultural establishment. Since many of the previously state-run cultural institutions are now managed with a corporate mentality, they begin to lose their status as state-centered institutions and the significance they once had in defining and maintaining a homogeneous Turkish culture. Instead, these institutions, together with other newly formed cultural sites and productions by private initiatives, are converted into a marketplace or cultural commodities in competition with each other. Hence, privatization of culture leads to the following consequences: First, it weakens and hollows out the 20th century notion of the modern secular nation state, which sets a clear boundary confining religion within the private sphere. Second, it gives way to the neoconservative force, which “models state authority on [religious] authority, a pastoral relation of the state to its flock, and a concern with unified rather than balanced or checked state power.”[32] Finally, it converts social issues that result from political actions into market terms and a sheer matter of culture, which is now left to personal choice.[33] As a result, far from a declining state, Ottomentality has constituted a strong state. In particular, neoliberal governance of the cultural field has enabled the ruling neoconservative government to mobilize a new set of political truths and norms for directing inter-ethnic and inter-religious relations in society.

New regime of truth

Central to Foucault’s notion of governmentality is “truth games”[34]—referring to the activities of knowledge production through which particular thoughts are rendered truthful and practices of government are made reasonable.[35] What Foucault calls the “regime of truth” is not concerned about facticity, but a coherent set of practices that connect different discourses and make sense of the political rationalities marking the “division between true and false.”[36] The neo-Ottoman cultural ensemble is a compelling case through which the AKP’s investment of thought, knowledge production, and truth telling can be observed. Two cases are particularly worth mentioning here as I work through the politics of truth in the AKP’s neoliberal governance of culture and neo-Ottoman management of diversity.

Between 2011 and 2014, the Turkish television historical drama Magnificent Century (Muhteşem Yüzyıl, Muhteşem hereafter), featuring the life of the Ottoman Sultan Süleyman, who is known for his legislative establishment in the 16th century Ottoman Empire, attracted wide viewership in Turkey and abroad, especially in the Balkans and the Middle East. Although the show played a significant role in generating international interest in Turkey’s tourism, cuisine, Ottoman-Islamic arts and history, etc. (which are the fundamental aims of the AKP-led national cultural policy to promote Turkey through arts and culture, including media export),[37] it received harsh criticism from some Ottoman(ist) historians and a warning from the RTUK (Radio and Television Supreme Council, a key institution of media censorship and regulation in Turkey). The criticism included the show’s misrepresentation of the Sultan as a hedonist and its harm to the moral and traditional values of society. Oktay Saral, an AKP deputy of Istanbul at the time, petitioned the parliament for a law to ban the show. He said, “[The] law would […] show filmmakers [media practitioners] how to conduct their work in compliance with Turkish family structure and moral values without humiliating Turkish youth and children.”[38] Recep Tayyip Erdoğan (Prime Minister then) also stated, “[those] who toy with these [traditional] values would be taught a lesson within the premises of law.”[39] After his statement, the show was removed from the in-flight channels of national flag carrier Turkish Airlines.

Another popular media production, the 2012 blockbuster The Conquest 1453 (Fetih 1453, Fetih hereafter), which was acclaimed for its success at the domestic and international box office, also generated mixed receptions among Turkish and foreign audiences. Some critics in Turkey and European Christians criticized the film for its selective interpretation of the Ottoman conquest of Constantinople and its offensive portrayal of the (Byzantine) Christians. The Greek weekly To Proto Thema denounced the film as “conquest propaganda by the Turks” that “[failed] to show the mass killings of Greeks and the plunder of the land by the Turks.”[40] A Turkish critic also commented that the film portrays the “extreme patriotism” in Turkey “without any hint of […] tolerance sprinkled throughout [the film].”[41] Furthermore, a German Christian association campaigned to boycott the film. The AKP officials, on the contrary, praised the film for its genuine representation of the conquest. As Bülent Arınç (Deputy Prime Minister then) stated, “This is truly the best film ever made in the past years.”[42] He also responded to questions regarding the film’s historical accuracy: “This is a film, not a documentary. The film in general fairly represents all the events that occurred during the conquest as the way we know it.”[43]

When Muhteşem and Fetih are examined within the larger context in which the neo-Ottoman cultural ensemble is formed, the connections between particular types of knowledge and governmental practice become apparent. First, the cases of Muhteşem and Fetih reveal the saturation of market rationality as the basis for a new model of cultural governance. When culture is administered in market terms, it becomes a commodity for sale and promotion as well as an indicator of a number of things for measuring the performance of cultural governance. When Turkey’s culture, in particular Ottoman-Islamic cultural heritage, is converted into an asset and national brand to advance the country in global competition, the reputation and capital it generates become indicators of Turkey’s economic development and progress. The overt emphasis on economic growth, according to Irving Kristol, is one of the distinctive features that differentiate the neoconservatives from their conservative predecessors. He suggests that, for the neoconservatives, economic growth is what gives “modern democracies their legitimacy and durability.”[44] In the Turkish context, the rising neoconservative power, which consisted of a group of Islamists and secular, liberal intellectuals and entrepreneurs (at least in the early years of the AKP’s rule), had consistently focused on boosting Turkey’s economy. For them, economic development seems to have become the appropriate way of making “conservative politics suitable to governing a modern democracy.”[45] Henceforth, such high-profile cultural productions as Muhteşem and Fetih are valuable assets that serve the primary aim of the AKP-led cultural policy because they contribute to growth in the related areas of tourism and the culture industry by promoting Turkey at the international level. Based on market rationality, as long as culture can generate productivity and profit, the government is doing a splendid job of governance. In other words, when neoliberal and neoconservative forces converge in the cultural domain, both culture and good governance are reduced to and measured by economic growth, which has become a synonym for democracy “equated with the existence of formal rights, especially private property rights; with the market; and with voting,” rather than political autonomy.[46]

Second, the AKP officials’ applause of Fetih on the one hand and criticism of Muhteşem on the other demonstrates their assertion of the moral-religious authority of the state. As the notion of nation state sovereignty has become weakened by the processes of economic liberalization and globalization, the boundary that separates religion and state has become blurred. As a result, religion becomes “de-privatized” and surges back into the public sphere.[47] This blurred boundary between religion and state has enabled the neoconservative AKP to establish links between religious authority and state authority as well as between religious truth and political truth.[48] These links are evident in the AKP officials’ various public statements declaring the government’s moral mission of sanitizing Turkish culture in accordance with Islamic and traditional values. For instance, as Erdoğan once reacted to his secular opponent’s comment about his interference in politics with religious views, “we [AKP] will raise a generation that is conservative and democratic and embraces the values and historical principles of its nation.”[49] According to his view, despite Muhteşem’s contribution of generating growth in industries of culture and tourism, it became subjected to censorship and legal action because its content did not comply with the governing authority’s moral mission. The controversy of Muhteşem illustrates the rise of a religion-based political truth in Turkey, which sees Islam as the main reference for directing society’s moral conduct and individual lifestyle. Henceforth, by rewarding desirable actions (i.e. with sponsorship law and tax incentives)[50] and punishing undesirable ones (i.e. through censorship, media ban, and jail term for media practitioners’ misconduct), the AKP-led reform of the cultural field constitutes a new type of political culture and truth—one that is based on moral-religious views rather than rational reasoning.

Moreover, the AKP officials’ support for Fetih reveals its endeavor in a neo-Ottomanist knowledge, which regards the 1453 Ottoman conquest of Constantinople as the foundation of modern liberal multiculturalism in Turkey. This knowledge perceives Islam as the centripetal force for enhancing social cohesion by transcending differences between faith and ethnic groups. It rejects candid and critical interpretations of history and insists on a singular view of Ottoman-Islamic pluralism and a pragmatic understanding of the relationship between religion and state.[51] It does not require historical accuracy since religious truth is cast as historical and political truth. For instance, a consistent, singular narrative of the conquest can be observed in such productions and sites as the Panorama 1453 History Museum, television series Fatih, and TRT children’s program Çınar. This narrative begins with Prophet Muhammad’s prophecy, which he received from the almighty Allah, that Constantinople would be conquered by a great Ottoman soldier. When history is narrated from a religious point of view, it becomes indisputable as it would imply challenge to religious truth, hence Allah’s will. Nonetheless, the neo-Ottomanist knowledge conceives the conquest as not only an Ottoman victory in the past, but an incontestable living truth in Turkey’s present. As Nevzat Bayhan, former general manager of Culture Inc. in association with the Istanbul Metropolitan Municipality (İBB Kültür A.Ş.), stated at the opening ceremony of Istanbul’s Panorama 1453 History Museum,

The conquest [of Istanbul] is not about taking over the city… but to make the city livable… and its populace happy. Today, Istanbul continues to present to the world as a place where Armenians, Syriacs, Kurds… Muslims, Jews, and Christians peacefully live together.[52]

Bayhan’s statement illustrates the significance of the 1453 conquest in the neo-Ottomanist knowledge because it marks the foundation of a culture of tolerance, diversity, and peaceful coexistence in Turkey. While the neo-Ottomanist knowledge may conveniently serve the branding purpose in the post-9/11 and ECoC contexts, I maintain that it more significantly rationalizes the governmental practices in reshaping the cultural conduct and multicultural relations in Turkey. The knowledge also produces a political norm of indifference—one that is reluctant to recognize ethno-religious differences among populace, uncritical of the limits of Islam-based toleration and multiculturalism, and more seriously, indifferent about state-sanctioned discrimination and violence against the ethno-religious minorities.

Ottomentality and its subject

The AKP’s practices of the government of culture and diversity constitute what Foucault calls the “technologies of the self—ways in which human beings come to understand and act upon themselves within certain regimes of authority and knowledge, and by means of certain techniques directed to self-improvement.”[53] The AKP’s neoliberal and neo-Ottoman rationalities share a similar aim as they both seek to produce a new ethical code of social conduct and transform Turkish society into a particular kind, which is economically liberal and culturally conservative. They deploy different means to direct the governed in certain ways so as to achieve the desired outcome. According to Foucault, the neoliberal style of government is based on the premise that “individuals should conduct their lives as an enterprise [and] should become entrepreneurs of themselves.”[54] Central to this style of government is the production of freedom—referring to the practices that are employed to produce the necessary condition for individuals to be free and take on the responsibility of caring for themselves. For instance, Nikolas Rose suggests that consumption, a form of governing technology, is often deployed to provide individuals with a variety of choices for exercising freedom and self-improvement. As such, the subject citizens are now “active,” or “consumer,” citizens, who understand their relationship with others and conduct their lives based on market mentality.[55] Unlike republican citizens, whose rights, duties, and obligations are primarily bound to the state, citizens as consumers “[are] to enact [their] democratic obligations as a form of consumption”[56] in the private sphere of the market.

The AKP’s neoliberal governance of culture hence has invested in liberalizing the cultural field by transforming it into a marketplace in order to create such a condition wherein citizens can enact their right to freedom and act upon themselves as a form of investment. The proliferation of the neo-Ottoman cultural ensemble in this regard can be understood as a new technology of the self as it creates a whole new field for the consumer citizens to exercise their freedom of choice (of identity, taste, and lifestyle) by providing them a variety of trendy Ottoman-themed cultural products, ranging from fashion to entertainment. This ensemble also constitutes a whole new imagery of the Ottoman legacy with which the consumer citizens may identify. Therefore, through participation within the cultural field, as artists, media practitioners, intellectuals, sponsors, or consumers, citizens are encouraged to think of themselves as free agents and their actions are a means for acquiring the necessary cultural capital to become cultivated and competent actors in the competitive market. This new technology of the self also has transformed the republican notion of Turkish citizenship to one that is activated upon individuals’ freedom of choice through cultural consumption at the marketplace.

Furthermore, as market mechanisms enhance the promulgation of moral-religious values, the consumer citizens are also offered a choice of identity as virtuous citizens, who should conduct their life and their relationship with the others based on Islamic traditions and values. Again, the public debate over the portrayal of the revered Sultan Süleyman as a hedonist in Muhteşem and the legal actions against the television producer, are exemplary of the disciplinary techniques for shaping individuals’ behaviors in line with conservative values. While consumer citizens exercise their freedom through cultural consumption, they are also reminded of their responsibility to preserve traditional moral value, family structure, and gender relations. Those who deviate from the norm are subjected to public condemnation and punishment.

Finally, as the neo-Ottomanist cultural ensemble reproduces and mediates a neo-Ottomanist knowledge in such commodities as the film Fetih and the Panorama 1453 History Museum, consumer citizens are exposed to a new set of symbolic meanings of Ottoman-Islamic toleration, pluralism, and peaceful coexistence, albeit through a view of the Ottoman past fixated on its magnificence rather than its monstrosity.[57] This knowledge sets the ethical code for private citizens to think of themselves in relation to the other ethno-religious groups based on a hierarchical social order, which subordinates minorities to the rule of Sunni Islamic government. When this imagery of magnificence serves as the central component in nation branding, such as to align Turkey with the civilization of peace and co-existence in the post-9/11 and ECoC contexts, it encourages citizens to take pride in and identify with their Ottoman-Islamic heritage. As such, Turkey’s nation branding perhaps can also be considered a novel technology of the self as it requires citizens, be they business sectors, historians, or filmmakers, to take on an active role in building an image of a tolerant and multicultural Turkey through arts and culture. It is in this regard that I consider the neo-Ottoman rationality a form of “indirect rule of diversity”[58] as it produces a citizenry who actively participate in the reproduction of neo-Ottomanist historiography and continue to remain uncritical about the “dark legacy of the Ottoman past.”[59] Consequently, Ottomentality has produced a type of subject that is constantly subjected to dividing techniques “that will divide populations and exclude certain categories from the status of the autonomous and rational person.”[60]

2016-10-5-1475705338

Gender Equality And Visual Activism

Gender equality means that men and women are equal to one another and have the same opportunities and rights; in practice, however, fairness for women is not as easily achieved as it seems. It is important to elaborate on the effects of visual activism over the past century when referring specifically to gender equality: how it has been portrayed in photographs and how such inequality is reflected through visual activism. Being a woman in today’s society can often still be viewed, and misinterpreted, as being less important than being a man. However, this is a stark improvement on the era of the 1950s-1970s, when gender inequality was far more dominant. It could further be argued that during this period and – to a considerable degree – in today’s society, gender roles do contribute in part to how individuals judge and treat one another. However, it is argued more forcefully in the current social climate that this should not be the case and that women should be seen as equal to men. This chapter aims to focus upon the sufferings that women have endured from past to present, reflecting on visual activism as a contributing factor in both the suffering and the movement towards equality that is evidenced over this period.

Being a part of such a huge movement – whether by taking part in the activism or by documenting it – is important, as it enables social movement and gives people something to look back on. Angela Davis (undated) states, when talking about the Civil Rights Movement:

“I think the importance of doing activist work is precisely because it allows you to give back and to consider yourself not as a single individual who may have achieved whatever but to be a part of an ongoing historical movement.”

– Angela Davis, undated

Angela Davis was talking about the Civil Rights Movement in this statement; however, it applies to every movement. The women who were a part of the second-wave feminism movement were not just individuals but were fighting for an ongoing historical movement, documented fighting for equality for all women across the world. This documentation is used as a visual tool for the campaign, allowing people to look back on the changes that were made.

Much to many people’s surprise, Martin Luther King Jr’s speech was inspired by a woman: her name was Mahalia Jackson. This suggests that without Mahalia, the famous ‘I Have a Dream…’ speech might never have occurred. Mahalia was a ‘soundtrack’ for the civil rights movement, as she accompanied Martin Luther King Jr to most of his rallies, including the March on Washington in 1963, where she inspired him to deliver the speech. It was Mahalia who called out: “Tell them about the dream, Martin! Tell them about the dream!” (Jackson, 1963). Arguably, without the influence and inspiration of a woman, one of the most influential speeches at an activist march in history would never have been delivered.

The women’s liberation movement, also known as ‘second-wave feminism’, emerged in the 1950s-1960s out of the civil rights movement. Martin Luther King Jr’s speech acted as a form of motivation for those fighting for gender equality in American society, just as black Americans were fighting for racial equality. The Civil Rights Act of 1964 covers not only race and colour but also discrimination on the basis of sex, under Title VII. The many activist protests over racial inequality in America in the 1950s-60s motivated and encouraged many white middle-class women to create their own movement for women’s equal rights, thus commencing the second-wave movement. An advertisement of the stereotypical ‘housewife’ by ‘Tide’ laundry detergent (1955), which supports the 1950s ideology of domesticity, emphasises the reasons for beginning the second-wave movement.

Betty Friedan published a book called The Feminine Mystique (Friedan, 1963). The book became a number-one best seller, focusing on how women were stripped of their independence and individuality and how housewives lived an anxious suburban life that Friedan described as “the problem that has no name” (Friedan, 1963). As a consequence, the book raised awareness of this taboo topic. Women were living very domestic suburban lifestyles, stereotyping was normal procedure, and women were viewed, quite innocently, as the caring mother and housewife. Arguably, this did not cause mass concern in an era when women knew nothing different. The media also played a weighty role in the way women were portrayed. An example of this is the ‘That’s what wives are for! Kenwood Chef, 1961’ advert. Advertisements such as these portrayed an apparent superiority in gender roles, suggesting that women were there to please their husbands. In the advert the roles of both genders are clearly specified: further evidence that the 1960s was a male-dominated society. The media’s interpretation of women was that they were subordinate to men, further emphasising the frequent stereotype. As a result, second-wave feminism began to act in opposition to this view of women’s roles, and the fight commenced, leading to further protests and improvements in women’s equality. For example, Eleanor Roosevelt was appointed by John F. Kennedy in 1961 to chair the Presidential Commission on the Status of Women (PCSW), in order to explore issues relating to women and make proposals on areas including employment and education, where women were discriminated against because of their sex.

The mass media played a massive role in shaping how women came to have equal rights. The media exploited women in their representation, but in doing so also highlighted and further exposed the problems associated with degrading and representing women in such a way. Media, therefore, is used as a tool for visual activism and for social movement. As an example of how stereotyping and gender equality have changed over the past century, it has been found that of 450,000 professional chefs and head cooks, 21.4% are female and 78.6% are male (Rocheleau, 2017), suggesting that the media and visual activism have helped change the stereotype, as now more men are seen to be the ones who ‘cook’.

In 1968, the year the NAACP (National Association for the Advancement of Colored People) wanted to break the norms of the white beauty queen with a black queen, the Women’s Liberation Movement was picking its next target: ‘The Miss America Pageant’. On September 7th 1968, four hundred women activists assembled outside the Atlantic City Convention Centre, New Jersey, where the Miss America Pageant was taking place, demonstrating their rage towards the show and everything it stood for. The women taking part in the protest felt that the pageants were interested above all in a woman’s physical appearance, and they held signs that read “Everyone is Beautiful” and “Who Dares to Judge Beauty”. However, one of the most memorable photographs was “Members of the National Women’s Liberation Party protested the Miss America Pageant in Atlantic City, N.J. on Sept. 7, 1968” (photographer unknown).

Figure 1 – Members of the National Women’s Liberation Party protested the Miss America Pageant in Atlantic City, N.J. on Sept. 7, 1968, photographer unknown.

Within this photograph, two activists are holding two signs side by side, one saying ‘Welcome to the Miss America Cattle Auction’ and the second showing a photograph of a woman from behind, ‘cut up’ into sections as an animal would be, each section labelled as a cut of meat: ‘rib’, ‘rump’, ‘loin’, etc. Next to the woman in the photograph reads the quote ‘Break the Dull Steak Habit’, emphasising the misogyny of the image and implying that women within the Miss America Pageant are treated as pieces of meat, judged on their appearance and quality. Many would argue that this suggests women and animals are treated alike, whereas men are superior. The poster on the right is by ‘Cattle Baron 1968’. This photograph documents an activist protest that labelled the pageant a ‘sexist cattle auction’, where women walk around in swimming costumes and are judged by men on their physical appearance, just as a cow, sheep or pig would be at a cattle auction. To accentuate the animal aspect, the activists crowned a live sheep as ‘Miss America’. Other signs read ‘Racism is Roses’, suggesting that the activists believed the pageant had racist beauty standards, having never crowned anyone other than white women, showing that women were treated unequally, and black women even more so. The protest was described by Sheila Rowbotham in her 1992 book Women in Movement as a:

“rebellion of a radicalised student generation against a manufactured and commercialised ideal of female beauty, in a land that specialised in making dreams into images on a mass scale, received a glare of publicity”

– Rowbotham, 1992

This event marked the end of the dismissal of rights movements and made women’s liberation, equality and beauty norms topics of worldwide discussion, supporting the idea that visual activism, in the form of photography, is and has been for many years a tool for advancing society’s views of women. Arguably, there was still a long way to go at this point, but this was a progressive step forward in gender equality. The protest did not alter the nature of the Miss America Pageant itself; on the other hand, it forced feminist concerns about beauty norms and inequality into humanity’s consciousness and has allowed beauty standards to become less restricting in the modern day. Considering this, the movement and activism, visual or not, has not completely removed the pressure to look a certain way in order to ‘fit in’.

Images of women used to be tailored specifically for the eyes of men, also known as the male gaze. The photographs which the media distributed showed women with long, slender, thin legs, thin thighs, perfectly round breasts and flawless skin. This beauty myth evolved from ‘Barbie’: the unrealistic measurements of her body, the tiny waist, large breasts and skinny legs, were what society was trying to achieve. Women looked up to the idea of the ‘beauty myth’ (Naomi Wolf, The Beauty Myth, 1990), something which was and still is verging on impossible to achieve but desired by many. The idea of women’s liberation during this time could arguably be seen as non-existent. The beauty myth was brought to our attention by Naomi Wolf (1990), who discusses how women conform to the physical constraints of trying to look a certain way due to the pressure of the mass media and the desire to impress the male figure. However, Wolf discusses how this ‘beauty ideal’ is influenced by the media when she states:

“What editors are obliged to say that men want from women is actually what their advertisers want from women.”

– Naomi Wolf, The Beauty Myth, page 23

This means that men do not actually have any fixed idea of what the ‘ideal’ woman is; the advertisers, through magazines and similar outlets, tell them what it should be. This demonstrates that visual activism (such as advertisements and magazines) can also have a negative impact on society, and therefore does not always allow for the right social change.

Many girls in the 21st century still feel pressure to conform to this norm and to impress men; this will be discussed further in the third chapter.

In the 1960s-1970s gender roles and norms were socially enforced by all. Women were seen as the ‘housewife’ and carer for both their husband and their children, and only 27% of women at this stage were working outside the house (U.S. Department of Labor, undated). Many activists were fighting for equality in all areas, including the workplace, the legalisation of abortion, equal pay and many others. These protests were seen to be transformative, and the 1970s was a time when change for women and the women’s liberation movement was at its height. The gender pay gap had been a problem since the end of the 19th century: the women who were not at home caring for the children and doing house chores were working, yet doing the same jobs as men for far less pay, signifying the clearly unequal treatment of men and women.

However, on August 26th 1970, the 50-year anniversary of the passage of the 19th Amendment, which granted women the right to vote, 50,000 women and feminists marched, arms linked, down New York City’s Fifth Avenue during rush hour. It was another activist protest, of which Time magazine wrote, “No one knows how many shirts lay wrinkling in laundry baskets last week as thousands of women across the country turned out for the first big demonstration of the Women’s Liberation movement.” (Time, 1970). Betty Friedan, then president of the National Organization for Women (NOW), asked all women to stop working for the day to bring awareness to the problem of unequal pay in the workplace, and led the famous Women’s Strike for Equality in 1970, the largest rally since the suffrage protests. Whilst the women were chanting, shouting, waving flags and holding signs with slogans such as “Don’t Iron While the Strike is Hot!” (Time, 1970), Eugene Gordon took one of the most iconic photographs of the Equality protest, which can be used as a visual campaign tool allowing for social change. The photograph is entitled Women Strike for Peace at the Women’s Strike for Equality Demonstration in New York, 1970 (Gordon).

Figure 2 – Eugene Gordon, Women Strike for Peace at the Women’s Strike for Equality Demonstration in New York, 1970.

Within this photograph women of all ages and races strike and campaign for equal rights, arms linked, parading down Fifth Avenue, New York, in 1970. The protest took place whilst the Vietnam War was happening, when women in the services were being subjected to stereotypical standards. The aim of this protest was to be a voice for all women all over the world: everyone had an equal right to be heard and should not be treated differently due to sexual orientation, gender or race. The most powerful message, ‘Women Strike for Peace - and Equality!’, is held high in the air, impossible for the viewer to miss. The women within this photograph serve as a powerful symbol of the Women’s Liberation Movement. On the left-hand side, towards the back of the photograph, is a man, demonstrating that not only were women protesting for equality for all but men were protesting right there with them; women protesting on this scale is a powerful and courageous act, but men protesting as well sends an even more powerful message demanding freedom and equality for women. The stern, intense looks spread across their faces show the world they are not messing around and that they mean business. As Betty Friedan stated in her speech on the day of the strike, “Today has called this strike to confront the unfinished business of our equality” (Friedan, 1970); the women and men within this photograph embody this speech and challenge anyone who believes that women are not equal to men.

This photograph, along with many others by Gordon and other photographers, is a work of visual activism. Such photographs helped women gain more equality and made the feminist liberation movement visible to everyone. The proportion of women working outside the home rose hugely, from 27% in 1960 to 54% in 1980 and then to 70% in 2012 (according to the U.S. Department of Labor, undated). The U.S. Supreme Court also held that a working environment can be declared hostile or abusive due to discrimination based on a person’s sex, which is helpful in sexual harassment cases. Distributing photographs such as these can therefore be used as a visual campaign tool to show the world what unlawful acts are taking place and to allow women to be seen as more equal. It has also led to further protests such as the International Women’s Day Coalition march of March 8th 1975, in protest for equal rights; people all over the world still march each March 8th in the 21st century to protest for women’s fairness and equality. However, even after the Women’s Strike for Equality in 1970, the mass media were still distributing sexist advertisements about women being the ‘housewife’, showing signs of little or no change. An example of this is an advertisement in the Chicago Tribune Magazine of May 13th, 1973, advertising ovens: it shows a woman walking around her kitchen next to a new, sparkling oven while the guests and her husband (who is sat at the head of the table) wait for her to serve them, demonstrating that the woman is still seen as the housewife and homemaker.

The gender pay gap and workplace discrimination are still evident in the modern working environment. By no means is the situation as bad as it was, but it is not as good as today’s systems would lead you to believe; this will be discussed further in the third chapter.

Chapter 2 – Racial Equality in the 20th century

In America and the rest of the world, racial inequality has always been visible; the 1950s was a time when the segregation of black members of society was at its height. Within this chapter I will be discussing how visual activism and photography played a role in the racial social movement in America in the 1950s and 1960s, leading to a much fairer and more equal society for all citizens.

“I have a dream that my four children will one day live in a nation where they will not be judged by the colour of their skin but by the content of their character. I have a dream…”

– Martin Luther King, Jr, 1963, ‘I Have a Dream…’ speech at the March on Washington, 1963

On August 28th 1963 these famous words were spoken by the influential Martin Luther King, Jr. to 200,000 civil rights supporters on the steps of the Lincoln Memorial during the March on Washington. Many of the activist’s words voiced the racial segregation and inequality African Americans were experiencing; he was an untiring campaigner for civil and economic rights for all African Americans, and his passionate speech, delivered in the hope of a brighter and less segregated future for all black people, became a defining moment of the Civil Rights Movement. An example of change driven by King’s activism was the Montgomery, Alabama boycott of the city buses, where King’s speeches drew a great deal of attention and rallied supporters, eventually leading the bus companies in the South to examine their rules and regulations and, in due course, change them, slowly integrating black and white passengers. Many photographers from this era captured inspiring photographs to support Martin Luther King Jr’s cause, including Don Cravens’ Black Residents Walking, Montgomery Bus Boycott, 1955. Documenting these peaceful protests was a way to spread the famous words of Martin Luther King: not only a way to look back on the changes that occurred, but also a visual tool for the campaign. Raiford (2011) supports the idea that photography shifted the balance in the civil rights era:

“Images of Alabama bus stations each reveal how vulnerable African Americans were when demonstrating for the most basic and fundamental of rights. They laid bare to nonblack audiences what African Americans of the Jim Crow era had long known, seen, and experienced. With bright enough lights and an army of cameras trained in the right direction, images were central to changing public opinion about the violent entrenchment of white supremacy in the South and that system’s overdetermination of black life and possibility. The visual proved a tool as effective as bus boycotts and as righteous as nonviolence.”

– Leigh Raiford, 2011

During the 1950s, America experienced an era of intense conflict and a deep divide between two races, with segregation of black citizens at its height. Even though unequal acts were taking place every day, the battle for equality had been going on for centuries; before the 1950s, however, not much change had been made and the black community still lived in fear of white Americans. The need for change came about when segregation, accepted as the norm in everyday life, provoked the idea of breaking the prevailing pattern of racial inequality. Examples of racial inequality included voting, education and the use of public facilities, as most white citizens believed that the black community was inferior in every possible way; they were seen as second-class citizens. The North and the South of America almost lived in two completely different worlds: the South was very much governed by the Jim Crow laws, which meant there was to be no integration of black and white people.

Gordon Parks was an inspirational photographer who exposed to white America the injustice faced by black people governed under the Jim Crow laws. His photograph ‘Outside Looking In, Mobile, Alabama, 1956’ demonstrates the injustice and social discrimination the black community faced daily: within this photograph six black children gaze into the distance at a whites-only playground in which they are not allowed to set foot. Parks’ work is monumental in that it documents and explains the most significant phases of American culture in the 1950s, and it demonstrates that photography is a form of visual activism that in some sense allows for social movement, due to the shock these photographs caused the world. This is echoed by W. Eugene Smith when he states:

“Photography is a small voice, at best, but sometimes one photograph, or a group of them, can lure our senses into awareness.”

– W. Eugene Smith, undated.

This means that the capture of one important act through visual activism, photography, can raise awareness for all, demonstrating that visual activism really is a tool for social movement.

The Jim Crow laws, dating from 1865, were an assembly of state and local laws that legalised the racial segregation that was so apparent in this era (History.com editors, February 2018). The Jim Crow laws meant that white and black people had to live separately. The main laws the black community were required to follow concerned public schools and public facilities, meaning that water fountains, toilets and all forms of public transportation were to be separate. The black community were only allowed to use facilities marked ‘colored only’. An example of this is a documentary photograph taken by Elliott Erwitt, named ‘Segregated Water Fountains, 1950’.

Figure 1 – Elliott Erwitt, Segregated Water Fountains, 1950.

This powerful photograph is a form of visual activism. It demonstrates to humanity the inequality that was taking place in America, predominantly in the 1950s. The image appears to tell a story and speaks volumes about the injustice of how society followed and allowed rules to dictate the way two communities lived their lives. At the time this photograph was taken, it could be suggested that change was paramount to the equality movement. Analysing this photograph in the 21st century, individuals are able to reflect on how much has changed, thanks in part to the circulation of visual activism photographs such as this. The viewer does not need any background information; the photograph presents itself in a variety of ways to different individuals. On the left side is a much cleaner, more luxurious water fountain; from behind it comes a rusted, old, corroding pipe that connects to a dirty, basin-style water fountain on the right-hand side, just a couple of feet away. The luxurious water fountain is positioned slightly higher than the dirty, rundown one, explicitly suggesting the authority and class claimed by white people, and linking back to Martin Luther King Jr’s call for change and his idea that people should be ‘judged by the content of their character and not the colour of their skin’; white and black people should receive equal treatment and both be able to drink from the more luxurious fountain. Above the two water fountains are two signs that state ‘white’ and ‘colored’, the white sign positioned slightly higher and larger than the ‘colored’ sign. The black man drinking from the fountain is slightly blurred but appears to be glancing at the fountain to his left; this could also be interpreted as a glance to see who is watching him, suggesting an apparent fear arising from his status. It could be argued that this grainy, black and white photograph almost tells the emotions of the man within it: a dull, grey image representing the feelings of a man having to drink from a corroding water fountain while the white community indulges in a facility of much higher standard.

This visual activism photograph is a clear reminder for viewers of America’s segregated past, without using any words. Photographs such as these are a tool for social movement within humanity; visual activism here has, to a degree, contributed to the change in equality for the black community and to the prospect of an equal life in the future. In his ‘I Have a Dream…’ speech in 1963 Martin Luther King stated, “With this faith we will be able to work together, to pray together, to struggle together, to go to jail together, to stand up for freedom together, knowing that we will be free one day.” Documenting photographs such as this one by Elliott Erwitt has allowed for social movement and has allowed the idea of ‘being free’, and Martin Luther King Jr’s dream, to become real.

Not only were the black community segregated in everyday life, they were also isolated in other areas of society, including the right to an equal education. Racial discrimination provided a means of preserving the economic rewards and superior social position of the dominant white population, which included the chance of a first-class education compared with what the black population were receiving at this time. However, in 1954 the Supreme Court set a precedent in Brown v. Board of Education, ruling that segregation of schools was unconstitutional and that the authorities had violated their duty to provide fair and equal education for the African-American community. This was shown when the Supreme Court stated:

“To a large extent, teachers who have had extensive experience and educational opportunities are concentrated in middle class white and Asian schools, which increases inequality, placing less experienced teachers and many teachers of color in the schools that need highly experienced teachers the most, and denying white and Asian students the opportunity to learn from a truly diverse faculty”.

– Earl Warren, 1954, Brown vs Board of Education case.

This case marked the first success for the African-American community in terms of equality and started the 12-year-long Civil Rights Movement, which many across the country greeted with delight; the same could not be said, however, of the ‘Deep South’, which included states such as Alabama that strictly followed the Jim Crow laws. The Brown v. Board of Education ruling meant that black students were allowed to attend public schools with white students; however, many states disregarded the law. A very famous case was the Little Rock Nine in 1957. After pressure from both the Brown ruling and the National Association for the Advancement of Colored People (NAACP), Little Rock Central High School, in Little Rock, Arkansas, implemented a plan to slowly integrate black students into the school. Nine African-American students were specifically chosen to be integrated into the school; they became known as the Little Rock Nine. They received counselling beforehand to prepare them for the aggressive and racist behaviour they would encounter from the white community. The Little Rock Nine, including Elizabeth Eckford, attempted to attend school for the first time on September 4th 1957, but never actually made it into the school.

A very famous civil rights photograph, ‘Elizabeth Eckford and Hazel Bryan’ by Will Counts, was of Eckford, who had arrived at the school first.

Figure 2 – Will Counts, Elizabeth Eckford and Hazel Bryan, 1957.

Within the photograph Eckford, wearing a skirt she had made especially for her first day of school, walks ahead of a mob of white girls, boys and guards; crowds of people follow her, screaming abuse and taunting her. One girl in particular, Hazel Bryan, Elizabeth’s tormentor, stands directly behind her in a light-coloured dress. She is the main focus of the photograph, her face full of hate and poison as she screams insults; Bryan was the epitome of the Jim Crow laws.

The crowds marching alongside ringleader Hazel called out for Eckford to be lynched and screamed chants such as “Two, four, six, eight, we ain’t gonna integrate!” (Debenport, 1982), while mothers shouted to their children “Don’t stay in there with those ni**ers!” (Debenport, 1982); at one point Eckford turned around and an elderly woman spat in her face. Counts exceptionally captured the storm of the Jim Crow South. Even whilst walking alone in a crowd of white people, Eckford retained her dignity and pride and continued walking to gain entry to Little Rock Central High; Bryan fails to get a reaction from her.

Journalists photographed the abuse the Little Rock Nine received and the pure hatred the white community threw their way, especially at Eckford. The recording of this event can be argued to be one of the biggest causes of change for racism and social integration, and it changed the fight for integration and desegregation forever. It shows that visual activism, in the form of photography, is a tool for social change; it allowed for a less segregated future for all black people and was truly a turning point for African-Americans within the civil rights era. The photograph demonstrates the power of a peaceful activist protest and is, in turn, a visual metaphor for the segregation and isolation between the black and white communities of the time.

However, it did not stop there. The Little Rock Nine continued to receive abuse throughout the school year and were treated very poorly. After just one year of integration, the governor of Arkansas not only stopped the integration of schools but closed all of Little Rock’s public high schools. He stated it was ‘better to have no schools at all than to have integrated schools’. The schools were closed for a year and reopened the following year; people blamed the Little Rock Nine for the closures, and racial tension worsened in the coming years.

On August 20th 1959 the Little Rock Rally was held at the State Capitol. This protest saw part of the white population demonstrating against the admission of the Little Rock Nine and against the loss of schooling they blamed on these nine African-Americans, while the black community continued to receive an unequal education. A photograph that shows the racial isolation of African-Americans is one by John T. Bledsoe, entitled ‘Little Rock, 1959. Rally at State Capitol’.

Figure 3 – John T. Bledsoe, Little Rock, Rally at State Capitol, 1959.

The photograph shows a large group of white people, mainly parents of Little Rock Central High students, holding aggressive signs and American flags in protest against the integration of the Little Rock Nine. One point that cannot be avoided within this photograph is that there is not one black person, and most of the people in it are white males; not only was the protest white-dominated, but men were very much in charge, over white women and over the whole of the black population. The American flags are held aloft to represent what they believe America is: a society in which black and white are separated. The signs reading ‘Race Mixing is Communism’ are not only about the distribution of many types of privilege but mainly about white privilege; the white supremacists within this protest did not get their way, however, and the dispute only led to all the schools being shut down. The racial inequality exhibited by the white population of the American South shocked the world.

Photographs such as these from the 1957-1959 Little Rock era will never be ignored globally, because they record such a disgrace; the photograph of Elizabeth and Hazel is a drama that will never end, the two forced to be tied together forever by the misfortune suffered not just by one girl but by every African-American in the American South, the divide of racial America symbolised by visual activist photographs. Hazel Bryan received the wrong sort of fame, shocking the world in the newspapers in the following days, and to this day, even after many years of showing she has changed, she is still labelled a racist and known as the girl who shamed America. This was all due to Counts capturing the hate and poison spread across her face, showing that one single photograph can be used as a tool for social movement. The image also spread Martin Luther King’s repeated message to “rise from the dark and desolate valley of segregation to the sunlit path of racial justice” (King, 1963): justice was required to grow from the cruel segregation of 1950s-1960s America so that all Americans, not just the white population, could be equal, which in turn occurred partly because of the vast circulation of these photographs of racial isolation in the media.


Gender and Caste – The Cry for Identity of Women

INTRODUCTION

‘Bodies are not just biological phenomena but a complex social creation onto which meanings have been variously composed and imposed according to time and space.’ These social creations differentiate the two biological personalities into Man and Woman, and meanings are imposed on their qualities on the basis of gender, which defines them as He and She.

The question then arises: who is a woman? According to me, a woman is one who is empowered, enlightened, enthusiastic and energetic. A woman is all about sharing. She is an exceptional personality who encourages and embraces. If a woman is considered a mark of patience and courage, then why, even today, is there a lack of identity in her personality? She is subordinated to man and often discriminated against on the basis of gender.

The entire life of a woman revolves around patriarchal existence: she is dominated by her father in childhood, by her husband in the next phase of her life, and by her son in the later phase, which leaves no space for her own independence.

The psychological and physical identity of a woman is defined through the role and control of men: the terrible triad of father-husband-son. The boundary of women’s lives is always restrained by male dominance. Gender discrimination is not only a historical concept; it still exists in contemporary Indian society.

Every part of Indian society experiences this ferocious gender conflict, which is projected every day in the newspapers, on news channels, or even while walking the streets. The horror of patriarchal domination exists in every corner of Indian society. The role of Indian women has been declining over the centuries.

Turning the pages of history, in pre-Aryan India God was female and life was represented in the form of mother Earth. People worshipped the mother Goddess as a symbol of fertility. The Shakti cult of Hinduism sees women as the source and embodiment of cosmic power and energy. Woman power can also be seen in Goddess Durga, who lured her husband Shiva from asceticism.

The religious and social condition changed abruptly when the Aryan Brahmins eliminated the Shakti cult and power was placed in the hands of men. They considered the male deities the husbands of the female goddesses, placing dominance in the hands of the male. Marriage involved male control over female sexuality. Even the identity of the mother goddess was dominated by the male gods. As Mrinal Pande writes, ‘to control women, it becomes necessary to control the womb and so Hinduism, Judaism, Islam and Christianity have all stipulated, at one time or another, that the whole area of reproductive activity must be firmly monitored by law and lawmakers’.

The issue of identity crisis for a woman

The identity of a woman is erased as she becomes a mere reproductive machine ruled and dominated by male laws. From the time she is born she is taught that one day she has to get married and go to her husband’s house. Thus she belongs neither to her own house nor to her husband’s house, leaving a mark on her identity. The Vedic times, however, proved to be a boon in the lives of women, as they enjoyed freedom of choice with respect to husbands and could marry at a mature age. Widows could remarry and women could divorce.

The segregation of women continued to raise the same question of identity: the Chandogya Upanishad, a religious text of the pre-Buddhist era, contains a prayer of spiritual aspirants which says, ‘May I never, ever, enter that reddish, white, toothless, slippery and slimy yoni of the woman’. During this time, control over women included seclusion and exclusion, and they were even denied education. Women and shudras were treated as the minority class in society. Rights and privileges given to women were cancelled and girls were married at a very early age. Caste structure also played a great role, as women were now discriminated against within their own caste on the basis of gender.

According to Liddle, women were controlled in two ways: first, they were disinherited from ancestral property and the economy and were expected to remain within the domestic sphere, known as purdah. The second aspect was the control of men over female sexuality. The death rituals of family members were performed by the sons, and no daughter had the right to light her parents’ funeral pyre.

A stifling patriarchal shadow hangs over the lives of women throughout India. From all regions, castes and classes of society, women are casualties of its oppressive, controlling effects. Those subjected to the heaviest weight of discrimination are from the Dalit or “Scheduled Castes”, referred to in less liberal democratic times as the “Untouchables”. The name may have been banned, but pervasive negative attitudes of mind remain, as do the staggering levels of abuse and subjugation experienced by Dalit women. They encounter multiple levels of discrimination and exploitation, much of which is primitive, degrading, horrifyingly violent and utterly callous. The divisive caste system in operation throughout India, “Old” and “New”, together with prejudiced gender attitudes, sits at the heart of the colossal human rights abuses experienced by Dalit or “outcaste” women.

The lower castes are isolated from other members of the community: forbidden from eating with “higher” castes, from using village wells and ponds, from entering village temples and higher-caste houses, from wearing shoes or even holding umbrellas before the higher castes; they are forced to sit separately and use different crockery in restaurants, barred from riding a bicycle inside their village, and made to bury their dead in a separate cemetery. They frequently face eviction from their land by higher “dominant” castes, forcing them to live on the edges of villages, often on barren land.

This plethora of prejudice amounts to politically sanctioned apartheid, and the time has come, long past due, for the “democratic” government of India to enforce existing legislation and cleanse the country of the criminality of caste- and gender-based discrimination and abuse.

The power play of patriarchy soaks every area of Indian society and gives rise to an assortment of unjust practices, for example female infanticide, discrimination against girls and dowry-related deaths. It is a major cause of the exploitation and abuse of women, with much sexual violence being perpetrated by men in positions of power. These range from higher-caste men violating lower-caste women, particularly Dalits; policemen abusing women from poor households; and military men abusing Dalit and Adivasi women in insurgency states such as Kashmir, Chhattisgarh, Jharkhand, Orissa and Manipur. Security personnel are protected by the widely condemned Armed Forces Special Powers Act, which grants impunity to police and members of the military carrying out criminal acts of rape and indeed murder; it was proclaimed by the British in 1942 as an emergency measure to suppress the Quit India Movement. It is an unjust law which needs repealing.

In December 2012 the appalling gang rape and mutilation of a 23-year-old paramedical student in New Delhi, who subsequently died from her wounds, gathered worldwide media attention, putting a brief spotlight on the dangers, persecution and shocking treatment women in India face every day. Rape is endemic in the country. With most instances of rape going unreported and many being dismissed by police, the true figure could be ten times this. The women most at risk of abuse are Dalit: the NCRB estimates that more than four Dalit women are raped every day in India. A UN study reveals that “the majority of Dalit women report having faced one or more incidents of verbal abuse (62.4 per cent), physical assault (54.8 per cent), sexual harassment and assault (46.8 per cent), domestic violence (43.0 per cent) and rape (23.2 per cent)”. They are subjected to “rape, assault, kidnapping, abduction, homicide, physical and mental torture, immoral trafficking and sexual abuse.”

The UN found that large numbers were deterred from seeking justice: in 17 per cent of instances of violence (including rape) victims were blocked from reporting the crime by the police; in more than 25 per cent of cases the community stopped women filing complaints; and in more than 40 per cent women “did not attempt to get legal or community remedies for the violence primarily out of fear of the perpetrators or social dishonour if (sexual) violence was revealed”. In just 1 per cent of recorded cases were the perpetrators convicted. What “follows incidents of violence”, the UN found, is “a resounding silence”. The effect with regard to Dalit women particularly, though not exclusively, “is the creation and maintenance of a culture of violence, silence and impunity”.

Class discrimination faced by women in contemporary times

The Indian constitution sets out the “principle of non-discrimination on the basis of caste or gender”. It guarantees the “right to life and to security of life”. Article 46 specifically “protects Dalits from social injustice and all forms of exploitation”. Add to this the important Scheduled Castes and Tribes (Prevention of Atrocities) Act of 1989, and a well-equipped legislative armoury is formed. However, because of “low levels of implementation”, the UN states, “the provisions that protect women’s rights must be viewed as empty of meaning”. It is a familiar Indian story: judicial indifference (plus cost, lack of access to legal representation, interminable red tape and obstructive staff), police corruption and government complicity, together with media indifference, constitute the major obstacles to justice and to the observance and enforcement of the law.

Unlike middle-class girls, Dalit rape victims (whose numbers are growing) rarely receive the attention of the caste- and class-conscious, urban-centric media, whose primary concern is to promote a glossy, Bollywood, open-for-business image of the country.

A 20-year-old Dalit woman from the Santali tribal group in West Bengal was gang raped, reportedly “on the orders of village elders who objected to her relationship (which had been going on in secret for a long time) with a man from a neighbouring village in the Birbhum district”. The brutal incident happened when, according to a BBC report, the man came to the woman’s home with a proposal of marriage; villagers spotted him and organised a kangaroo court. During the “proceedings” the couple were made to sit while the headman of the woman’s village fined them 25,000 rupees (400 US dollars; GBP 240) for “the crime of falling in love”. The man paid, but the woman’s family were unable to pay. Subsequently, the “headman” and 12 of his companions repeatedly raped her. Violence, abuse and exclusion are used to keep Dalit women in a position of subordination and to maintain the patriarchal grip on power throughout Indian society.

The urban areas are unsafe places for women, yet it is in the countryside, where most people live (70 per cent), that the worst levels of abuse occur. Many living in rural areas live in extreme poverty (800 million people in India live on under 2.50 dollars a day), with practically no access to healthcare, poor education and appalling or non-existent sanitation. It is a world apart from democratic Delhi or Westernised Mumbai: water, electricity, democracy and the rule of law are yet to reach into the lives of the women in India’s villages, which are home, Mahatma Gandhi famously proclaimed, to the soul of the nation.

No surprise, then, that after two decades of economic growth, India finds itself languishing 136th (of 186 countries) in the (gender-equality-adjusted) United Nations Human Development Index.

Harsh ideas of gender inequality

Indian society is divided in numerous ways: caste/class, gender, wealth and poverty, and religion. Entrenched patriarchy and gender divisions, which value boys over girls and keep men and women, and boys and girls, apart, combine with child marriage to contribute to the creation of a society in which sexual abuse and exploitation of women, particularly Dalit women, is an accepted part of everyday life.

Sociologically and psychologically conditioned into division, schoolchildren separate themselves along gender lines; in many areas women sit on one side of buses, men on another; special women-only carriages have been introduced on the Delhi and Mumbai metro, intended to shield women from harassment or “eve teasing” as it is colloquially known. Such safety measures, while welcomed by women and women’s groups, do not deal with the underlying causes of abuse, and in a way may further inflame them.

Rape, sexual violence, assault and harassment are rife; at the same time, with the exception perhaps of the Bollywood Mumbai set, sex is a taboo subject. A survey by India Today conducted in 2011 found that 25 per cent of people had no objection to sex before marriage, provided it is not in their family.

Sociological separation fuels gender divisions, reinforces prejudiced stereotypes and feeds sexual repression, which many women’s organisations believe accounts for the high rate of sexual violence. A recent study of men’s attitudes towards women in India, carried out by the International Center for Research on Women, produced some startling statistics: one in four admitted having “used sexual violence (against a partner or against any woman)”, and one in five reported using “sexual violence against a stable [female] partner”. Half of men do not want to see gender equality, 80 per cent regard changing nappies, feeding and washing children as “women’s work”, and a mere 16 per cent play a part in household duties. Added to these repressive attitudes of mind, homophobia is the norm, with 92 per cent admitting they would be ashamed to have a gay friend, or even to be in the vicinity of a gay man.

All in all, India is cursed by an inventory of Victorian gender stereotypes, fuelled by a caste system designed to oppress, which together trap both men and women in conditioned cells of separation where destructive ideas of sex are allowed to ferment, resulting in explosions of sexual violence, exploitation and abuse. Studies of caste have begun to engage with issues of rights, resources, and recognition/representation, showing the degree to which caste must be recognised as central to the account of India’s political development. For instance, scholars are becoming increasingly aware of the degree to which the radical thinkers Ambedkar, Periyar, and Phule demanded the acknowledgment of histories of exploitation, ritual humiliation, and political disenfranchisement as constituting the lives of the lower castes, even as such histories also formed the fraught past from which escape was sought.

Scholars have pointed to Mandal as the formative moment in the “new” national politics of caste, especially for having radicalised dalitbahujans in the politically critical states of the Hindi belt. Hence Mandal may be a convenient, though overdetermined, vantage point from which to analyse the state’s contradictory and ineffective investment in the discourse of lower-caste entitlement, throwing open to examination the political practices and ideologies that animate parliamentary democracy in India as a historical formation.

Tharu and Niranjana (1996) have noted the visibility of caste and gender issues in the post-Mandal context and describe it as a contradictory formation. For instance, there were struggles by upper-caste women to challenge reservations by understanding them as concessions, and the large-scale participation of college-going women in the anti-Mandal agitation in order to claim equal treatment rather than reservations in struggles for gender equality. On the other hand, lower-caste male assertion regularly targeted upper-caste women, creating an unresolved dilemma for upper-caste feminists who had been pro-Mandal. The relationship between caste and gender never appeared more awkward. The demand for reservations for women (and for further reservations for dalit women and women from the Backward Classes and Other Backward Communities) can also be seen as an outgrowth of a renewed attempt to address caste and gender issues from within the terrain of politics. It may also demonstrate the inadequacy of concentrating exclusively on gender in assembling a measurable “solution” to the political issue of visibility and representation.

Emerging out of the 33 per cent reservations for women in local Panchayats, and plainly at odds with the Mandal protests that equated reservations with notions of inferiority, the recent demands for reservations are a marked move away from the historical suspicion of reservations for women. As Mary John has argued, women’s vulnerability must be seen in the context of the political displacements that mark the emergence of minorities before the state.

The question of political representation and the schema of gendered vulnerability are connected issues. As I have argued in my essay included in this volume, such vulnerability is the characteristic of the gendered subject’s singularity. It is that form of injured existence that brings her within the frame of political legibility as different, yet eligible, for general forms of redress. As such, it is central to political discourses of rights and recognition.

Political demands for reservations for women, and for lower-caste women, supplement scholarly efforts to understand the deep cleavages between women of different castes that contemporary events such as Mandal or the Hindutva movement have revealed. In exploring the challenges posed by Mandal to dominant conceptions of secular selfhood, Vivek Dhareshwar pointed to convergences between reading for and recovering the presence of caste as a silenced public discourse in contemporary India, and similar practices by feminists who had explored the unacknowledged weight of gendered identity.

Dhareshwar suggested that theorists of caste and theorists of gender might consider elective affinities in their methods of analysis, and deliberately embrace their stigmatised identities (caste, gender) in order to draw public attention to them as political identities. Dhareshwar argued this would demonstrate the degree to which secularism had been maintained as another form of upper-caste privilege, the luxury of ignoring caste, as against the demands for social justice by dalitbahujans who were calling for a public acknowledgment of such privilege.

Women and Dalits considered the same

Malik notes, in “Untouchability and Dalit Women’s Oppression,” that “It remains a matter of reflection that those who have been actively involved with organising women experience difficulties that are nowhere addressed in a theoretical literature whose foundational principles are derived from a sprinkling of normative theories of rights, liberal political theory, an ill-informed left politics and more recently, occasionally, even a well-meaning tradition of ‘entitlements.’” Malik in effect asks how we are to understand dalit women’s vulnerability.

Caste relations are embedded in dalit women’s significantly unequal access to resources of basic survival, for example water and sanitation facilities, as well as to educational institutions, public places, and sites of religious worship. At the same time, the material impoverishment of dalits and their political disenfranchisement perpetuate the symbolic structures of untouchability, which legitimate upper-caste sexual access to dalit women. Caste relations are also changing, and new forms of violence in independent India that target symbols of dalit emancipation, such as the desecration of the statues of dalit leaders, attempt to counteract dalits’ socio-political advancement by dispossessing land or denying dalits their political rights, and are aimed at dalits’ perceived social mobility. These newer forms of violence are regularly supplemented by the sexual harassment and assault of dalit women, indicating the caste-based and gendered forms of vulnerability that dalit women experience.

As Gabriele Dietrich notes in her essay “Dalit Movements and Women’s Movements,” dalit women have been targets of upper-caste violence. At the same time, dalit women have also functioned as the “property” of dalit men. Lower-caste men are also engaged in a complex set of fantasies of retribution that involve the sexual violation of upper-caste women in retaliation for their emasculation by caste society. The dangerous positioning of dalit women as sexual property in both instances overdetermines dalit women’s identity solely in terms of their sexual availability.

Young Girls: Household Servants

When a boy is born in most developing countries, friends and relatives shout congratulations. A son means security. He will inherit his father’s property and get a job to support the family. When a girl is born, the reaction is very different. Some women weep when they find out their baby is a girl because, to them, a daughter is just another expense. Her place is in the home, not in the world of men. In some parts of India, it is traditional to greet a family with a newborn girl by saying, “The servant of your household has been born.”

A girl cannot help but feel inferior when everything around her tells her that she is worth less than a boy. Her identity is forged as her family and society restrict her opportunities and declare her to be second-rate.

A combination of extreme poverty and deep biases against women creates a cruel cycle of discrimination that keeps girls in developing countries from fulfilling their full potential. It also leaves them vulnerable to severe physical and emotional abuse. These “servants of the household” come to accept that life will never be any different.

The Greatest Obstacles Affecting Girls

Discrimination against girls and women in the developing world is a devastating reality. It results in millions of individual tragedies, which add up to lost potential for entire countries. Studies show there is a direct link between a country’s attitude toward women and its progress socially and economically. The status of women is central to the health of a society. If one part suffers, so does the whole.

Tragically, female children are most vulnerable to the trauma of gender discrimination. The following obstacles are stark examples of what girls worldwide face. However, the good news is that new generations of girls represent the most promising source of change for women, and men, in the developing world today.

Dowry

In developing countries, the birth of a girl causes great upheaval for poor families. When there is barely enough food to survive, any child puts a strain on a family’s resources. But the economic drain of a daughter feels considerably more severe, especially in regions where dowry is practised.

Dowry is the goods and money a bride's family pays to the groom's family. Originally intended to help with wedding expenses, dowry came to be seen as payment to the groom's family for taking on the burden of another woman. In some countries, dowries are extravagant, costing years of wages and often throwing a woman's family into debt. The dowry practice makes the prospect of having a girl even more unwelcome to poor families. It also puts girls in danger: a new wife is at the mercy of her in-laws if they decide her dowry is too small. UNICEF estimates that around 5,000 Indian women are killed in dowry-related incidents every year.

Neglect

The developing world is full of poverty-stricken families who see their daughters as an economic burden. That attitude has resulted in the widespread neglect of baby girls in Africa, Asia, and South America. In many communities, it is standard practice to breastfeed girls for a shorter time than boys so that women can try to get pregnant again with a boy as soon as possible. As a result, girls miss out on nurturing nutrition during a critical window of their development, which stunts their growth and weakens their resistance to disease.

Statistics show that the neglect continues as they grow up. Girls generally receive less food, less health care and fewer vaccinations than boys. Not much changes as they become women. Tradition calls for women to eat last, often reduced to picking over the scraps left by the men and boys.

Infanticide and Sex-Selective Abortion

In extreme cases, parents make the terrible decision to end their baby girl's life. One woman named Lakshmi from Tamil Nadu, an impoverished region of India, fed her baby sap from an oleander bush mixed with castor oil until the girl bled from the nose and died. "A daughter is always a liability. How can I raise a second?" said Lakshmi to explain why she ended her child's life. "Instead of her suffering the way I do, I thought it was better to get rid of her."

Sex-selective abortions are far more common than infanticides in India. They are becoming ever more frequent as technology makes it simple and cheap to determine a foetus's sex. In Jaipur, a western Indian city of 2 million people, 3,500 sex-determined abortions are carried out each year. The sex ratio across India has dropped to an unnatural low of 927 females to 1,000 males as a result of infanticide and sex-selective abortion.

China has its own long legacy of female infanticide. In the last two decades, the government's notorious one-child policy has damaged the country's record even further. By restricting household size to limit the population, the policy gives parents only one chance to produce a coveted son before being forced to pay heavy fines for additional children. In 1997, the World Health Organization declared that "more than 50 million women were estimated to be 'missing' in China because of the institutionalized killing and neglect of girls due to Beijing's population control program." The Chinese government acknowledges that sex-selective abortion is one major explanation for the staggering number of Chinese girls who have simply vanished from the population in the last 20 years.

Abuse

Even after infancy, the threat of physical harm follows girls throughout their lives. Women in every society are vulnerable to abuse. But the danger is more severe for girls and women who live in societies where women's rights mean practically nothing. Mothers who lack their own rights have little protection to offer their daughters, much less themselves, from male relatives and other authority figures. The frequency of rape and violent assault against women in the developing world is alarming. Forty-five percent of Ethiopian women say that they have been assaulted in their lifetimes. In 1998, 48 percent of Palestinian women admitted to being abused by an intimate partner within the previous year.

In some societies, the physical and mental injury of rape is compounded by an additional stigma. In cultures that maintain strict sexual codes for women, if a woman steps out of line, by choosing her own husband, flirting in public, or seeking divorce from an abusive partner, she has brought dishonour on her family and must be disciplined. Often, discipline means execution. Families commit "honour killings" to salvage a reputation tainted by disobedient women.

Shockingly, this "disobedience" includes being raped. In 1999, a 16-year-old mentally disabled girl in Pakistan who had been raped was brought before her tribe's judicial council. Even though she was the victim and her attacker had been arrested, the council decided she had brought shame on the tribe and ordered her public execution. This case, which received a great deal of publicity at the time, is not unusual. Three women fall victim to honour killings in Pakistan every day, including victims of rape. In parts of Asia, the Middle East, and even Europe, all responsibility for sexual wrongdoing falls, by default, to women.

Work

For the girls who escape these pitfalls and grow up relatively safely, daily life is still incredibly hard. School may be a possibility for a few years, but most girls are pulled out at age 9 or 10, when they are useful enough to work all day at home. Nine million more girls than boys miss out on school each year, according to UNICEF. While their brothers continue to attend classes or pursue their hobbies and play, they join the women in doing the bulk of the housework.

Housework in developing countries consists of constant, gruelling physical labour. A girl is likely to work from before sunrise until the light drains away. She walks barefoot over long distances several times a day carrying heavy buckets of water, most likely contaminated, just to keep her family alive. She cleans, grinds corn, gathers fuel, tends the fields, bathes her younger siblings, and prepares meals until she sits down to her own after all the men in the family have eaten. Most families cannot afford modern appliances, so her tasks must be done by hand: crushing corn into meal with heavy rocks, scrubbing laundry against rough stones, kneading bread and cooking gruel over a blistering open fire. There is no time left in the day to learn to read and write or to play with friends. She falls into bed exhausted each night, only to get up the next morning to begin another long workday.

Most of this work is performed without recognition or reward. UN statistics show that although women produce half of the world's food, they own just 1 percent of its farmland. In most African and Asian countries, women's work is not considered real work. Should a woman take a job, she is expected to keep up all of her duties at home in addition to her new ones, with no extra help. Women's work goes unnoticed, even though it is crucial to the survival of each family.

Sex Trafficking

Some families decide it is more lucrative to send their daughters to a nearby town or city to take jobs that usually involve hard labour and little pay. That desperate need for cash leaves girls easy prey to sex traffickers, especially in Southeast Asia, where international tourism feeds the illegal trade. In Thailand, the sex trade has swelled unchecked into a major part of the national economy. Families in small villages along the Chinese border are regularly approached by recruiters called "aunties" who ask for their daughters in exchange for a year's wages. Most Thai farmers earn just $150 a year. The offer can be too tempting to refuse.

Would it be moral to legalise Euthanasia in the UK?

The word 'morality' is used in both descriptive and normative senses. More particularly, the term "morality" can be used either (Stanford Encyclopaedia of Philosophy, https://plato.stanford.edu/entries/morality-definition):

1. descriptively: referring to codes of conduct advocated by a society or a sub-group (e.g. a religion or social group), or adopted by an individual to justify their own beliefs,

or

2. normatively: describing codes of conduct that in specified conditions, should be accepted by all rational members of the group being considered.

Examination of ethical theories applied to Euthanasia

Thomas Aquinas' natural law holds that morally good actions, and the goodness of those actions, are assessed against eternal law as a reference point. Eternal law, in his view, is a higher authority, and the process of reasoning defines the difference between right and wrong. Natural law thinking is not concerned only with narrow questions but considers the whole person and their infinite future. Aquinas would have linked this to God's predetermined plan for that individual and to heaven. The morality of Catholic belief is heavily influenced by natural law. The primary precepts should be considered when examining issues involving euthanasia, particularly the key precepts to do good and oppose evil and to preserve life, upholding the sanctity of life. Divine law set out in the Bible states that we are created in God's image and held together by God from our time in the womb. The Catholic Church's teachings on euthanasia maintain that euthanasia is wrong (Pastoral Constitution, Gaudium et Spes no. 27, 1965) because life is sacred and God-given (Declaration on Euthanasia, 1980). This view can be seen to be just as strongly held and applied today in the very recent case of Alfie Evans, where papal intervention was significant and public. Terminating life through euthanasia goes against divine law. Ending a life, and with it the possibility of that life bringing love into the world or of love coming into the world in response to the person euthanised, is wrong. To take a life by euthanasia, according to Catholic belief, is to reject God's plan for that individual to live out their life. Suicide or intentionally ending life is a wrong equal to murder and as such is to be considered a rejection of God's loving plan (Declaration on Euthanasia, 1.3, 1980).

The Catholic Church interprets natural law to mean euthanasia is wrong and that those involved in it are committing a wrongful and sinful act. Whilst the objectives of euthanasia may appear to be good, in that they seek to ease suffering and pain, they in fact fail to recognise the greater good of the sanctity of life within God's greater plan, which includes people other than the person suffering, and eternal life in heaven.

The conclusions of natural law consider the position of life in general and not just the ending of a single life. For example, if euthanasia were lawful, older people could become fearful of admission to hospital in case they were drawn into euthanasia. It could also lead to people being attracted to euthanasia at times when they were depressed. This can be seen to attack the principle of living well together in society, as good people could be hurt. It also makes some predictions, of the slippery slope and floodgates type, about hypothetical situations. Euthanasia therefore clearly undermines some primary precepts.

Catholicism accepts that disproportionately onerous treatment is not appropriate towards the end of a person's life and recognises a moral obligation not to strenuously keep a person alive at all costs. An example of this would be a terminally ill cancer patient deciding not to accept further chemotherapy or radiotherapy which could extend their life, but at great cost to the quality of that remaining life. Natural law does not seem to prevent them from making these kinds of choices.

There is also the doctrine of double effect: palliative care, for example, with the relief of pain and distress as its objective, might have the secondary effect of ending life earlier than if more active treatment options had been pursued. The motivation is not to kill, but rather to ease pain and distress. An example of this is an individual doctor's decision to increase an opiate dose to the point where respiratory arrest occurs almost inevitably, but where at all times the intended motivation is the easing of pain and distress. This has on various occasions been upheld as legally and morally acceptable by the courts and by medical watchdogs such as the GMC (General Medical Council).

The catechism of the Catholic Church accepts this and views such decisions as best made by the patient, if competent and able, and if not, by those legally and professionally entitled to act for the individual concerned.

There are other circumstances in which the person involved might not be the same type of person as is assumed by natural law, for example someone with severe brain damage who is in a persistent coma or "brain-dead". In these situations they may not possess the defining characteristics of a person, and this could form a justification for euthanasia. The doctors or relatives caring for such a patient may face conflicts of conscience, being unable to show compassion and thereby prolonging suffering, not only of the patient but of those surrounding them.

In his book Morals and Medicine, published in 1954, Fletcher, the president of the Euthanasia Society of America, argued that there were no absolute standards of morality in medical treatment and that good ethics demand consideration of the patient's condition and the situation surrounding it.

Fletcher's Situation Ethics avoids legalistic consideration of moral decisions. It is anchored only in actual situations and specifically in unconditional love for the care of others. When euthanasia is considered with this approach, the answer will always "depend upon the situation".

From the viewpoint of an absolutist, morality is innate from birth. It can be argued that natural law does not change as a result of personal opinions; it remains unchanged. Natural law offers a positive view of morality, as it can be seen to allow people from differing backgrounds, classes and situations to have sustainable moral laws to follow.

Religious believers also follow the principles of Natural Law, as the underlying theology of the law argues that morality remains the same and never changes with an individual's personal opinions or decisions. Christianity as a religion has great support amongst its believers for there being a natural law of morality. Christian understanding of this concept derives largely from Thomas Aquinas, following his teaching on the close connection of faith and reason as arguments for there being a natural law of morality.

Natural Law has been shown over time to have compelling arguments, one of which is its all-inclusiveness and fixed stature, in contrast to the relative approach to morality. Natural law is objective and is consequently abiding and eternal. It is considered to be innate within us and is seen to arise from a mixture of faith and reason, going on to form an intelligent and rational being who is faithful in belief in God. Natural law is part of human nature, commencing from the beginning of our lives when we gain our sense of right and wrong.

However, there are also many disadvantages of natural law with regard to resolving moral problems. These include the fact that its precepts are not always self-evident or provable. We are unable to confirm whether there is only one universal purpose for humanity, and it can be argued that even if humanity had a purpose for its existence, this purpose cannot be seen as self-evident. The perception of natural beings and things changes over generations, with the norms of different times fitting more or less well with the present culture. It can therefore be argued that supposedly absolute morality is altered by cultural beliefs about right and wrong, with some things only later coming to be perceived as wrong; this suggests that defining what is natural is almost impossible, as moral judgements are ever changing. The idea of actuality being better than potentiality also cannot easily transfer to practical ethics: the future holds many potential outcomes, but some of those potential outcomes are 'wrong'. (Hodder Education, 2016)

The claim that natural law is the best way to resolve moral problems has a strong argument behind it, but its strict formation means that there is some confusion as to what is right and wrong in certain situations. These views are instead formed by society, which does not always follow the natural law of morality. Darwin's theory of evolution, put forward in On the Origin of Species in 1859, challenged natural law with the notion that living things strive for survival (survival of the fittest), supporting his theory of evolution by natural selection. It can be argued that solving moral problems by natural law may be possible, but it is not necessarily the best solution.

For many years, euthanasia has been a controversial debate across the globe, with different people taking opposing sides and arguing in support of their opinions. In essence, it is the act of allowing an individual to die in a painless manner, for example by withholding their treatment. It is commonly classified into different forms: voluntary, involuntary and non-voluntary. The legal system has been actively involved in this debate. A major concern put forward is that legalising any form of euthanasia may invoke the slippery slope principle, which holds that permitting something comparatively harmless today may begin a trend that results in unacceptable practices. Although one popular stance argues that voluntary euthanasia is morally acceptable while non-voluntary euthanasia is always wrong, the courts have been split in their decisions in various instances. (Oxford for OCR Religious Studies, 2016)

Voluntary euthanasia is defined as the killing of an individual with their consent, carried out in various ways. The arguments that voluntary euthanasia is morally acceptable are drawn from the expressed desires of a patient. As long as respecting an individual's decision does not harm other people, it is held to be morally correct. Since individuals have the right to make personal choices about their lives, their decisions on how they should die should also be respected. Most importantly, it sometimes remains the only option for assuring the well-being of the patient, especially if they are suffering incessant and severe pain. Despite these claims, several cases have emerged in which the courts have continued to refuse to uphold the morality of euthanasia irrespective of a person's consent. One of these is the case of Diane Pretty, who suffered from motor neurone disease. Afraid of dying by choking or aspiration, a common end-of-life event experienced by many motor neurone disease sufferers, she sought legal assurance that her husband would be free from the threat of prosecution if he assisted her to end her life. Her case went through the Court of Appeal, the House of Lords (the Supreme Court in today's system) and the European Court of Human Rights. However, due to the concerns raised under the slippery slope principle, the judges denied her request and she lost the case.

There have been many legal and legislative battles attempting to change the law to support voluntary euthanasia in varying circumstances. Between 2002 and 2006 Lord Joel Joffe (a patron of the Dignity in Dying organisation) fought to change the law in the UK to support assisted dying. His first Assisted Dying (Patient) Bill reached the stage of a second reading (June 2003) but ran out of time to progress to the committee stage. However, Joffe persisted, and in 2004 he renewed his campaign with the Assisted Dying for the Terminally Ill Bill, which progressed further than the earlier bill, making it to the committee stage in 2006. The committee stated: "In the event that another bill of this nature should be introduced into Parliament, it should, following a formal Second Reading, be sent to a committee of the whole House for examination". Unfortunately, however, in May 2006 an amendment at the second reading led to the collapse of the bill. This was a surprise to Joffe, with the majority of the select committee on board with the bill. In addition, calls for a statute supporting voluntary euthanasia have increased, as evidenced by the significant numbers of people in recent years travelling to Switzerland, where physician-assisted suicide is legal under permitted circumstances. Lord Joffe expressed these thoughts in an article written for the Dignity in Dying campaign in 2014, before his death in 2017, in support of Lord Falconer's Assisted Dying Bill, which proposed to permit "terminally ill, mentally competent adults to have an assisted death after being approved by doctors" (Falconer's Assisted Dying Bill, Dignity in Dying, 2014). The journey of this bill was followed by the documentary referenced below.

The BBC documentary 'How to Die: Simon's Choice' followed the decline of Simon Binner from motor neurone disease and his subsequent fight for an assisted death. The documentary followed his journey to Switzerland for a legal assisted death and documented the reactions of his family. During filming, a bill was being debated in parliament proposing to legalise assisted dying in the United Kingdom. The bill (Lord Falconer's Assisted Dying Bill) would have allowed a person to request a lethal injection if they had less than six months left to live; this raised a myriad of issues, including precisely defining the point at which one has more or less than six months left to live. The Archbishop of Canterbury, Justin Welby, urged MPs to reject the bill, stating that Britain would be crossing a 'legal and ethical Rubicon' if parliament were to vote to allow the terminally ill to be actively assisted to die at home in the UK under medical supervision. The leaders of the British Jewish, Muslim, Sikh and Christian religious communities wrote a joint open letter to all members of the British parliament urging them to oppose the bill to legalise assisted dying (The Guardian, 2015). After announcing his planned death on LinkedIn, Simon Binner died at an assisted dying clinic in Switzerland. The passing of this bill may have been the only way of helping Simon Binner in his home country, but assisted dying remained unlawful. (Deacon, 2016)

The result of the private member's bill, originally proposed by Rob Marris (a Labour MP from Wolverhampton), was defeat, with 330 MPs against and 118 in favour. (The Financial Times, 2015)

The 1961 Suicide Act (Legislation, 1961) decriminalised suicide; however, it did not make it morally licit. It states that a person who aids, abets, counsels or procures the suicide of another, or an attempt by another to commit suicide, is liable to a prison term of up to 14 years. It also provided that where a defendant is on trial on indictment for murder or manslaughter and it is proved that the accused aided, abetted, counselled or procured the suicide of the person in question, the jury may find them guilty of that offence as an alternative verdict.

Many took the view that the law supports the principle of autonomy, but the act was used to reinforce the sanctity of life principle by criminalising any form of assisted suicide. Although the act does not hold the position that all life is equally valuable, there have been cases where allowing a person to die would have been the better solution.

In the case of non-voluntary euthanasia, patients are incapable of giving their approval for death to be induced. It mostly occurs if a patient is very young, has a severe learning disability, has extreme brain damage, or is in a coma. Opponents argue that human life should be respected, and in this case it is even worse because the person's wishes are not factored in when making the decision to end their life. As a result, it is argued to be morally wrong irrespective of the conditions they face; all parties involved should instead wait for a natural death while according the patient the best palliative medical attention possible. The case of Terri Schiavo, who had suffered from bulimia and had extensive brain damage, falls under this argument. The court's ruling allowing her husband's request to end her life triggered heated debate, with some arguing that it was wrong while others saw it as a relief, since she had spent more than half of her life unresponsive.

I completed primary research in order to support my findings as to whether or not it would be moral to legalise euthanasia in the UK. With regard to understanding the correct definition of euthanasia, nine out of ten people who took part in the questionnaire selected the correct definition of physician-assisted suicide: "The voluntary termination of one's life by administration of a lethal substance with the direct or indirect assistance of a physician" (Medicanet, 2017). The one person who selected a wrong definition believed it to be "The involuntary termination of one's own life by administration of a lethal substance with the direct or indirect assistance of a physician". The third definition on the questionnaire stated that physician-assisted suicide was "The voluntary termination of one's own life by committing suicide without the help of others"; this was the obviously incorrect answer and no participant selected it.

The morality of the young should also be taken into account. In my primary research, completed by a selected youth audience, seventy percent agreed that people should have the right to choose when they die. However, only twenty percent of this audience agreed that they would assist a friend or family member in helping them die. This drop in support can be explained by the fear of prosecution and of a possible fourteen-year imprisonment for assisting in a person's death.

The effect of the Debbie Purdy case (2009) was that guidelines were established by the Director of Public Prosecutions in England and Wales (assisted dying is not a specific offence in Scotland, but there is no legal way to medically access it there). These guidelines were established, according to the Director of Public Prosecutions, to "clarify what his position is as to the factors that he regards as relevant for and against prosecution" (DID Prosecution Policy, 2010). The guidance policy outlines the factors that make prosecution 'more likely': an assistor who had a history of violent behaviour, did not know the person, received a financial gain from the act or acted as a medical professional is more likely to face prosecution. Despite these factors, the policy also stated that police and prosecutors should examine any financial gain with a 'common sense' approach, as many people benefit financially from the loss of a loved one; the fact that the assistor was, for example, a close relative relieving a person of pain should be a larger factor to be considered in any prosecution decision.

The argument that voluntary euthanasia is morally right while involuntary euthanasia is wrong remains one of the most controversial issues even in modern society. It is all the more significant because the legal systems remain split in their rulings in the various cases cited. Based on the slippery slope argument, care should be taken when determining what is morally right and wrong because of the sanctity of human life. Many consider that the law has led to considerable confusion and that one way of improving the present situation would be to create a new Act permitting physician-assisted dying, with the proposal stating that there should be a bill to "enable a competent adult who is suffering unbearably as a result of a terminal illness to receive medical assistance to die at his own considered and persistent request… to make provision for a person suffering from a terminal illness to receive pain relief medication" (Assisted Dying for the Terminally Ill Bill, 2004).

There is a major moral objection to voluntary euthanasia in the form of the "slippery slope" argument: the fear that what begins as a legitimate ground to assist in a person's death will come to permit death in other, illegitimate circumstances.

In a letter addressed to The Times newspaper (24/8/04), John Haldane and Alasdair MacIntyre, along with other academics, lawyers and philosophers, pointed out that supporters of the Bill had shifted the qualifying condition from actual unbearable suffering caused by terminal illness to merely the fear, discomfort and loss of dignity which terminal illness might bring. In addition, there is the issue that if quality of life is grounds for euthanasia for those who request it, then it must also be open to those who do not, or cannot, request it, again presenting a slippery slope. In the same letter, the academics referenced euthanasia in the Netherlands, where it is legal, to make the point that many people have died against their wishes because of failures of safeguarding. (Hodder Education, 2016)

The slippery slope argument does not help those in particular individual situations, and it must surely be wrong to shy away from making difficult decisions on the grounds that an individual should sustain prolonged suffering in order to protect society from the possible over-use of any legalisation. In practice, over the past half century some form of euthanasia has been going on in the UK: doctors give obvious over-dosage of opiates in terminal cases but have been shielded from the legal consequences by the almost fictional notion that, as long as the motivation was to ease and control pain, the action was lawful even though respiratory arrest was the inevitable consequence (respiratory suppression is a side effect of morphine-type drugs).

The discredited and now defunct Liverpool Care Pathway for the Dying Patient (LCP) was an administrative tool intended to help UK healthcare professionals manage the care pathway and decide palliative care options for patients at the very end of life. As with many such tick-the-box exercises, individual discretion was restricted in an attempt to standardise practice nationally (Wales was excluded from the LCP). The biggest problem with the LCP (which attracted much adverse media attention and public concern in 2012) was that most patients or their families were not consulted when they were placed on the pathway. It had options for withdrawing active treatment whilst managing distressing symptoms actively. However, removing intravenous hydration and feeding, by regarding it as active treatment, would inevitably lead to death in a relatively short period of time, making the decision to place a patient on the LCP because they were at the end of life a self-fulfilling prophecy. (Liverpool Care Pathway)

The consideration, in the last part of this lengthy document, that the cost of providing "just in case" boxes (approximately £25) should form part of the advice given to professionals may seem chilling and alarming to some. However, there is a moral factor in the financial implications of unnecessarily prolonging human life. Should the greater good be considered when deciding whether to actively permit formal pathways to euthanasia or to take steps to prohibit it (through the crimes of murder or assisting suicide)? In the recent, highly publicised case of Alfie Evans, enormous financial resources were used to keep a child with a terminal degenerative neurological disease alive on a paediatric intensive care unit at Alder Hey hospital in Liverpool for around a year. In deciding to do this, those resources were inevitably unavailable to treat others who might have gone on to survive and live a life. Huge sums of money were spent both on medical resources and on lawyers. The case became a media circus, resulting in ugly threats made against medical staff at the hospital concerned. There was international intervention in the case by the Vatican and by Italy (through the granting of Italian nationality to the child). Whilst the emotional turmoil of the parents was tragic and the case very sad, was it moral that their own beliefs and lack of understanding of the medical issues involved should lead to such a diversion of resources and such terrible effects on those caring for the boy?

(NICE (National Institute for Health and Care Excellence) guidelines, 2015)

The General Medical Council (GMC) governs the licensing and professional conduct of doctors in the UK. It has produced guidance for doctors regarding the medical role at the end of life, 'Treatment and care towards the end of life: good practice in decision making'. This gives comprehensive advice on some of the fundamental issues in end-of-life treatment and covers matters such as living wills (where requests for withdrawal of treatment can be set out in writing and in advance). These are professionally binding, but as ever there are some caveats regarding withdrawal of life-prolonging treatment.

It also sets out presumptions of a duty to prolong life and of a patient's capacity to make decisions, along established legal and ethical lines. In particular, it states that "decisions concerning life prolonging treatments must not be motivated by a desire to bring about a patient's death" (Good Medical Practice, GMC Guidance to Doctors, 2014).

Formerly, the Hippocratic Oath was sworn by all doctors and set out a sound basis for moral decision making and professional conduct. In modern translation from the original ancient Greek, it states with regard to medical treatment that a doctor should never treat "….. with a view to injury and wrong-doing. Neither will [a doctor] administer a poison to anybody when asked to do so, nor will [a doctor] suggest such a course." Doctors in the UK do not swear the oath today, but most of its principles are internationally accepted, except perhaps in the controversial areas surrounding abortion and end-of-life care.

(Hippocratic Oath, Medicanet)

In conclusion, having considered the different morality arguments on both sides of the debate, I conclude that the two forms of morality examined (Natural Law and Situation Ethics) would give two opposing responses to the question.

From the viewpoint of a deontologist guided by natural law, duty and obligation, arguably derived from religion, would lead a society to decide that it would be wrong to legalise euthanasia. However, a situational ethicist, whose viewpoint changes depending on the individual situation, could support the campaign to legalise voluntary euthanasia in the UK under guidelines designed to account for differing situations.

After completing my primary and secondary research, and considering the many unsuccessful bills put through parliament to legalise euthanasia and the many case studies, including the moving account of Simon Binner's fight to die, my own view rests on the side of the situational ethicist: depending on the individual situation, people should have the right to die in their own country through the legalisation of voluntary euthanasia, rather than being forced to travel abroad to access a legal form of voluntary euthanasia and risk their loved ones being prosecuted on their return to the UK for assisting them.

At the end of the day, much of the management of patients at the end of life is determined not by the stipulations laid out by committees in lengthy documents, but by the individual treatment decisions made by individual doctors and nurses, who are almost always acting in the best interests of patients and their families. The practice of accelerating the inevitable event by medication or withdrawal of treatment is almost impossible to standardise across a hospital or a local community care setup, let alone a country. It may be better to continue the practice of centuries and let the morality and conscience of the treating professions determine what happens, keeping the formal moral, religious and legal factors involved in such areas in the shadows.

Has the cost of R&D impacted vaccine development for Covid-19?

Introduction

This report will investigate and try to answer the question: 'To what extent have the cost requirements of R&D, the structure of the industry and government subsidy affected firms in the pharmaceutical industry in developing vaccines for Covid-19?'. The past two years have been very unpredictable for the pharmaceutical industry with the outbreak of the COVID-19 pandemic. Despite the fact that the pharmaceutical industry has made major contributions to human wellbeing, reducing suffering and ill health for over a century, it remains one of the least trusted industries in public opinion, often compared to the nuclear industry in terms of trustworthiness. Despite pharmaceuticals being one of the riskiest industries to invest in, governments have subsidised billions towards the production of the COVID-19 vaccines. Regardless of the risks associated with pharmaceuticals, a large part of the public still thinks pharmaceuticals should continue to be produced and developed in order to provide the correct treatment to those with existing health issues (Taylor, 2015). This, along with the cost requirements of R&D, the structure of the industry and government subsidy, and how these have affected firms in the pharmaceutical industry with regard to the development of the COVID-19 vaccines, will be discussed further in this report.

The Costs of R&D

Back in 2019, $83 billion was spent on R&D. That figure alone is roughly 10 times greater than what the industry spent on R&D in the 1980s. Most of this amount was dedicated to discovering and testing new drugs and to clinical testing of drug safety. In 2019 drug companies dedicated a quarter of their annual income to R&D, which is also almost double the share they dedicated in the early 2000s.

(Pharmaceutical R&D Expenditure Shows Significant Growth, 2019)

Usually, the amount drug companies spend on R&D for a new drug is based on the financial return they expect to make, any policies influencing the supply of and demand for drugs, and the cost of developing those drugs.

Most drugs that have been approved recently have been specialty drugs. These are drugs that typically treat complex, chronic or rare conditions and can require patient monitoring. However, specialty drugs are very expensive to develop, pricey for the customer and hard to replicate (Research and Development in the Pharmaceutical Industry, 2021).

Government subsidies for the COVID-19 vaccines

There are two main ways in which a federal government can directly support vaccine development: it can promise in advance to purchase a successful vaccine once the firm has achieved its specified goal, or it can cover the costs associated with the R&D of the vaccine.

(Which Companies Received The Most Covid-19 Vaccine R&D Funding?, 2021)

In May 2020, the Department of Health and Human Services launched 'Operation Warp Speed'. This was a collaborative project in which the FDA, the Department of Defense, the National Institutes of Health and the Centers for Disease Control and Prevention all worked together to provide funding for COVID-19 vaccine development. Through 'Operation Warp Speed', more than $19 billion in federal funding was provided to help seven different private pharmaceutical manufacturers with the research and development of COVID-19 vaccines. Five of those seven went on to accept further funding to boost their vaccine production capabilities. Later, a sixth company accepted funding to help boost the production of another company's vaccine after it received authorization for emergency use. Six of the seven also made advance purchase agreements, and two of these companies received additional funding, having sold more doses than expected under those agreements, in order to produce even more vaccines for distribution. Because numerous stages of development that would normally run consecutively were executed simultaneously, pharmaceutical manufacturers were able to reach their end goal and manufacture vaccines far faster than is normal for vaccines. This was done because of the urgency of finding a solution to the COVID-19 pandemic, which was starting to cause public uproar and panic across nations. Less than a year after the first COVID-19 diagnosis was made in the US, two vaccines were already in Phase III clinical trials; this is immensely quick, as it would usually take a few years of research to reach Phase III clinical trials for a vaccine. The World Health Organisation reported that there were already over 200 COVID-19 vaccine development candidates as of February 2021 (Research and Development in the Pharmaceutical Industry, 2021).

(Research and Development in the Pharmaceutical Industry, 2021)

The figure cited above shows which vaccines were at which stage of development during which time period. This illustrates the urgency involved in developing and producing these vaccines to fight the outbreak of the coronavirus. Without government subsidies, firms would have been nowhere near completing the research and development needed to produce numerous COVID-19 vaccines. This shows the importance of government subsidies to the pharmaceutical industry and to the development of new drugs and vaccines.

Impact of the structure of the pharmaceutical industry on vaccine development

When it came to the development of the COVID-19 vaccines, many different names in the pharmaceutical industry took part. As far as the majority of society is concerned, the pharmaceutical industry is just a small group of large multinational corporations such as GlaxoSmithKline, Novartis, AstraZeneca, Pfizer and Roche. These are frowned upon by the public, stereotyped as 'Big Pharma', and so perceptions of them can be misleading. Many people have doubts about these big multinational corporations, especially when they have such an influence on their health and on the drugs they take. It becomes hard for the public to rely on and trust these companies because, at the end of the day, it is their health that they are entrusting to them. It is therefore logical that a lot of people have had, and still have, suspicions about the COVID-19 vaccines developed by a handful of these companies. If you were to ask someone whether they have ever heard of companies like Mylan or Teva, they would probably have no clue about them, even though Teva is the world's 11th biggest pharmaceutical company and probably produces the medicine that these people take on a regular basis. The fact that over 90% of pharmaceutical companies are essentially invisible to the general public means that when it does become known who has manufactured a medicine people are considering taking, for example the Pfizer vaccine, they are going to be careful and suspicious about taking it, as they have probably never heard of the company Pfizer before. All this, despite the fact that these companies are responsible for producing the majority of the medicines that everyone takes.

Most new drugs that are developed never even make it onto the market, as the drug is found not to work or to have serious side effects, making it unethical to use on patients. However, the small percentage of drugs that do make it onto the market are patented, meaning that the original manufacturer holds temporary exclusive rights to sell the product. Once the patent has expired, the pharmaceutical is free to be sold and manufactured by anyone, meaning it is now a generic pharmaceutical (Taylor, 2015).

This again does not help research pharmaceutical companies, as their developments, once out of patent, are simply sold on by the generic pharmaceutical companies from which everyone buys their pharmaceuticals. This means generic pharmaceutical companies essentially never have a failed product, while the research companies struggle to create a successful product that makes it onto the market. It also means that the public does not even know that the majority of drugs they buy originate from these research companies rather than from the generic pharmaceutical company they buy them from.

As seen with the COVID-19 vaccines, this caused a lot of uncertainty and distress amongst the public, as most people had never even heard of companies like 'Pfizer' or 'AstraZeneca'. This in turn made it more difficult for pharmaceutical companies to successfully manufacture and sell their vaccines, prolonging the whole vaccination process.

This structure of the pharmaceutical industry has therefore greatly affected firms' ability to successfully and credibly manufacture vaccines against COVID-19.

Conclusion

Looking at the three factors combined, the cost requirements of R&D, the structure of the industry and government subsidy, it is clear that they have all had a great impact on the development of the COVID-19 vaccines. The costs associated with R&D essentially determined how successful the vaccines would be and whether firms would have enough, first, to do the needed research and then, finally, to produce and sell them. Without the large sums that go into the development of vaccines and other drugs, the COVID-19 vaccines would never have been manufactured and sold. This would have left the world in even more panic and uproar than it was in, and could easily have had a ripple effect on economies, social factors and potentially even environmental factors.

One of the biggest impacts on the successful manufacture and sale of the vaccines was the structure of the industry. With big research pharmaceutical companies putting in all the work and effort to develop these COVID-19 vaccines, but most of the general public never having heard of them before, it was very hard for pharmaceutical companies to come across as reliable. People did not trust the vaccines because they had never heard of the company that developed them, such as Pfizer. This caused debate and protest against the vaccines, making it harder for companies to produce and successfully sell them to the public who needed and demanded them. This was due to one major flaw in the pharmaceutical industry: companies such as Pfizer and AstraZeneca are largely hidden from view and barely known by the public, because their products are taken up and sold on by the generic pharmaceutical companies from which people actually buy them. It also has to do with the fact that research pharmaceutical companies specialise in advanced drugs rather than in more generic drugs, which are more likely to be successful because they are easier to develop. So, naturally, the lack of successful products reflects negatively on these companies, and even the one product they do successfully produce is frowned upon because of its previously non-viable predecessors.

Finally, probably the second most important factor, or joint most important, is government subsidy. It is quite clear that without the correct government funding and without 'Operation Warp Speed' we would still be in the process of trying to develop even the first COVID-19 vaccine, as there would have been nowhere near enough funding for the R&D of the vaccines. This would have resulted in the death rate from coronavirus infections spiking, and would probably have brought the economy to a complete standstill, putting a large number of people out of work. All of this has numerous ripple effects, as the single issue of loss of work could raise the poverty rate immensely, leaving economies broken. So, overall, these three factors have had a huge impact on firms in the pharmaceutical industry in developing the COVID-19 vaccines.

Gender in Design

Gender has always had a dominant place in design. Kirkham and Attfield, in their 1996 book The Gendered Object, set out their view that there are attributable genders which seem to be unconsciously attached to some objects as the norm. The way gender is viewed in modern-day design is now radically different from twenty-plus years ago, in that there is now recognition of this normalisation. Seeing international companies recognise this change and adapt their brands to this modern-day approach influences designers like myself to keep up to date, and it affects my own work.

When designing, there is a gender system that some people tend to follow very strictly; the system is a guide, built on shared values, that reveals how gender is constructed. In the gender system there are binary oppositions which play out in colour, size, feeling and shape, for example pink/blue, small/large, smooth/rough and organic/geometric. Even out of context, these words carry connotations of male or female. Gender's definition is traditionally male or female, but modern-day brands are challenging and pushing these established boundaries; they do not think gender should be restrictive or prescriptive as it has been in the past. Kirkham and Attfield challenge this by comparing perceptions in the early twentieth century, illustrating that societal norms were then the opposite of what we are now led to believe by gender norms. A good example is that the crude binary opposition implicit in 'pink for a little girl and blue for a boy' was only established in the 1930s; babies and parents managed perfectly well without such colour coding before then. Today, through marketing and product targeting, these 'definitions' are even more widely used in the design and marketing of children's clothes and objects than a few years ago. Importantly, such binary oppositions also influence those who purchase objects and, in this case, facilitate the pleasure many adults take in seeing small humans visibly marked as gendered beings. This is now being further challenged by demands for non-binary identification.

This initial point made by Kirkham and Attfield in 1996 is still valid. Even though designers and brands are in essence guilty of forms of discrimination by falling in line with established gender norms, they do it because it is what their consumers want and because it is how they see business developing and profit being created; these stereotypical 'norms' are seen as normal, acceptable and sub-consciously recognisable. "Thus we sometimes fail to appreciate the effects that particular notions of femininity and masculinity have on the conception, design, advertising, purchase, giving and uses of objects, as well as on their critical and popular reception". (Kirkham and Attfield, 1996, The Gendered Object, p. 1)

With the help of product language, gendered toys and clothes appear from an early age. Products are sorted as being 'for girls' and 'for boys' in the store, as identified by Ehrnberger, Rasanen and Ilstedt in their 2012 article 'Visualising Gender Norms in Design', International Journal of Design. Product language is mostly used in the branding aspect of design, in how a product or object is portrayed; it is not only what the written language says. Product language relates to how the object is showcased and portrayed through colours, shapes and patterns. A modern example of this is the branding for a Yorkie chocolate bar. Its slogan, 'Not for girls', was publicly known as being gender biased towards men. There is no hiding the fact that the language the company used was targeted at men, promoting a brand that is strong, chunky and 'hard' in an unsophisticated way, all of which have connotations of being 'male', and arguably 'alpha male', to make it more attractive to men. The chosen colours also suggest this, using navy blue, dark purple, yellow and red, which are bold and form a typically 'male' palette. Another example is the advertisement of tissues. Tissues, no matter where you buy them, do exactly the same thing irrespective of gender, so why are some tissues targeted at women and some at men? Could it be that this gender targeting, by avoiding neutrality, helps sell more tissues?

Product language is very gender specific when it comes to clothing brands and toys for kids. "Girls should wear princess dresses, play with dolls and toy housework products, while boys should wear dark clothes with prints of skulls or dinosaurs, and should play with war toys and construction kits" (Ehrnberger, Rasanen and Ilstedt, 2012, Visualising Gender Norms in Design, International Journal of Design). When branding things for children, the separation between girl and boy is extremely common; using language like 'action', which has male connotations, or 'princess', which has female connotations, appeals to the consumer because these are relatable words to them and to their children. In modern society most people find it difficult not to identify blue with boys and pink with girls, especially for newborns. If you were to walk into any department store, toy store or other store that caters to children, you would see the separation between genders, whether in clothes, toys or anything in between. The separation is made obvious through the colour branding used. On the girls' side, pink, yellow and lilac are used, soft, bright, happy colours applied to everything from toy babies and dolls to hats and scarves. Conversely, on the boys' side, blue, green and black, bold, dark, more primary colours, are used for everything from trucks to a pair of trousers.

Some companies have begun to notice how detrimental this separation is becoming and how it could hold back the advancement and opening up of our society, one example being the John Lewis Partnership.

John Lewis is a large department store chain that has been in business for well over a century. In 2017 it decided to scrap the separate girls' and boys' sections in its clothing range and rename the range 'Childs wear', a gender-neutral name, allowing it to design clothing that children can wear without being told 'no, that is a boys' top, you can't wear that because you're a girl', or vice versa. Caroline Bettis, head of children's wear at John Lewis, said: "We do not want to reinforce gender stereotypes within our John Lewis collections and instead want to provide greater choice and variety to our customers, so that the parent or child can choose what they would like to wear". Possibly the only issue with this stance is the price point: John Lewis is known as a higher-priced high street store, which means it isn't accessible for everyone. The campaign group Let Clothes Be Clothes commented: "Higher-end, independent clothing retailers have been more pro-active at creating gender-neutral collections, but we hope unisex ranges will filter down to all price points. We still see many of the supermarkets, for example, using stereotypical slogans on their clothing" (http://www.telegraph.co.uk/news/2017/09/02/john-lewis-removes-boys-girls-labels-childrens-clothes/).

Having a very well-known brand make this move should reinforce, encourage and inspire others to join the development. The change is a bold use of product language: it applies not to one specific product but to advertising and marketing as well, amounting to a rebrand of the whole range, and by not using gender-specific words it takes away the automatic stereotypes that come with buying anything for children.

Equality is the state of being equal, be it in status, rights or opportunities, so when it comes to design why does this attribute get forgotten? This isn't a feminist rant: gender equality affects both males and females in the design world, and when designing, everything should be equal and fair to both sexes. "Gender equality and equity in design is often highlighted, but it often results in producing designs that highlight the differences between men and women, although both the needs and characteristics vary more between individuals than between genders" (Hyde, 2005). Hyde's point is still contemporary and relevant: having gender equality in design is very important, but gender isn't the sole issue, because even if you are female you might not relate to the clothes designed specifically for your sex. Design is about making and creating something for someone, not just for a gender. "Post-feminism argues that in an increasingly fragmented and diverse world, defining one's identity as male or female is irrelevant, and can be detrimental" (https://www.cl.cam.ac.uk/events/experiencingcriticaltheory/Satchell-WomenArePeople.pdf).

Many up-and-coming independent brands and companies have been launching unisex clothing lines for a number of years, and most were pushing the movement well before gender equality in design became a mainstream media issue. One company pushing back against gender norms is Toogood London; another is GFW, Gender Free World. Gender Free World was created by a group of people who think on the same wavelength when it comes to gender equality. Their mission statement sets this out as a core ethos (and, incidentally, the phraseology is arguably an influence on John Lewis): "GFW Clothing was founded in 2015 (part of Gender Free World Ltd) by a consortium of like-minded individuals who passionately believe that what we have in our pants has disproportionately restricted the access to choice of clothing on the high street and online" (https://www.genderfreeworld.com/pages/about-g). Lisa Honan is the co-founder of GFW; her main reason for starting such a company was 'sheer frustration' at the lack of options on the market for her taste and style. She had shopped in both male and female departments but never found anything that fitted, especially when she went for a men's piece of clothing. In an interview with Saner, Honan commented that men's shirts didn't fit her because she has a woman's body, and it got her thinking: 'why is there a man's aisle and a woman's aisle, and why do you have to make that choice?'. She saw that you can hardly make a purchase without being forced to define your own gender, which reinforces the separation between genders in fashion. If she feels this way, many others must too, and they do, or there wouldn't be such a big potential business opportunity in it.

In my own design practice, Communication Design, gender plays a huge role, from colour choices to the typefaces used. Most of the work communication designers create and produce either represents a brand or actually brands a company, so when choosing options, potential gender stereotyping should come into consideration. The points discussed above, on the gender system, product language, gender norms and equality and equity in design, serve as a caution to graphic designers not to fall into these pitfalls when designing.

Designing doesn't mean simply designing for male or female; designing means creating and producing 'something' for 'someone', no matter their identified or chosen gender. If a company produces products targeted specifically at men, and after a robust examination of the design concept I felt that using blue would strengthen the brand and its recognition with the target demographic, then blue would be used; in just the same way, if pink works for the customer, then, put simply, it works.

To conclude, exploring the key points of gender in the design world only showcases how many issues there are.


The stigma surrounding mental illness

Mental illness is defined as a health problem resulting from complex interactions between an individual's mind, body and environment which can significantly affect their behavior, actions and thought processes. A variety of mental illnesses exist, impacting the body and mind differently and affecting the individual's mental, social and physical wellbeing to varying degrees. A range of psychological treatments have been developed to assist people living with mental illness; however, social stigma can prevent individuals from successfully engaging with these treatments. Social or public stigma is characterized by discriminatory behavior and prejudicial attitudes towards people with mental health problems resulting from the psychiatric label they possess (Link, Cullen, Struening & Shrout, 1989). The stigma surrounding being labelled with a mental illness causes individuals to hesitate to seek help and to resist treatment options. Stigma and its effects can vary depending on demographic factors including age, gender, occupation and community. Many strategies exist to attempt to reduce stigma levels, focusing on educating people and changing their attitudes towards mental health.

Prejudice, discrimination and ignorance surrounding mental illnesses result in a public stigma which has a variety of negative social effects on individuals with mental health problems (Thornicroft et al., 2007). An understanding of how stigma forms can be gained through the Attribution Model, which identifies four steps involved in the formation of a stigma (Link & Phelan, 2001). The first step is 'labelling', whereby key traits are recognized as portraying a significant difference. The next step is 'stereotyping', whereby these differences are defined as undesirable characteristics, followed by 'separating', which makes a distinction between 'normal' people and the stereotyped group. Stereotypes surrounding mental illness have been developing for centuries, with early beliefs holding that individuals suffering from mental health problems were possessed by demons or spirits. 'Explanations' such as these promoted discrimination within the community, preventing individuals from admitting any mental health problems for fear of retribution (Swanson, Holzer, Ganju & Jono, 1990). The final step in the Attribution Model described by Link and Phelan is 'status loss', which leads to the devaluing and rejection of individuals in the labelled group (Link & Phelan, 2001). An individual's desire to avoid the implications of public stigma causes them to avoid or drop out of treatment for fear of being associated with negative stereotypes (Corrigan, Druss and Perlick, 2001). One of the main stereotypes surrounding mental illness, especially depression and Post Traumatic Stress Disorder, is that people with these illnesses are dangerous and unpredictable (Wang & Lai, 2008). Wang and Lai carried out a survey in which 45% of participants considered people with depression dangerous; these results may be subject to some reporting bias, yet a general inference can be made. Another survey found that a large proportion of people also confirmed that they were less likely to employ someone with mental health problems (Reavley & Jorm, 2011). This study highlights how public stigma can affect employment opportunities, consequently creating a greater barrier for anyone who would benefit from seeking treatment.

Certain types of stigma are unique to, and consequently more severe for, certain groups within society. Approximately 22 soldiers or veterans are reported to die by suicide every day in the United States, linked to Post Traumatic Stress Disorder (PTSD) and depression. A survey of soldiers found that, of all those who met the criteria for a mental illness, only 38% would be interested in receiving help and only 23-30% actually ended up receiving professional help (Hoge et al., 2004). There is an enormous stigma surrounding mental illness within the military, due to its high value placed on mental fortitude, strength, endurance and self-sufficiency (Staff, 2004). A soldier who admits to having mental health problems is deemed not to adhere to these values and thus appears weak or dependent, placing greater pressure on the individual to deny or hide any mental illness. Another contributor to soldiers avoiding treatment is a fear of social exclusion, as it is common in military culture for some personnel to socially distance themselves from soldiers with mental health problems (Britt et al., 2007). This exclusion stems from the stereotype that mental health problems make a soldier unreliable, dangerous and unstable. Surprisingly, individuals with mental health problems who seek treatment are deemed more emotionally unstable than those who do not, so the stigma surrounding therapy creates a barrier to starting or continuing treatment (Porath, 2002). Furthermore, soldiers face the fear that seeking treatment will negatively affect their career, both in and out of the military, with 46 percent of employers considering PTSD an obstacle when hiring veterans in a 2010 survey (Ousley, 2012). The stigma associated with mental illness in the military is extremely detrimental to soldiers' wellbeing, as it prevents them from seeking or successfully engaging in treatment for mental illnesses, which can have tragic consequences.

Adolescents and young adults with mental illness have the lowest rates of seeking professional help and treatment, despite the high occurrence of mental health problems (Rickwood, Deane & Wilson, 2007). Adolescents' lack of willingness to seek help and treatment for mental health problems is catalyzed by the anticipation of negative responses from family, friends and school staff (Chandra & Minkovitz, 2006). A Queensland study of people aged 15–24 years showed that 39% of the males and 22% of the females reported that they would not request help for emotional or distressing problems (Donald, Dower, Lucke & Raphael, 2000). A 2010 survey of adolescents with mental health problems found that 46% described experiencing feelings of distrust, avoidance, pity and prejudice from family members, showing how negative family responses and attitudes create a significant barrier to seeking help (Moses, 2010). Similarly, a study on adolescent depression noted that teenagers who felt more stigmatized, particularly within the family, were less likely to seek treatment (Meredith et al., 2009). Furthermore, adolescents with unsupportive parents may struggle to pay for treatment and transportation, further preventing successful treatment of the illness. Unfortunately, the generation of stigma is not unique to family members; adolescents also report having felt discriminated against by peers and even school staff (Moses, 2010). The first step towards seeking help and engaging in treatment for mental illness is to acknowledge that there is a problem and to be comfortable enough to disclose this information to another person (Rickwood et al., 2005). However, in another 2010 study of adolescents, many expressed fear of being bullied by peers, subsequently leading to secrecy and shame (Kranke et al., 2010). The role of public stigma in generating this shame and denial is significant and can thus be defined as a factor preventing adolescents from seeking support for their mental health problems. A 2001 study testing the relationship between adherence to medication (in this case, antidepressants) and perceived stigma levels determined that individuals who accepted the antidepressants had lower perceived stigma levels (Sirey et al., 2001). This empirical data illustrates the correlation between public stigma levels and an individual's engagement in treatment, suggesting that stigma remains a barrier to treatment. Public stigma can therefore be identified as a causative factor in the majority of adolescents not seeking support or treatment for their mental health problems.

One of the main strategies used by society to reduce the public stigma surrounding mental illness is education. Educating people about the common misconceptions of mental health challenges inaccurate stereotypes and substitutes them with factual information (Corrigan et al., 2012). There is substantial evidence that people who have more information about mental health problems are less stigmatizing than people who are misinformed about them (Corrigan & Penn, 1999). The low cost and far-reaching nature are beneficial aspects of the educational approach. Educational approaches are often aimed at adolescents, as it is believed that by educating children about mental illness, stigma can be prevented from emerging in adulthood (Corrigan et al., 2012). A 2001 study testing the effect of education on 152 students found that levels of stigmatization were lessened following the implementation of the strategy (Corrigan et al., 2001). However, it was also determined that combining a contact-based approach with the educational strategy would yield the highest levels of stigma reduction. Studies have also shown that a short educational program can be effective at reducing individuals' negative attitudes toward mental illness and increasing their knowledge of the issue (Corrigan & O'Shaughnessy, 2007). The effect of an educational strategy varies depending on what type of information is communicated. The information provided should deliver realistic descriptions of mental health problems and their causes as well as emphasizing the benefits of treatment. By delivering accurate information, the negative stereotypes surrounding mental illness can be decreased and the public's views on the controllability and treatment of psychological problems can be altered (Britt et al., 2007). Educational approaches mainly focus on improving knowledge and attitudes surrounding mental illness and do not focus directly on changing behavior; therefore, a clear link cannot be made as to whether educating people actually reduces discrimination. Although this remains a major limitation, educating people at an early age can help ensure that discrimination and stigmatization decrease in the future. Reducing the negative attitudes surrounding mental illness can encourage those suffering from mental health problems to seek help. Providing individuals with correct information regarding the mechanisms and benefits of treatment, such as psychotherapy or drugs like antidepressants, increases their own mental health literacy and therefore increases the likelihood of seeking treatment (Jorm and Korten, 1997). People who are educated about mental health problems are less likely to believe or generate stigma surrounding mental illness, and so contribute to reducing stigma, which in turn increases levels of successful treatment for themselves and others.

The public stigma surrounding mental health problems is defined by negative attitudes, prejudice and discrimination. This negativity in society is very debilitating for any individual suffering from mental illness and creates a barrier to seeking out help and engaging in successful treatment. The negative consequences of public stigma for individuals include being excluded, not being considered for a job, and friends and family becoming socially distant. By educating people about the causes, symptoms and treatment of mental illnesses, stigma can be reduced, as misinformation is usually a key factor in the promotion of harmful stereotypes. An individual is more likely to engage in successful treatment if they accept their illness and if stigma is reduced.


Frederick Douglass, Malcolm X and Ida Wells

Civil rights are "the rights to full legal, social, and economic equality". Following the American Civil War, slavery was officially abolished in the United States (US) on December 6th, 1865. The Fourteenth and Fifteenth Amendments established a legal framework for political equality for African Americans; many thought that this would lead to equality between whites and blacks, however this was not the case. Despite slavery's abolition, Jim Crow racial segregation in the South meant that blacks were denied political rights and freedoms and continued to live in poverty and inequality. It took nearly 100 years of campaigning until the Civil Rights and Voting Rights Acts were passed, making it illegal to discriminate based on race, colour, religion, sex or national origin and ensuring minority voting rights. Martin Luther King was prominent in the Modern Civil Rights Movement (CRM), playing a key role in legislative and social change. His assassination in 1968 marked the end of a distinguished life helping millions of African Americans across the US. The contributions of black activists including the politician Frederick Douglass, the militant Malcolm X and the journalist Ida Wells throughout the period will be examined from political, social and economic perspectives. When comparing their significance to that of King, consideration must be given to the time in which these activists were operating and to prevailing social attitudes. Although King was undeniably significant, it was the combined efforts of all the black activists and the mass protest movement in the mid-20th century that eventually led to African Americans gaining civil rights.

The significance of King's role is explored through Clayborne Carson's 'The Papers of Martin Luther King' (Appendix 1). Carson, a historian at Stanford University, suggests that "the black movement would probably have achieved its major legislative victory without King's leadership". Carson does not believe King was pivotal in gaining civil rights, but that he quickened the process. The mass public support shown in the March on Washington, 1963, suggests that Carson is correct in arguing that the movement would have continued its course without King; however, it was King's oratory skill in his 'I Have a Dream' speech that was most significant. Carson suggests key events would still have taken place without King: "King did not initiate" the Montgomery bus boycott, rather Rosa Parks did. His analysis of the idea of a 'mass movement' furthers his argument that King's role was less significant. Carson suggests that 'mass activism' in the South resulted from socio-political forces rather than 'the actions of a single leader'; King's leadership was not vital to the movement gaining support, and legislative change would have occurred regardless. The source's tone is critical of King's significance but passive in its dismissal of his role; phrases such as "without King" diminish him in a less aggressive manner. Carson, a civil rights historian with a PhD from UCLA, has written books and documentaries including 'Eyes on the Prize', and so is qualified to judge. The source was published in 1992 in conjunction with King's wife, Coretta, who continued the movement's work after King's assassination and extended it to include women's rights and LGBT rights. Although this association might make him subjective, Carson still criticises King's role, suggesting he presents a balanced view. Carson produced his work two decades after the movement and well before the 'Black Lives Matter' marches of the 21st century, and so was less politically motivated in his interpretation. The purpose of his work was to edit and publish the papers of King on behalf of The King Institute, to show King's life and the CRM he inspired. Overall, Carson argues that King had significance in quickening the process of gaining civil rights, but he believes that without his leadership the campaigning would have taken a similar course and that US mass activism was the main driving force.

In his book 'Martin Luther King Jr.' (Appendix 2), the historian Peter Ling argues, like Carson, that King was not indispensable to the movement, but differs in suggesting that it was other activists, rather than mass activism, who brought success. Ling believes that 'without the activities of the movement' King might just have been another 'Baptist preacher who spoke well.' It can be inferred that Ling believes King was not vital to the CRM and was simply a good orator.

Ling's reference to the activist Ella Baker (1903-86), who 'complained that "the movement made Martin, not Martin the Movement"', suggests that King's political career was of more importance to him than the goal of civil rights. Baker told King she disapproved of his being hero-worshipped, and others argued that he was 'taking too many bows and enjoying them'. Baker promoted activists working together, as seen through her influence in the Student Nonviolent Coordinating Committee (SNCC). Clearly many believed King was not the only individual to have an impact on the movement, which further highlights Ling's argument that multiple activists were significant.

Finally, Ling argues that 'others besides King set the pace for the Civil Rights Movement', which explicitly shows how other activists working for the movement were the true heroes: they orchestrated events and activities, yet it was King who benefitted. However, King himself suggested that he was willing to use successful tactics suggested by others. The work of activists such as Philip Randolph, who organised the 1963 March, highlights how such individuals played a greater role in moving the CRM forward than King. The tone attacks King, using words such as 'criticisms' to diminish his role, while Ling says that he has 'sympathy' for Miss Baker, showing his positive tone towards other activists.

Ling was born in the UK, studied History at Royal Holloway College and took an MA in American Studies at the Institute of United States Studies, London. This gives Ling an international perspective, making him less subjective as he has no political motivations; nevertheless, it also limits his interpretation in that he has no first-hand knowledge of civil rights in the US. The book was published in 2002, which gives Ling hindsight, making his judgment more accurate and less subjective as he is no longer affected by King's influence, and his knowledge of American history and the CRM adds to its accuracy. Unlike Carson, who was a black activist and attended the 1963 March, Ling, who is white, was born in 1956 and was not involved with the CRM, so his interpretation is less informed by direct experience. A further limitation is his selectivity: he gives no attention to the successes of King, including his inspiring 'I Have a Dream' speech. As a result, it is not a balanced interpretation and its value is limited.

Overall, although weaker than Carson's interpretation, Ling does give an argument that is of value when assessing King's significance. Both revisionists, the two historians agree that King was not the most significant factor in gaining civil rights, but they differ on who or what they see as more important: Carson argues that mass activism was vital to success, whereas Ling believes it was other activists.

A popular pastor in the Baptist Church, King was the leader of the CRM when it gained black rights successes in the 1960s. He demonstrated the power of the church and the NAACP in the pursuit of civil rights. His oratory skills ensured many blacks and whites attended the protests and increased support, and he understood the power of the media in getting his message to a wide audience and in putting pressure on the US government. The Birmingham campaign of 1963, where peaceful protestors including children were violently attacked by police, and the inspirational 'Letter from Birmingham Jail' that King wrote were heavily publicised, and US society gradually sympathised with the black 'victims'. Winning the Nobel Peace Prize gained the movement further international recognition. King's leadership was instrumental in the political achievements of the CRM, inspiring the grassroots activism needed to apply enough pressure on government, which behind-the-scenes activists like Baker had worked tirelessly to build. Nevertheless, there had been a generation of activists who played their parts, often through the church, publicising the movement, achieving early legislative victories and helping to kick-start the modern CRM and the idea of nonviolent civil disobedience. King's significance is that he was the figurehead of the movement at the time when civil rights were eventually granted.

The pioneering activist Frederick Douglass (1818-95) had political significance to the CRM, holding federal positions which enabled him to influence government and Presidents throughout the Reconstruction era; he is often called the 'father of the civil rights movement'. Douglass held several prominent roles, including US Marshal for DC. He was the first black American to hold high office in government and, in 1872, the first African American nominated for US Vice President, which was particularly significant as blacks' involvement in politics was severely restricted at the time. Like King he was a brilliant orator, lecturing on civil rights in the US and abroad. Compared to King, Douglass was significant in the CRM: he promoted equality for blacks and whites, and although, unlike King, he did not ultimately achieve black civil rights, this was because he was confined by the era in which he lived.

The contribution of W.E.B. Du Bois (1868-1963) was significant, as he laid the foundations for future black activists, including King, to build on. In 1909 he helped establish the National Association for the Advancement of Colored People (NAACP), the most important 20th-century black organisation other than the church. King became a member of the NAACP and used it to organise the bus boycott and other mass protests. As a result, the importance of Du Bois to the CRM is that King's success depended on the NAACP; Du Bois is therefore of similar significance, if not more, in pursuing black civil rights.

Ray Stannard Baker's 1908 article for The American Magazine speaks of Du Bois' enthusiastic attitude to the CRM, his intelligence and his knowledge of African Americans (Appendix 3). The quotation of Du Bois at the end of the extract reads "Do not submit! agitate, object, fight," showing he was not passive but preaching messages of rebellion. The article describes him with vocabulary such as "critical" and "impatient", showing his radical, passionate side. Baker also sets out Du Bois' contrasting opinions compared with Booker T. Washington, one of his contemporaries among black activists; this is evident when it says "his answer was the exact reverse of Washington's", demonstrating how he differed from the passive, 'education for all' Washington. Du Bois valued education, but believed in educating an elite few, the 'talented tenth', who could strive for rapid political change. The tone is positive towards Du Bois, praising him as a ferocious character dedicated to achieving civil rights; this dedicated, praising tone is developed through phrases such as "his struggles and his aspirations". The American Magazine, founded in 1906, was an investigative US publication, and many of its contributors were 'muckraking' journalists, reformists who attacked societal views and traditions. As a result, the magazine would be subjective, favouring the radical Du Bois, challenging the Jim Crow South and appealing to its radical target audience. The purpose of the source was to confront racism in the US, and so it would be politically motivated, making it subjective regarding civil rights. However, some evidence suggests that Du Bois was not radical: his Paris Exposition of 1900 showed the world real African Americans. Socially he made a major contribution to black pride, contributing to the black unity felt during the Harlem Renaissance. The Renaissance popularised black culture and so was a turning point in the movement; in the years after, the CRM grew in popularity and became a national issue. Finally, the source refers to his intelligence and educational prowess: he carried out economic studies for the US Government and was educated at Harvard and abroad. It can therefore be inferred that Du Bois rose to prominence and made a significant contribution to the movement due to his intelligence and his understanding of US society and African American culture. As one of the founders of the NAACP, his significance in attracting grassroots activists and uniting black people was vital. The NAACP leader Roy Wilkins, speaking at the March on Washington the day after Du Bois' death, highlighted his contribution, saying "his was the voice that was calling you to gather here today in this cause", suggesting that Du Bois had started the process which led to the March.

Rosa Parks (1913-2005) and Charles Houston (1895-1950) were NAACP activists who benefitted from the work of Du Bois and achieved significant political success in the CRM. Parks, the "Mother of the Freedom Movement", was the spark that ignited the modern CRM by protesting on a segregated bus: following her refusal to give up her seat, she was arrested, and Parks, King and NAACP members staged a year-long bus boycott in Montgomery. Had it not been for Parks, King may never have had the opportunity to rise to prominence or to gain mass support for the movement, so her activism was key in shaping King. The lawyer Houston helped defend black Americans, breaking down the deep-rooted discriminatory segregation laws in the South. It was his ground-breaking use of sociological theories that formed the basis of Brown v. Board of Education (1954), which ended segregation in schools. Although less prominent than King, Houston's work was significant in reducing discrimination against blacks, gaining him the nickname 'The man who killed Jim Crow'. Nonetheless, had Du Bois' NAACP not existed, Parks and Houston would never have had an organisation to support them in their fight, and likewise King would never have gained mass support for civil rights.

The trade unionist Philip Randolph (1890-1979) brought about important political changes. His pioneering use of nonviolent confrontation had a significant impact on the CRM and was widely used throughout the 1950s and 60s. Randolph became a prominent civil rights spokesman after organising the Brotherhood of Sleeping Car Porters in 1925, the first black-majority union. Mass unemployment after the US Depression led to civil rights becoming a political issue; US trade unions supported equal rights and black membership grew. Randolph was striving for political change that would bring equality. Aware of his influence, in 1941 he threatened a protest march which pressured President Roosevelt into issuing Executive Order 8802, an important early employment civil rights victory. There was then a shift in the direction of the movement towards the military, because after the Second World War black soldiers felt disenfranchised and became the 'foot soldiers of the CRM', fighting for equality in these mass protests. Randolph led peaceful protests which resulted in President Truman issuing Executive Order 9981, desegregating the Armed Forces, showing his key political significance; significantly, this legislation was a catalyst leading to further desegregation laws. His contribution to the CRM, support of King's leadership and masterminding of the 1963 March made his significance equal to King's.

King realised that US society needed to change and, inspired by Gandhi, he too used non-violent mass protest to bring about change, as seen in campaigns such as the Greensboro sit-ins to desegregate lunch counters. Similarly, the activist Booker T. Washington (1856-1915) significantly improved the lives of thousands of southern blacks who were poorly educated and trapped in poverty following Reconstruction, through his pioneering work in black education; he founded the Tuskegee Institute. In his book 'Up from Slavery: An Autobiography' (Appendix 4) he suggests that gaining civil rights would be difficult and slow, but that all blacks should work on improving themselves through education and hard work to peacefully push the movement forward. He says that "the according of the full exercise of political rights" will not be an "overnight gourdvine affair" and that a black man should "deport himself modestly in regard of political claim". From this it can be inferred that Washington wanted peaceful protest and acknowledged the time it would take to gain equality, making his philosophy like King's. Washington's belief in using education to gain the skills to improve lives and fight for equality is evident through the Tuskegee Institute, which educated 2000 blacks a year.

The tone of the source is peaceful, calling for justice in the South. Washington uses words such as "modestly" in an appeal for peace and "exact justice" to show how he believes in equal political rights for all. The reliability of the source is mixed: Washington is subjective, as he wants his autobiography to be read, understood and supported. The intended audience would have been anyone in the US, particularly blacks, whom Washington wanted to inspire to protest, and white politicians who could advance civil rights. The source is accurate in that it was written in 1901, during the Jim Crow era in the South. Washington would have been politically motivated in his autobiography, demanding legislative change to give blacks civil rights. There would also have been an educational factor contributing to his writing, as his Tuskegee Institute and educational philosophy had a deep impact on his autobiography.

The source shows how and why the unequal South should no longer be segregated. Washington was undoubtedly significant: as his reputation grew he became an important public speaker and is considered to have been a leading spokesman for black people and their issues, like King. An excellent role model, a former slave who influenced statesmen, he was the first black American to dine with the President (Roosevelt) at the White House, showing blacks they could achieve anything. The activist Du Bois described him as "the one recognised spokesman of his 10 million fellows … the most striking thing in the history of the American Negro". Although not as decisive in gaining civil rights as King, Washington was important in preparing blacks for urban and working life and in empowering the next generation of activists.

Inspired by Washington, the charismatic Jamaican radical activist Marcus Garvey (1880-1940) arrived in the US in 1916. Garvey had a social significance to the movement, striving to better the lives of US blacks. He rose to prominence during the 'Great Migration', when poor southern blacks were moving to the industrial North, turning Southern race problems into national ones. He founded the Universal Negro Improvement Association (UNIA), which had over 2,000,000 members in 1920. He appealed to discontented First World War black soldiers who had returned home to violent racial discrimination; the First World War was paramount in enabling Garvey to gain the vast support he did in the 1920s. Garvey published a newspaper, the Negro World, which spread his ideas about education and Pan-Africanism, the political union of all people of African descent. Like King, Garvey gained a greater audience for the CRM: in 1920 he led an international convention in Liberty Hall and a 50,000-strong parade through Harlem. Garvey inspired later activists such as King.


Reflective essay on use of learning theories in the classroom

Over recent years teaching theories have become more common in the classroom, all in the hope of supporting students and being able to further their knowledge by understanding their abilities and what they need to develop. As a teacher it is important to embed teaching and learning theories in the classroom, so that we can teach students according to their individual needs.

Throughout my research I will be looking into the key differences between two theories used in classrooms today. I will also be critically analysing the role of the teacher in the lifelong learning sector, by examining the professional and legislative frameworks, as well as looking for a deeper understanding of classroom management: why it is used and how to manage different classroom environments, such as managing inclusion and how it is supported through different methods.

Overall, I will be linking this to my own teaching at A Mind Apart (A Mind Apart, 2019). Furthermore, I will gain an understanding of interaction within the classroom and why communication between fellow teachers and students is important.

The role of the teacher is traditionally seen as being at the forefront of knowledge. This suggests that the role of the teacher is to pass their knowledge on to their students, known as a 'chalk and talk' approach, although this approach is outdated and there are various ways we now teach in the classroom. Walker believes that 'the modern teacher is facilitator: a person who assists students to learn for themselves' (Reece & Walker, 2002). I for one cannot say I fully believe in this approach, as all students have individual learning needs and some may need more help than others. As the teacher, it is important to know the full capability of your learners, so that lessons can be structured to the learners' needs. It is important for lessons to involve active learning and discussions, which help keep students engaged and motivated during class. Furthermore, it is important not only to know what you want the students to be learning, but also to know, as the teacher, what you are teaching; it is important to be prepared and fully involved in your own lesson before you go into any class. As a teacher I make my students my priority, so I leave any personal issues outside the door in order to give my students the best learning environment they could possibly have. It is also important to keep updated on your subject specialism; I double-check my knowledge of my subject regularly, and I find that by following this structure my lessons normally run at a smooth pace.

Taking into consideration that the students I teach are vulnerable, there may be minor interruptions. It is not only important that you as the teacher leave your issues at the door, but also that the room is free from distractions; many young adults have situations which they find hard to deal with, which means you as the teacher are there not only to educate but to make the environment safe and relaxing for your students to enjoy learning. As teachers we not only have the responsibility of making sure that teaching takes place, but also the responsibilities of exams, qualifications and Ofsted; and as a teacher in the lifelong learning sector it is also vital that you evaluate not only your learners' knowledge but yourself as a teacher, so that you are able to improve your teaching strategies and keep up to date.

When assessing yourself and your students it is important not to wait until the end of a term, but to evaluate throughout the whole term. Small assessments are a good way of doing this; it doesn't always have to be a paper examination. You can equally do a quiz, ask questions, use various fun games, or even use online games such as Kahoot to help your students retain their knowledge. This will not only help you as a teacher understand your students' abilities, but it will also help your students know what they need to work on for next term.

Alongside the roles and responsibilities of being a teacher in the lifelong learning sector already listed, Ann Gravells explains that,

‘Your main role as a teacher should be to teach your students in a way that actively involves and engages your students during every session’ (Gravells, 2011, p.9.)

Gravells' passion is helping new teachers gain the knowledge and information they need to become successful in the lifelong learning sector, which she has done by writing various textbooks on the sector. In her book 'Preparing to Teach in the Lifelong Learning Sector' (Gravells, 2011) she sets out the importance of thirteen legislative acts. Although I find each of them equally important, I am going to mention the ones I am most likely to use during my teacher training with A Mind Apart.

Safeguarding Vulnerable Groups Act (2006) – Working with young vulnerable adults, I find this is the act I am most likely to use during my time with A Mind Apart. In summary, the Act explains the following: 'The ISA will make all decisions about who should be barred from working with children and vulnerable adults.' (Southglos.gov.uk, 2019)
The Equality Act (2010) – As I will be working with people of different sexes, races and disabilities in any teaching job I encounter, I believe the Equality Act (2010) is fundamental to mention. The Equality Act 2010 covers discrimination under one piece of legislation.
Code of Professional Practice (2008) – This code covers all aspects of the activities we as teachers in the lifelong learning sector may encounter. It is based around seven behaviours: professional practice, professional integrity, respect, reasonable care, criminal offence disclosure, and responsibility during institute investigations.

(Gravells, 2011)

Although all the acts are equally important, those are the few I would find myself using regularly. I have listed the others below:

Children Act (2004)
Copyright, Designs and Patents Act (1988)
Data Protection Act (1998)
Education and Skills Act (2008)
Freedom of Information Act (2000)
Health and Safety at Work Act (1974)
Human Rights Act (1998)
Protection of Children Act (POCA) (1999)
The Further Education Teachers' Qualifications Regulations (2007)

(Gravells, 2011)

Teaching theories are much more common in classrooms today, and there are three main teaching theories which we as teachers are known for using in the classroom daily. Experiments show that the following theories work best: behaviourism, cognitive constructivism and social constructivism. Taking these into consideration, I will compare Skinner's behaviourist theory with Maslow's 'Hierarchy of Needs' (Maslow, 1987), first introduced in 1954, and consider how I could use these theories in my teaching as a drama teacher in the lifelong learning sector.

Firstly, behaviourism is mostly described as the teacher questioning and the student responding in the way you want them to. Behaviourism is a theory which, used to its full advantage, can in a way take control of how the student acts and behaves. Keith Pritchard (Language and Learning, 2019) describes behaviourism as 'A theory of learning focusing on observable behaviours and discounting any mental activity. Learning is defined simply as the acquisition of a new behaviour.' (E-Learning and the Science of Instruction, 2019).

An example of how behaviourism works is best demonstrated through the work of Ivan Pavlov (Encyclopaedia Britannica, 2019). Pavlov was a physiologist at the start of the twentieth century who used a method called 'conditioning' (Encyclopaedia Britannica, 2019), which is much like the behaviourism theory. During his experiment, Pavlov 'conditioned' dogs to salivate when they heard a bell ring: as soon as the dogs heard the bell, they associated it with being fed. As a result, the dogs behaved exactly how Pavlov wanted them to behave, and so had successfully been 'conditioned' (Encyclopaedia Britannica, 2019).

During Pavlov's conditioning experiment there were four main stages in the process of classical conditioning:

Acquisition, which is the initial learning;
Extinction, meaning the dogs in Pavlov's experiment may stop responding if no food is presented to them;
Generalisation, where after learning a response the dog may respond to other stimuli with no further training (for example, if a child falls off a bike and injures themselves, they may be frightened to get back on the bike again); and lastly,
Discrimination, which is the opposite of generalisation: the dog will not respond in the same way to another stimulus as it did to the first one.

Pritchard states 'It involves reinforcing a behaviour by rewarding it', which is what Pavlov's dog experiment does. Although rewarding behaviour can be positive, reinforcement can also be negative: bad behaviour can be discouraged by punishment. The key aspects of conditioning are as follows: reinforcement, positive reinforcement, negative reinforcement, and shaping. (Encyclopaedia Britannica, 2019)

Behaviourism is one of the learning theories I use in my teaching today. Working at A Mind Apart (A Mind Apart, 2019), I work with challenging young people; the organisation is a performing arts foundation targeted especially at vulnerable and challenging young people, to help better their lives, so when I use the behaviourism theory it genuinely inspires the students to do better. With respect to the principle of stimulus and response, behaviourism is driven by the teacher, who is responsible for how the student behaves and how that behaviour is achieved. The theory emerged in the early twentieth century and concentrated on how individuals behave. In my work at A Mind Apart as a trainee performing arts teacher, I can identify with behaviourism greatly: every Thursday, when my two-hour class is finished, I take five minutes out of the lesson to award a 'Star of the Week'. It is an excellent method of encouraging students to carry on behaving as they have been and of motivating them to strive towards something in the future. Furthermore, I have discovered that this theory can work well in any subject specialism, not just performing arts. The behaviourism theory is straightforward, as it depends only on observable behaviour and describes several universal laws of behaviour, and its positive and negative reinforcement strategies can be extremely effective. The students we teach at A Mind Apart often come to us with mental health issues, which is why many of them find it hard to focus or even to learn in a school environment; we are there to give them an inclusive learning environment and to use the time we have with them so they can move forward at their own pace, improve their academic and social skills, and, when they leave us to move on to college or jobs, meet new people and apply the useful knowledge they have gained through the behaviourism teaching approach. Despite the obstacles some of them face in their lives, and although it is not always easy to shape someone into thinking or behaving the way you want them to, with time and persistence I have found that this theory can work. It is known that…

‘Positive reinforcement or rewards can include verbal feedback such as ‘That’s great, you’ve produced that document without any errors’ or ‘You’re certainly getting on well with that task’ through to more tangible rewards such as a certificate at the end’…

Gagne (Mindtools.com, 2019) was an American educational psychologist best known for his nine levels of learning. Regarding Gagne's nine levels of learning (Mindtools.com, 2019), I have done some in-depth research into a couple of the levels so that I can understand them and how his theory links to behaviourism.

Create an attention-grabbing introduction.
Inform learner about the objectives.
Stimulate recall of prior knowledge.
Create goal-centred eLearning content.
Provide online guidance.
Practice makes perfect.
Offer timely feedback.
Assess early and often.
Enhance transfer of knowledge by tying it into real world situations and applications.

(Mindtools.com, 2019)

Informing the learner of the objectives is the level I can relate to the most during my lessons. I find it important in many ways that you as the teacher let your students know what they are going to be learning during that specific lesson; this will help them have a better understanding throughout the lesson and engage them from the very start. Linking it to behaviourism, during my lessons I tell my students what I want from them that lesson and what I expect them, with their individual needs, to be learning or to have learnt by the end of it. If I believe learning has taken place during my lesson, I reward them with a game of their choice at the end. In their minds they understand they must do as the teacher asks, or the reward of playing a game at the end of the lesson will be forfeited. As Pavlov's dog experiment shows (E-Learning and the Science of Instruction, 2019), this theory does work, although it can take a lot of work. I have built a great relationship with my students, and most of the time they are willing to work to the best of their ability.

Although Skinner's (E-Learning and the Science of Instruction, 2019) behaviourist theory is based around manipulating behaviour, Maslow's 'Hierarchy of Needs' (Verywell Mind, 2019) holds that behaviour and the way people act are based upon childhood events; therefore it is not always easy to shape people's thinking, as they may have had a completely different upbringing which determines how they act. Maslow (Verywell Mind, 2019) felt that if you remove the obstacles that stop a person from achieving, then they will have a better chance of achieving their goals, and he argued that there are five different needs which must be met for this to happen. The highest level of need is self-actualisation, which means the person must take full responsibility for themselves; Maslow believed that people can progress to the highest levels if they are in an educational environment which promotes growth. Below is the table of Maslow's 'Hierarchy of Needs' (Verywell Mind, 2019).

[Table: Maslow's Hierarchy of Needs – physiological needs; safety and security; belonging and recognition; self-esteem; self-actualisation]

In explanation, the table sets out your learners' needs at different levels during their time in your learning environment. All learners may be at different levels, but should be able to progress to the next one when they feel comfortable doing so. There may be knockbacks which your learners as individuals will face, but it is these needs that motivate learning, although you may find that not all learners want to progress through the levels at that moment in time; for example, if a learner is happy with the progress they have achieved so far and is content with life, they may want to stay at a certain level.

It is important to use the levels to encourage your learners by working up the table.

Stage 1 of the table is the physiological needs – are your learners comfortable in the environment you are providing? Are they hungry or thirsty? Your learners may even be tired; taking all these factors into consideration, any of them may stop learning taking place. Therefore, it is important to meet all your learners' physiological needs.

Moving up the table to safety and security – make your learners feel safe, in an environment where they can relax and feel at ease. Are your learners worried about anything in particular? If so, can you help them overcome their worries?

Recognition – do your learners feel like they are part of the group? It is important to help those who don't feel part of the group bond with others; help your learners belong and make them feel welcome. Once recognition is in place your learners will start to build their self-esteem: are they learning something useful? Although your subject specialism may be second to none, it is important that your passion and drive shine through your teaching. Overall this will result in the highest level, self-actualisation: are your learners achieving what they want to do? Make the sessions interesting and your learners will remember more about the subject in question. (Verywell Mind, 2019)

Furthermore, classroom management comes into force with any learning theory you use whilst teaching. Classroom management is made up of various techniques and skills that we as teachers utilise, and most of today's classroom management systems are highly effective as they increase student success. As a trainee teacher, I understand that classroom management can be difficult at times, so I am always researching different methods for managing my class. I don't believe this comes entirely from methods, though: if your pupils respect you as a teacher and understand what you expect of them whilst in your class, you should be able to manage the class fine. Relating this to my placement at A Mind Apart, my students know what I expect of them, and because of that my classroom management is normally good. Following this, there are a few classroom management techniques I tend to follow:

Demonstrating the behaviour you want to see – eye contact whilst talking, phones away in bags or coats, listening when being spoken to and being respectful of each other; these are all good codes of conduct to follow and are my main rules whilst in the classroom.
Celebrating hard work or achievements – when I think a student has done well, we as a group will celebrate their achievement, whether it be in education or outside it; a celebration always helps with classroom management.
Making your sessions engaging and motivating – this is something all of us trainee teachers find difficult in our first year. As I have found personally over the first couple of months, you have to get to know your learners, understand what they like to do, and learn which activities keep them engaged.
Building strong relationships – I believe having a good relationship with your students is one of the key factors in managing a classroom. It is important to build trust with your students, make them feel safe and let them know they are in a friendly environment.

When it comes to being in a classroom environment, not all students will adhere to this, and some may require a different kind of structure to feel included. A key example is students with physical disabilities: you may need to adjust the tables or even move them out of the way, or adjust the seating so a student can see more clearly. If a student has hearing problems, you might write more down on the board, or give them a sheet at the start of the lesson which tells them what you will be discussing and any further information they may need. Not only do you need to take physical disabilities into consideration, but it is also important to cater for those who have behavioural problems, adjusting the space to make your students feel safe whilst in your lesson.

Managing your class also means that sometimes you may have to adjust your teaching methods to suit everyone in your class and understand that it is important to incorporate cultural values. Whilst in the classroom, or even when giving out homework, you may need to take into consideration that some students, especially those with learning difficulties, may take longer to do the work or need additional help.

Conclusion

Research has given me a new insight into how many learning theories, teaching strategies and classroom management strategies there are; there are books and websites which help you achieve all the things you need to be able to do in your classroom. Looking back over this essay, I have focused on the two learning theories that I am most likely to use.

2019-1-7-1546860682

Synchronous and asynchronous remote learning during the Covid-19 pandemic

Student’s Motivation and Engagement

Motivation plays an important role in student engagement. Saeed and Zyngier (2012) contend that in order to assess student motivation, researchers should also examine engagement in and as part of learning. This suggests that there is a relationship between student motivation and engagement. In support of this relationship, Hufton, Elliot, and Illushin (2002) believe that high levels of engagement indicate high levels of motivation. In other words, when students' levels of motivation are high, their levels of engagement are also high.

Moreover, Dörnyei (2020) suggests that the concept of motivation is closely associated with engagement, and with this he asserted that motivation must be ensured in order to achieve student engagement. He further offered that any instructional design should aim to keep students engaged, regardless of the learning context, may it be traditional or e-learning. In addition, Lewis et al (2014) reveal that within the online educational environment, students can be motivated by delivering an engaging student-centered experience consistently.

In the context of the Student-Teacher Dialectical Framework embedded with Self-Determination Theory, Reeve (2012) reveals three newly discovered functions of student engagement. First, engagement bridges students' motivation to highly valued outcomes. Second, student engagement affects the future quality of the learning environment, especially the flow of instruction, the external events it involves, and the teacher's motivating style. Third, student engagement changes motivation, which means that engagement causes changes in motivation in the future. This highlights that student motivation is both a cause and a consequence. The assertion that engagement can cause changes in motivation rests on the idea that students can take actions to meet their own psychological needs and enhance the quality of their motivation. Further, Reeve (2012) asserts that students can be and are architects of their own motivation, at least to the extent that they can be architects of their own course-related behavioral, emotional, cognitive, and agentic engagement.

Synchronous and Asynchronous Learning

The COVID-19 pandemic brought great disruption to education systems around the world. Schools struggled with a situation that led to the cessation of classes for an extended period of time and with other restrictive measures that later impeded the continuation of face-to-face classes. In consequence, there has been a massive change in educational systems around the world, as educational institutions strive and put their best efforts into resolving the situation. Many schools addressed the risks and challenges of continuing education amidst the crisis by shifting conventional or traditional learning to distance learning. Distance learning is a form of education, supported by technology, that is conducted beyond physical space and time (Papadopulou, 2020). Distance learning is an online mode of education that provides opportunities for educational advancement and learning development among learners worldwide. In order to sustain the educational goals of our country, distance learning is a new way of providing quality education as far as possible in public and private institutions, especially for those pursuing higher education. The instructional delivery in distance education can be through a synchronous or asynchronous mode of learning, in which students can engage and continue to attain quality education despite the pandemic.

Based on the definition of the Easy LMS Company (2020), synchronous learning refers to a learning event in which a group of participants is engaged in learning at the same time (e.g., a Zoom meeting, web conference or real-time class), while asynchronous learning refers to the opposite, in which the instructor, the learner, and the other participants are not engaged in the learning process at the same time, so there is no real-time interaction with other people (e.g., pre-recorded discussions, self-paced learning, discussion boards). According to an article issued by the University of Waterloo (2020), synchronous learning is a form of learning delivered as a live presentation which allows students to ask questions, while asynchronous learning can be a recorded presentation that allows students time to reflect before asking questions. Synchronous learning is a typical meeting of students in a virtual setting with a class discussion in which everybody can participate actively. Asynchronous learning is the use of a learning platform or portal where teachers or instructors can post and update lessons or activities and students can work at their own pace. These types of class instruction are commonly observed at present, and students have their own preferences when it comes to what works best for them.

In comparing the two types of learning, it is valuable to know their advantages and disadvantages in order to see what impact they really have on students. Wintemute (2021) noted that synchronous learning offers greater engagement and direct communication, but it requires a strong internet connection. On the other hand, asynchronous learning has the advantages of schedule flexibility and greater accessibility, yet it is less immersive and the challenges of procrastination, socialization and distraction are present. Students in synchronous learning tend to adapt to the changes of learning with classmates in a virtual setting, while asynchronous learning introduces a new setting where students can choose when to study.

In the middle of the crisis, asynchronous learning can be more favorable than synchronous learning because most of us are struggling in this pandemic. One of the principal advantages of asynchronous online learning is that it offers more flexibility, allowing learners to set their own schedule and work at their own pace (Anthony and Thomas, 2020). In contrast, synchronous learning allows students to feel connected in a virtual world and can assure them that they are not isolated while studying lessons, because they can have live interactions and exchange ideas and other valuable inputs, helping the class understand the lessons well with the help of teachers. The main advantages of synchronous learning are that instructors can explain specific concepts when students are struggling and students can get immediate answers to their concerns in the process of learning (Hughes, 2014). In the article by Delgado (2020), the advantages and disadvantages will not matter if there is no pedagogical methodology that considers the technology and its optimization. Furthermore, the quality of learning depends on good planning and design, and on reviewing and evaluating each type of learning modality.

Synthesis

Motivating students has been a key challenge facing instructors in the context of online learning (Zhao et al., 2016). Motivation is one of the bases for students to do well in their studies. When students are motivated, the outcome is a good mark; in short, motivation is a way to push them to study more and get high grades. According to Zhao (2016), research on motivation in an online learning environment revealed that there are differences in learning motivation among students from different cultural backgrounds. Motivation is described as "the degree of people's choices and the degree of effort they will put forth" (Keller, 1983). Learning is closely linked to motivation because it is an active process that necessitates intentional and deliberate effort. Educators must build a learning atmosphere in which students are highly encouraged to participate both actively and productively in learning activities if they want to get the most out of school (Stipek, 2002). John Keller (1987), in his study, revealed that attention and motivation will not be maintained unless the learner believes the teaching and learning are relevant. According to Zhao (2016), a strong interest in a topic will lead to mastery goals and intrinsic motivation.

Engagement can be perceived in the interaction between students and teachers in online classes. Student engagement, according to Fredericks et al. (2004), is a meta-construct that includes behavioral, affective, and cognitive involvement. While there is substantial research on behavioral (i.e., time on task), emotional (i.e., interest and value), and cognitive engagement (i.e., self-regulation and learning strategies), what distinguishes engagement is its capacity as a multidimensional or "meta"-construct that encompasses all three dimensions.

Motivation plays an important role in student engagement. Saeed and Zyngier (2012) contend that in order to assess student motivation, researchers should also examine engagement in and as part of learning.

Lewis et al (2014) reveal that within the online educational environment, students can be motivated by delivering an engaging student-centered experience consistently.

In the context of the Student-Teacher Dialectical Framework embedded with Self-Determination Theory, Reeve (2012) reveals three newly discovered functions of student engagement. First, engagement bridges students' motivation to highly valued outcomes. Second, student engagement affects the future quality of the learning environment, especially the flow of instruction, the external events it involves, and the teacher's motivating style. Third, student engagement changes motivation, which means that engagement causes changes in motivation in the future. Distance learning is an online mode of education that provides opportunities for educational advancement and learning development among learners worldwide. In order to sustain the educational goals of our country, distance learning is a new way of providing quality education as far as possible in public and private institutions, especially for those pursuing higher education. The instructional delivery in distance education can be through a synchronous or asynchronous mode of learning, in which students can engage and continue to attain quality education despite the pandemic.

According to an article issued by the University of Waterloo (2020), synchronous learning is a form of learning delivered as a live presentation which allows students to ask questions, while asynchronous learning can be a recorded presentation that allows students time to reflect before asking questions. Synchronous learning is a typical meeting of students in a virtual setting with a class discussion in which everybody can participate actively. Asynchronous learning is the use of a learning platform or portal where teachers or instructors can post and update lessons or activities and students can work at their own pace. These types of class instruction are commonly observed at present, and students have their own preferences when it comes to what works best for them.

In comparing the two types of learning, it is valuable to know their advantages and disadvantages in order to see what impact they really have on students. Wintemute (2021) noted that synchronous learning offers greater engagement and direct communication, but it requires a strong internet connection. On the other hand, asynchronous learning has the advantages of schedule flexibility and greater accessibility, yet it is less immersive and the challenges of procrastination, socialization and distraction are present.

In the middle of the crisis, asynchronous learning can be more favorable than synchronous learning because most of us are struggling in this pandemic. One of the principal advantages of asynchronous online learning is that it offers more flexibility, allowing learners to set their own schedule and work at their own pace (Anthony and Thomas, 2020). In contrast, synchronous learning allows students to feel connected in a virtual world and can assure them that they are not isolated while studying lessons, because they can have live interactions and exchange ideas and other valuable inputs, helping the class understand the lessons well with the help of teachers.

2022-1-8-1641647078

‘Peak Oil’ – what are the solutions?

The ability to harness energy sources and put them towards a productive use has played a crucial role in economic development worldwide. Easily accessible oil helped to fuel continued expansion in the 20th century. Agricultural production was transformed by motorised farm equipment and petroleum-based fertilisers and pesticides. Cars, trucks and airplanes powered by oil products revolutionised the transportation of people and goods. Oil provides fuel for home heating, electricity production, and to power industrial and agricultural equipment. It also provides the source material for the construction of plastics, many fertilisers and pesticides and many industrial chemicals and materials. It is now difficult to find any product that does not require the use of oil at some point in the production process.

Oil has several advantages over other fossil fuels: it is easily transportable and energy-dense, and when refined it is suitable for a wide variety of uses. Considering the important role that oil plays in our economy, if persistent shortages were to emerge, the economic implications could be enormous. However, there is no consensus as to how seriously the threat of oil resource depletion should be taken. Some warn of a colossal societal collapse in the not-too-distant future, while others argue that technological progress will allow us to shift away from oil before resource depletion becomes an issue.

How much of a problem oil depletion poses depends on the amount of oil that remains accessible at reasonable cost, and how quickly the development of alternatives allows the demand for oil to be reduced. This is what the term 'peak oil' refers to: the point at which the demand for oil outstrips the available supply. Demand and supply each evolve over time following a pattern that is based on historical data, while supply is also constrained by resource availability. There is no mechanism for the market on its own to address concerns about climate change. However, if policies are put in place to build the costs of climate change into the price of fossil fuel consumption, then this should trigger market incentives that lead efficiently to the desired emission reductions.

A while ago the media was filled with stories about peak oil; it even featured in an episode of The Simpsons. Peak oil, in basic terms, means the point at which we have used all the easy-to-extract oil and are left only with the hard-to-reach oil, which in turn is expensive to extract and refine. There is still a huge amount of debate amongst geologists and petroleum-industry experts about how much oil is left in the ground. However, since then the idea of a near-term peak in world oil supplies has been discredited. The term that is now used is peak oil demand: the idea that, because of the proliferation of electric cars and other sources of energy, demand for oil will reach a maximum and start to decline, and indeed consumption levels in some parts of the world have already begun to stagnate.

The other theory that has been put forward is that, with supply beginning to exceed demand, there is not enough investment going into future oil exploration and development. Without this investment production will decline, but production is not declining due to supply problems; rather, we are moving into an age of oil abundance and any decline in oil production is because of other factors. There has been an explosion of popular literature recently predicting that oil production will peak soon, and that oil shortages will force us into major lifestyle changes in the near future; a good example of this is Heinberg (2003). The point at which oil production reaches a peak and begins to decline permanently has been referred to as 'peak oil'. Predictions for when this will occur range from 2007 to 2025 (Hirsch, 2005).

The Hirsch Report of 2005 concluded that it would take a modern industrial nation such as the UK or the United States at least a full decade to prepare for peak oil. Since 2005 there has been some movement towards solar and wind power together with more electric cars but nothing that deals with the scale of the problem. This has been compounded by Trump coming to power in the United States and deciding to throw the energy transition into reverse, discouraging alternative energy and expanding subsidies for fossil fuels.

What is happening now

Many factors are reported in the news to cause changes in oil prices: supply disruptions from wars and other political factors, from hurricanes or from other random events; changes in demand expectations based on economic reports, financial market events or even weather in areas where heating oil is used; changes in the value of the dollar; reports of inventory levels; and so on. These are all factors that will affect the supply of and demand for oil, but they often influence the price of oil before they have any direct impact on the current supply or demand for crude oil. Last year, the main forces pushing the oil market higher were the agreement by OPEC and its partners to lower production and the growth of global demand. This year, an array of factors is pressuring the oil markets: the US sanctions that threaten to cut Iranian oil production, and falling output from Venezuela. Moreover, there are supply disruptions in Libya, the Canadian tar sands, Norway and Nigeria that add to the uncertainties, as does erratic policymaking in Washington, complete with threats to sell off part of the US strategic reserve, and a weaker dollar. Goldman Sachs continues to expect that Brent crude prices could retest $80 a barrel this year, but probably only late in 2018: "Production disruptions and large supply shifts driven by US political decisions are the drivers of this new volatility, with demand remaining robust so far." Brent crude is expected to trade in the $70-$80 a barrel range in the immediate future.

OPEC

Saudi Arabia and Russia had started to raise production even before the 22 June 2018 meeting with OPEC that sought to address the shrinking global oil supply and rising prices. OPEC had been over-complying with the cuts agreed at the November 2016 meeting thanks to additional cuts from Saudi Arabia and Venezuela. The 22 June 2018 meeting decided to increase production to reflect the production cut agreement more closely. After the meeting, Saudi Arabia pledged a "measurable" supply boost but gave no specific numbers. Tehran's oil minister warned his Saudi Arabian counterpart that the 22 June revision to the OPEC supply pact does not give member countries the right to raise oil production above their targets. The Saudis, Russia and several of the Gulf Arab states increased production in June but seem reluctant to expand much further. During the summer months, the Saudis always need to burn more raw crude in their power stations to combat the very high temperatures of their summer.

US Shale oil production

According to the EIA's latest Drilling Productivity Report, US unconventional oil production is projected to rise by 143,000 b/d in August to 7.470 million b/d. The Permian Basin is seen as far outdistancing other shale basins in monthly growth in August, at 73,000 b/d, to 3.406 million b/d. However, drilled but uncompleted (DUC) wells in the Permian rose by 164 in June to 3,368, one of the largest builds in recent months. Total US DUCs rose by 193 to 7,943 in June. US energy companies last week cut oil rigs by the most in a week since March, as the rate of growth had slowed over the past month or so with recent declines in crude prices. Included with other optimistic forecasts for US shale oil was the caveat that the DUC production figures are sketchy, as current information is difficult for the EIA to obtain, with little specific data being provided to Washington by E&Ps or midstream operators. Given all the publicity surrounding constraints on moving oil from the Permian to market, the EIA admits that it "may overestimate production due to constraints."

The Middle East and North Africa

Iran

Iran's supreme leader, Ayatollah Ali Khamenei, called on state bodies to support the government of President Hassan Rouhani in fighting US economic sanctions. The likely return of US economic sanctions has triggered a rapid fall in Iran's currency, protests by bazaar traders usually loyal to the country's Islamist rulers, and a public outcry over alleged price gouging and profiteering. The speech to members of Rouhani's cabinet is clearly aimed at the conservative elements in the government who have been critical of the president and his policies of cooperation with the West, and is a call for unity at a time that seems likely to be one of great economic hardship. Protests spread to more than 80 Iranian cities and towns, and at least 25 people died in the unrest, the most significant expression of public discontent in recent years; the protests also took on a rare political dimension, with a growing number of people calling on supreme leader Khamenei to step down. Although there is much debate over the effectiveness of the impending US sanctions, some analysts say that Iran's oil exports could fall by as much as two-thirds by the end of the year, putting oil markets under massive strain amid supply outages elsewhere in the world. Some of the worst-case scenarios forecast a drop to only 700,000 b/d, with most of Tehran's exports going to China, and smaller shares going to India, Turkey and other buyers with waivers. China, the biggest importer of Iranian oil at 650,000 b/d according to Reuters trade flow data, is likely to ignore US sanctions.

Iraq

Iraq's future is again in trouble as protests erupt across the country. These protests began in southern Iraq after the government was accused of doing nothing to alleviate a deepening unemployment crisis, water and electricity shortages and rampant corruption. The demonstrations spread to major population centers including Najaf and Amarah, and now discontent is stirring in Baghdad. The government has been quick to promise more funding and investment in the development of chronically underdeveloped cities, but this has done little to quell public anger. Iraqis have heard these promises countless times before, and with a water and energy crisis striking in the middle of scorching summer heat, people are less inclined to believe what their government says. The civil unrest has begun to diminish in southern Iraq, leaving the country's oil sector shaken but secure, though protesters have vowed to return. Operations at several oil fields have been affected as international oil companies and service companies have temporarily withdrawn staff from some areas that saw protests. The government claims that the production and export of oil have remained steady during the protests. With Iran refusing to provide for Iraq's electricity needs, Baghdad has now also turned to Saudi Arabia to see if its southern Arab neighbor can help alleviate the crises it faces.

Saudi Arabia

The IPO has been touted for the past two years as the centerpiece of an ambitious economic reform program driven by crown prince Mohammed bin Salman to diversify the Saudi economy beyond oil. Saudi Arabia expects its crude exports to drop by roughly 100,000 b/d in August as the kingdom tries to ensure it does not push oil into the market beyond its customers’ needs.

Libya

Libya reopened its eastern oil ports and started to ramp up production from 650,000 to 700,000 b/d; output is expected to rise further after shipments resume at the eastern ports, which re-opened after a political standoff.

China

China's economy expanded by 6.7 percent, its slowest pace since 2016. The pace of annual expansion announced is still above the government's target of "about 6.5 percent" growth for the year, but the slowdown comes as Beijing's trade war with the US adds to headwinds from slowing domestic demand. Gross domestic product had grown at 6.8 percent in each of the previous three quarters. Higher oil prices play a role in the slowing of demand, but the main factor is higher taxes on independent Chinese refiners, which are already cutting into the refining margins and profits of the 'teapots', who have grown over the past three years to account for around a fifth of China's total crude imports. Under the stricter tax regulations and reporting mechanisms effective 1 March, however, the teapots can no longer avoid paying a consumption tax on refined oil product sales, as they did in the past three years, and their refining operations are becoming less profitable.

Russia

Russian oil production rose by around 100,000 b/d from May. From 1-15 July the country's average oil output was 11.215 million b/d, an increase of 245,000 b/d from May's production. Amid growing speculation that President Trump will attempt to weaken US sanctions on Russia's oil sector, US congressional leaders are pushing legislation to strengthen sanctions on Russian export pipelines and joint ventures with Russian oil and natural gas companies. Ukraine and Russia said they would hold further European Union-mediated talks on supplying Europe with Russian gas, a key first step towards renewing Ukraine's gas transit contract, which expires at the end of next year.

Venezuela

Venezuela's oil minister Manuel Quevedo has been talking about plans to raise the country's crude oil production in the second half of the year. However, no one else thinks or claims that Venezuela could soon reverse its steep production decline, which has seen it lose more than 40,000 b/d of oil production every month for several months now. According to OPEC's secondary sources in the latest Monthly Oil Market Report, Venezuela's crude oil production dropped in June by 47,500 b/d from May, to average 1.340 million b/d. Amid a collapsing regime, widespread hunger and medical shortages, President Nicolas Maduro continues to grant generous oil subsidies to Cuba. It is believed that Venezuela continues to supply Cuba with around 55,000 barrels of oil per day, costing the nation around $1.2 billion per year.

Alternatives to Oil

In its search for secure, sustainable and affordable supplies of energy, the world is turning its attention to unconventional energy resources. Shale gas is one of them. It has turned the North American gas markets upside down and is making significant strides in other regions. The emergence of shale gas as a potentially major energy source can have serious strategic implications for geopolitics and the energy industry.

Uranium and Nuclear

The nuclear industry has a relatively short history, with the first nuclear reactors dating from the 1940s. Uranium is the main source of fuel for nuclear reactors. Worldwide output of uranium has recently been on the rise after a long period of declining production. Known uranium resources have grown by 12.5% since 2008 and are sufficient for over 100 years of supply based on current requirements.

Total nuclear electricity production has been growing during the past two decades and reached an annual output of about 2,600 TWh by the mid-2000s, although three major nuclear accidents have slowed down or even reversed its growth in some countries. The nuclear share of total global electricity production reached its peak of 17% in the late 1980s, but it has since fallen, dropping to 13.5% in 2012. In absolute terms, nuclear output remains broadly at the same level as before, but its relative share in power generation has decreased, mainly due to the Fukushima nuclear accident.

Japan used to be one of the countries with a high share of nuclear power (30%) in its electricity mix and high production volumes. Today, Japan has only two of its 54 reactors in operation. The rising costs of nuclear installations and the lengthy approval times required for new construction have had an impact on the nuclear industry. The slowdown has not been global, as new countries, primarily the rapidly developing economies in the Middle East and Asia, are going ahead with their plans to establish a nuclear industry.

Hydro Power

Hydro power provides a significant amount of energy throughout the world and is present in more than 100 countries, contributing approximately 15% of global electricity production. The top five largest markets for hydro power in terms of capacity are Brazil, Canada, China, Russia and the United States of America. China significantly exceeds the others, representing 24% of global installed capacity. In several other countries, hydro power accounts for over 50% of all electricity generation, including Iceland, Nepal and Mozambique. During 2012, an estimated 27-30 GW of new hydro power and 2-3 GW of pumped storage capacity was commissioned.

In many cases, the growth in hydro power was facilitated by lavish renewable energy support policies and CO2 penalties. Over the past two decades the total global installed hydro power capacity has increased by 55%, while actual generation has increased by only 21%. Since the last survey, global installed hydro power capacity has increased by 8%, but the total electricity produced dropped by 14%, mainly due to water shortages.

Solar PV

Solar energy is the most abundant energy resource and it is available for use in its direct (solar radiation) and indirect (wind, biomass, hydro, ocean etc.) forms. About 60% of the total energy emitted by the sun reaches the Earth’s surface. Even if only 0.1% of this energy could be converted at an efficiency of 10%, it would be four times larger than the total world’s electricity generating capacity of about 5,000GW. The statistics about solar PV installations are patchy and inconsistent. The table below presents the values for 2011 but comparable values for 1993 are not available.
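
As a rough back-of-envelope check of the claim above (a sketch only: the 1.74 x 10^17 W figure for the total solar power intercepted by the Earth is an assumed textbook value, not a number given in this essay, and the result depends on reading "0.1% of this energy" as a fraction of that intercepted power):

# Back-of-envelope check of the solar potential claim (all inputs are assumptions).
intercepted_solar_w = 1.74e17   # ~solar constant (1361 W/m2) x Earth's cross-sectional area
capture_fraction = 0.001        # 0.1% of this energy captured
conversion_efficiency = 0.10    # converted at 10% efficiency
world_capacity_gw = 5_000       # world electricity generating capacity quoted in the text

usable_gw = intercepted_solar_w * capture_fraction * conversion_efficiency / 1e9
print(f"Usable solar power: {usable_gw:,.0f} GW")                           # ~17,400 GW
print(f"Multiple of world capacity: {usable_gw / world_capacity_gw:.1f}x")  # ~3.5x, roughly the 'four times' quoted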

The use of solar energy is growing strongly around the world, in part due to the rapidly declining cost of manufacturing solar panels. For instance, between 2008 and 2011 PV capacity increased in the USA from 1,168 MW to 5,171 MW, and in Germany from 5,877 MW to 25,039 MW. The anticipated changes in national and regional legislation regarding support for renewables are likely to moderate this growth.

Conclusion

The rapid consumption of fossil fuels has contributed to environmental damage. The use of these fuels, including oil, releases chemicals that contribute to smog, acid rain and mercury contamination, and carbon dioxide emissions from fossil fuel consumption are the main driver of climate change, the effects of which are likely to become more and more severe as temperatures rise. The depletion of oil and other fossil resources leaves less available to future generations and increases the likelihood of price spikes if demand outpaces supply.

One of the most intriguing conclusions from this idea is that this new "age of abundance" could alter the behaviour of oil producers. In the past some countries (notably OPEC members) restrained output, husbanding resources for the future and betting that scarcity would increase the value of their holdings over time. However, if a peak in demand looms just over the horizon, oil producers could rush to maximise their production in order to extract as much value from their reserves as they can. Saudi oil minister Sheikh Ahmed Zaki Yamani was famously quoted as saying, "the Stone Age didn't end for lack of stone, and the oil age will end long before the world runs out of oil." This quote reflects the view that the development of new technologies will lead to a shift away from oil consumption before oil resources are fully depleted. Nine of the ten recessions between 1946 and 2005 were preceded by spikes in oil prices, and the latest recession followed the same pattern.

Extending the life of oil fields, let alone investing in new ones, will require large volumes of capital, but that might be met with scepticism from wary investors when demand begins to peak. It will be difficult to attract investment to a shrinking industry, particularly if margins continue to get squeezed. Peak demand should be an alarming prospect for OPEC, Russia and the other major oil-producing countries; basically, any and all oil producers will find themselves fighting more aggressively for a shrinking market.

The precise date at which oil demand hits a high point and then enters into decline has been the subject of much debate, and a topic that has attracted a lot of interest just in the last few years. Consumption levels in some parts of the world have already begun to stagnate, and more and more automakers have begun to ratchet up their plans for electric vehicles. But fixating on the exact date the world will hit peak demand misses the whole point. The focus shouldn't be on the date at which oil demand peaks, but rather on the fact that the peak is coming. In other words, oil will be less important when it comes to fuelling the global transportation system, which will have far-reaching consequences for oil producers and consumers alike. The implications of a looming peak in oil consumption are massive. Without an economic transformation, or at least serious diversification, oil-producing nations that depend on oil revenues for both economic growth and to finance public spending face an uncertain future.

2018-9-21-1537537682

Water purification and addition of nutrients as disaster relief

1. Introduction

1.1 Natural Disasters

Natural disasters are naturally occurring events that threaten human lives and cause damage to property. Examples of natural disasters include hurricanes, tsunamis, earthquakes, volcanic eruptions, typhoons, droughts, tropical cyclones and floods (Pask et al., 2013). They are inevitable and can often have calamitous implications, such as water contamination and malnutrition, especially for developing countries like the Philippines, which is particularly prone to typhoons and earthquakes. (Figure 1)

Figure 1 The global distribution of natural disaster risk (The United Nations University World Risk Index 2014)

1.1.1 Impacts of Natural Disaster

The globe faces impacts of natural disasters on human lives and economies on an astronomical scale. According to a 2014 report by the United Nations, since 1994, 4.4 billion people have been affected by disasters, which claimed 1.3 million lives and cost US$2 trillion in economic losses. Developing countries are more likely to suffer a greater impact from natural disasters than developed countries, as natural disasters increase the number of people living below the poverty line, by more than 50 percent in some cases. Moreover, it is expected that by 2030, up to 325 million extremely poor people will live in the 49 most hazard-prone countries (Child Fund International, 2013). Hence, there is a need for disaster relief to save the lives of those affected, especially those in developing countries such as the Philippines.

1.1.2 Lack of access to clean water

After a natural disaster strikes, severe implications such as water contamination occur.

Besides, natural disasters know no national borders or socioeconomic status (Malam, 2012). For example, Hurricane Katrina, which struck New Orleans, a developed city, destroyed 1,200 water systems, and 50% of existing treatment plants needed rebuilding afterwards (Copeland, 2005). This left the citizens of New Orleans with a shortage of drinking water. Furthermore, after the 7.0 magnitude earthquake that struck Haiti, a developing country, in 2010, there was no plumbing left underneath Port-au-Prince, and many of the water tanks and toilets were destroyed (Valcárcel, 2010). These are just some of the many scenarios that can bring about water scarcity.

The lack of preparedness to prevent the destruction caused by natural disasters and the lack of readiness to respond are claimed to be the two major reasons for the catastrophic results of natural disasters (Malam, 2012). Hence, the aftermath of destroyed water systems and a lack of water affects all geographical locations regardless of socioeconomic status.

1.2 Disaster relief

Disaster relief organisations such as The American Red Cross help countries that are recovering from natural disasters by providing these countries with the basic necessities.

After a disaster, the Red Cross works with community partners to provide hot meals, snacks and water to shelters or from Red Cross emergency response vehicles in affected neighborhoods. (Disaster Relief Services | Disaster Assistance | Red Cross.)

The International Committee of the Red Cross/Red Crescent (ICRC) reported that its staff had set up mobile water treatment units. These were used to distribute water to around 28,000 people in towns along the southern and eastern coasts of the island of Samar, and to other badly-hit areas including Basey, Marabut and Guiuan. (Pardon Our Interruption. (n.d.))

Figure 2: Children seeking help after a disaster(Pardon Our Interruption. (n.d.))

Figure 3: Massive Coastal Destruction from Typhoon Haiyan (Pardon Our Interruption. (n.d.))

1.3 Target audience: Tacloban, Leyte, The Philippines

As seen in figures 4 and 5, Tacloban is the provincial capital of Leyte, a province in the Visayas region of the Philippines. It is the most populous city in the Eastern Visayas region, with a total population of 242,089 people as of August 2015 (Census of Population, 2015).

Figure 4: Location of Tacloban in the Philippines (Google Maps)

Figure 5: Location of Tacloban in the Eastern Visayas region (Google Maps)

Due to its location on the Pacific Ring of Fire (Figure 6), more than 20 typhoons (Lowe, 2016) occur in the Philippines each year.

Figure 6: The Philippines’ position on the Pacific Ring of Fire (Mindoro Resources Ltd., 2004)

In 2013, Tacloban was struck by Super Typhoon Haiyan, locally known as 'Yolanda'. The Philippine Star, a local digital news organisation, reported more than 30,000 deaths from that disaster alone (Avila, 2014). Tacloban was left in shambles after Typhoon Haiyan and requires much aid to restore the affected area, especially when the death toll runs to five figures.

1.4 Existing measures and their gaps

Initially, there was a slow response of the government to the disaster. For the first three days after the typhoon hit, there was no running water and dead bodies were found in wells. In desperation for water to drink, some even smashed pipes of the Leyte Metropolitan Water District. However, even when drinking water was restored, it was contaminated with coliform. Many people thus became ill and one baby died of diarrhoea. (Dizon, 2014)

The government's response time was long (Gap 1), and the restoration of water brought further consequences of its own, because the restored water was contaminated (Gap 2). The productivity of people was affected and hence there is an urgent need for a better solution to the problem of late restoration of clean water.

1.5 Reasons for Choice of Topic

There is high severity, since ingestion of contaminated water is the leading cause of infant mortality and illness in children (International Action, n.d.) and more than 50% of the population is undernourished (World Food Programme, 2016). Much support and humanitarian aid has been given by organisations such as the World Food Programme and The Water Project, yet more efforts are needed to lower the death rates, thus showing the persistence of the problem. It is also an urgent issue, as malnourishment often leads to death and children's lives are threatened.

Furthermore, 8 out of 10 of the world's cities most at risk from natural disasters are in the Philippines (reference to Figure _). Thus, the magnitude is huge, as there is a high frequency of natural disasters. While people are still recovering from the previous one, another hits them, thus worsening the already severe situation.

Figure _ Top 5 Countries of World Risk Index of Natural Disasters 2016 (Source: UN)

WWF CEO Jose Maria Lorenzo Tan said that "on-site desalination or purification" would be a cheaper and better solution to the lack of water than shipping in bottled water for a long period of time (Dizon, 2014). Instead of relying on external humanitarian aid, which might incur a higher amount of debt than relying on oneself for water, this approach can cushion the high expenses of rebuilding the country. Hence, there is a need for a water purification plant that provides potable water immediately when a natural disaster strikes. The plant will also have to provide cheap and affordable water until water systems are restored back to normal.

Living and growing up in Singapore, we have never experienced natural disasters first hand. We can only imagine the catastrophic destruction and suffering that accompanies natural disasters. With "Epione Solar Still" (named after the Greek goddess of the soothing of pain), we hope to be able to help many Filipinos access clean and drinkable water, especially children, who clearly do not deserve to experience such tragedy and suffering.

1.6 Case study: Disaster relief in Japan

Located at the Pacific Ring of Fire, Japan is vulnerable to natural disasters such as earthquakes, tsunami, volcanic eruptions, typhoons, floods and mudslides due to its geographical location and natural conditions. (Japan Times, 2016)

In 2011, an extremely powerful 9.0 magnitude earthquake hit Fukushima, causing a tsunami that destroyed the northeast coast and killed 19,000 people. It was the worst earthquake to hit Japan in recorded history, and it damaged the Fukushima plant and caused nuclear leakage, leading to contaminated water which currently exceeds 760,000 tonnes (The Telegraph, 2016). The earthquake and tsunami caused a nuclear power plant to fail, and radiation to leak into the ocean and escape into the atmosphere. Many evacuees have still not returned to their homes, and, as of January 2014, the Fukushima nuclear plant still poses a threat, according to status reports by the International Atomic Energy Agency (Natural Disasters & Pollution | Education – Seattle PI, n.d.).

Disaster Relief

In the case of major disasters, the Japan International Cooperation Agency (JICA) deploys Japan Disaster Relief (JDR) teams, consisting of rescue, medical, expert and infectious disease response teams, as well as the Self-Defence Force (SDF), to provide relief aid to affected countries. It provides emergency relief supplies such as blankets, tents and water purifiers, and some are also stockpiled as reserve supplies in places closer to disaster-prone areas in case disasters strike there and emergency relief is needed (JICA).

For example, during the Kumamoto earthquake in 2016, 1,600 soldiers joined the relief and rescue efforts. Troops delivered blankets and adult diapers to those in shelters. With water service cut off in some areas, residents were hauling water from local offices to their homes to flush toilets (Japan hit by 7.3-magnitude earthquake | World news | The Guardian, 2016).

Solution to Fukushima water contamination

Facilities are used to treat contaminated water. The main one is the Multi-nuclide Removal Facility (ALPS) (Figure _), which could remove most radioactive materials except Tritium. (TEPCO, n.d)

Figure _: Structure of Multi-nuclide Removal Facility (ALPS) (TEPCO, n.d)

1.7 Impacts of Case Study

The treatment of contaminated water has been very effective: by April 2015, more than 80% of the contaminated water stored in tanks had been decontaminated and more than 90% of radioactive materials had been removed during the process of decontamination (METI, 2014).

1.8 Lessons Learnt

Destruction caused by natural disasters results in a lack of access to clean and drinkable water (L1)

Advancements in water purification technology can help provide potable water for the masses. (L2)

Natural disasters weaken immune systems, making people more vulnerable to diseases (L3)

1.9 Source of inspiration

Suny Clean Water's solar still is made with cheap material alternatives, which helps to provide more affordable water for underprivileged countries.

A fibre-rich paper is coated with carbon black (a cheap powder left over after the incomplete combustion of oil or tar) and layered over each section of a block of polystyrene foam, which is cut into 25 equal sections. The foam floats on the untreated water, acting as an insulating barrier to prevent sunlight from heating up too much of the water below. The paper then wicks water upward, wetting the entire top surface of each section. A clear acrylic housing sits atop the styrofoam. (Figure _)

Figure _: How fibre-rich paper coated with carbon black is adapted into the solar still. (Sunlight-powered purifier could clean water for the impoverished | Science | AAAS, 2017)

It is estimated that the materials needed to build it cost roughly $1.60 per square meter, compared with $200 per square meter for commercially available systems that rely on expensive lenses to concentrate the sun’s rays to expedite evaporation.

1.10 Application of Lessons Learnt

Gaps in current measures | Learning points | Applications to project | Key features in proposal

Developing countries lack the technology / resources to treat their water and provide basic necessities to their people. | Advanced technology can provide potable water readily. (L2) | Need for technology to purify contaminated water. | Solar distillation plant

Even with purification of water, the problem of malnutrition, which is worsened by natural disasters, is still unsolved. | Solution to provide vitamins to young children to boost immunity and lower vulnerability to diseases and illnesses. (L3) | Need for nutrient-rich water. | Nutrients infused into water using the concept of osmosis.

Even with the help of external organisations, less than 50% of households have access to safe water. | Clean water is still inaccessible to some people. (L1) | Increase accessibility to water. | Evaporate seawater (abundant around the Philippines) in a solar still. (short-term solution)

Figure _: Table of application of lessons learnt

2. Project Aim and Objectives

2.1 Aim

Taking into account the loopholes that exist in current measures adopted to improve water purification, reduce water pollution and address malnutrition in Tacloban, Leyte, our project proposes a solution to provide Filipinos with clean water by creating an ingenious product, the Epione Solar Still. The product makes use of a natural process (the evaporation of water) and adapts and incorporates the technology and mechanism behind the kidney dialysis machine to provide Filipinos with nutrient-enriched water without polluting their environment. The product will be located near water bodies where seawater is abundant, to act as a source of clean water for the Filipinos.

2.2 Objectives of Project

To operationalise our aim, our objectives are to:

Design “Epione Solar Still”

Conduct interviews with:

Masoud Arfand, from the Department of Mechanical Engineering, Najafabad Branch, Islamic Azad University, to determine the projected percentage of water that the Epione Solar Still can produce and the number of people it can provide for.

Qiaoqiang Gan, electrical engineer from Sunny Clean Water (his team developed the technology of coating fibre-rich paper with carbon black to make the process of water purification using the solar still faster and more cost-friendly), to determine the amount of time the Epione Solar Still needs to produce sufficient water to support Filipinos in Tacloban, Leyte, as the Epione Solar Still is a short-term disaster relief solution.

Dr Nathan Feldman, Co-Founder of HopeGel, EB Performance, LLC, to determine the significant impact of nutrient-infused water in boosting the immunity of victims of natural disasters (Project Medishare, n.d.).

Review the mechanism and efficiency of using a solar still to source clean and nutrient-rich water for Filipinos.

3. Project Proposal

We propose investment in the purification of contaminated water as a form of disaster relief, which can provide Filipinos with nutrients to boost their immunity in times of disaster and limit the number of deaths that occur due to the consumption of contaminated water during a crisis.

3.1 Overview of Project

Our group proposes to build a solar distillation plant (Figure _) within a safe semi-underground bunker. The bunker will contain a generator to power certain parts of the plant. Seawater will be fed into the still via underground pipes from the sea surrounding the southern part of Tacloban. The purified water produced by the distillation process will be infused with nutrients to boost the immunity of disaster victims once consumed. Hence, not only will our distillation plant be able to produce potable water, the water will also be nutritious, so as to boost victims' immunity in times of natural calamities. Potable water will then be distributed in drums and shared among Filipinos.

Figure _: Mechanism of our solar distillation plant, Epione Solar Still

3.2 Phase 1: Water Purification System

3.2.1 Water extraction from the sea

The still is located near the sea, where seawater is abundant. Seawater is extracted from the low-flow open sea (Figure _) and then pumped into our solar still.

Figure _: Intake structure of seawater (Seven Seas Water Corporation, n.d.)

3.2.2 Purification of Seawater

Solar energy heats up the water in the solar still. The water evaporates, and condenses on the cooler glass surface of the ceiling of the still. Pure droplets of water slide down the glass and into the collecting basin, where nutrients will diffuse into the water.

Figure 6: Mechanism of Epione Solar Still
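
The evaporation and condensation mechanism described above also allows a rough estimate of how much distillate a still of a given size might produce. The sketch below is illustrative only; the insolation, still efficiency, latent-heat and basin-area values are assumptions made for the sake of the estimate, not figures from the proposal:

# Illustrative estimate of daily solar-still output (all inputs are assumptions).
daily_insolation_mj_per_m2 = 18.0   # ~5 kWh/m2/day of solar energy, typical for a tropical site
still_efficiency = 0.35             # assumed fraction of incident energy that goes into evaporation
latent_heat_mj_per_kg = 2.3         # energy needed to evaporate ~1 kg (~1 L) of water
still_area_m2 = 100.0               # hypothetical basin area

litres_per_m2 = daily_insolation_mj_per_m2 * still_efficiency / latent_heat_mj_per_kg
daily_output_litres = litres_per_m2 * still_area_m2
print(f"~{litres_per_m2:.1f} L per m2 per day, ~{daily_output_litres:.0f} L per day for {still_area_m2:.0f} m2")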

3.3 Phase 2: Nutrient Infuser

Using the concept of reverse osmosis (Figure _), a semi-permeable membrane separates the nutrients from the newly purified water, allowing the vitamins and minerals to diffuse into the condensed water. The nutrient-infused water will provide nourishment, making the victims of natural disasters less vulnerable and susceptible to illnesses and diseases thanks to a stronger immune system. This will help the Filipinos in Tacloban, Leyte quickly get back on their feet after a natural disaster and minimise the death toll as much as possible.

Figure _: How does reverse osmosis work (Water Filter System Guide, n.d.)

Nutrient / Mineral | Function | Upper Tolerable Limit (the highest amount that can be consumed without health risks)

Vitamin A | Helps to form and maintain healthy teeth, bones, soft tissue, mucous membranes and skin. | 10,000 IU/day

Vitamin B3 (Niacin) | Helps maintain healthy skin and nerves; has cholesterol-lowering effects. | 35 mg/day

Vitamin C (Ascorbic acid, an antioxidant) | Promotes healthy teeth and gums; helps the body absorb iron and maintain healthy tissue; promotes wound healing. | 2,000 mg/day

Vitamin D (Also known as the "sunshine vitamin", made by the body after being in the sun) | Helps the body absorb calcium; helps maintain proper blood levels of calcium and phosphorus. | 1,000 micrograms/day (4,000 IU)

Vitamin E (Also known as tocopherol, an antioxidant) | Plays a role in the formation of red blood cells. | 1,500 IU/day

Figure _: Table of functions and amounts of nutrients that will be diffused into our Epione water. (WebMD, LLC, 2016)

3.4 Phase 3: Distribution of water to households in Tacloban, Leyte

Potable water will be collected into drums (Figure _) of 100 litres in capacity each; each drum would suffice for 50 people for one day, since the average intake of water is 2 litres per person per day. These drums will then be distributed to the tent cities in Tacloban, Leyte, our targeted area, should a natural disaster befall it. Thus, locals will have potable water within their reach, which is extremely crucial for their survival in times of natural calamities.

Figure _: Rain barrels will be used to store the purified and nutrient-infused water (Your Easy Garden, n.d.)
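
To give a sense of scale for this distribution phase, the sketch below combines figures already quoted in the proposal (Tacloban's population of 242,089, an intake of 2 litres per person per day and 100-litre drums); the fraction of residents assumed to rely on the still is a hypothetical parameter, not a figure from the proposal:

import math

# Rough sizing of the distribution phase, using figures quoted earlier in the proposal.
population = 242_089            # Tacloban population (August 2015 census)
litres_per_person_per_day = 2   # average daily intake quoted above
drum_capacity_litres = 100      # one drum serves 50 people for one day
served_fraction = 0.25          # hypothetical share of residents relying on the still

daily_demand_litres = population * served_fraction * litres_per_person_per_day
drums_per_day = math.ceil(daily_demand_litres / drum_capacity_litres)
print(f"Daily demand: {daily_demand_litres:,.0f} L -> {drums_per_day:,} drums per day")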

3.5 Stakeholders

3.5.1 The HopeGel Project

HopeGel is a nutrient- and calorie-dense protein gel designed to aid children suffering from malnutrition caused by severe food insecurity brought on by droughts (Glenroy Inc., 2014). HopeGel has been distributed in Haiti, where malnutrition is the number one cause of death among children under five, mainly due to the high frequency of natural disasters that have caused much destruction to the now impoverished state of Haiti (Figure _). The implementation of the Epione Solar Still by this organisation helps it achieve its objective of addressing the global issue of severe acute malnutrition in children, as most victims of natural disasters lack the nourishment they need (HopeGel, n.d.).

Figure _: HopeGel, a packaged nutrient and calorie-dense protein gel (Butschli, HopeGel, n.d.)

3.5.2 Action Against Hunger (AAH)

Action Against Hunger is a relief organisation that develops and carries out programmes for countries in need in the areas of nutrition, health, water and food security (Action Against Hunger, n.d.) (Figure _). AAH also runs disaster preparedness programmes which aim to anticipate and prevent humanitarian crises (GlobalCorps, n.d.). With 40 years of expertise, helping 14.9 million people across more than 45 countries, AAH is no stranger to humanitarian crises. The implementation of the Epione Solar Still by this organisation helps it achieve its aim of saving lives by extending help to Filipinos in Tacloban, Leyte who are deprived of a basic need due to water contamination caused by natural disasters, through purifying and infusing nutrients into seawater.

Figure _: Aims and Missions of Action Against Hunger (AACH, n.d.)

2017-7-11-1499736147

Analyse the use of ICTs in a humanitarian emergency

INTRODUCTION

The intention of this essay is to analyse the use of ICTs in a humanitarian emergency. The specific case study we discuss in this essay is "Multi-level functionality of social media in the aftermath of the Great East Japan Earthquake", written by Jung, J. and Moro, M. (2014). This report emphasises how social media networks like Twitter and Facebook can be used to spread and gather important information in emergency situations rather than being used solely as social network platforms. ICTs have changed the way humans gather information during disasters, and social media, especially Twitter, became an important source of information in these disasters.

Literature Review

Case studies of using ICTs in a humanitarian emergency can take either a technically rational perspective or a socially embedded perspective. A technically rational perspective sets out what to do and how to achieve the given purpose; it is a prescription for design and action. A socially embedded perspective focuses on the particular case, where the process of work is affected by the culture, the area and human nature. In this article, we have examined different humanitarian disaster cases in which ICTs played a vital role, to see whether the authors adopt a technically rational perspective or a socially embedded perspective.

In the article "Learning from crisis: Lessons in human and information infrastructure from the World Trade Centre response" by Dawes, Cresswell et al. (2004), the authors adopt a technical/rational perspective. 9/11 was a very big incident and no one was ready to deal with an attack of this size, but as soon as it happened procedures started changing rapidly. Government, NGOs and disaster response units started learning and made new prescriptions, which can be used universally and in a disaster of any size. For example, the main communication infrastructure, supplied by Verizon, was damaged; there were different communication suppliers providing their services, but they all relied on the physical infrastructure supplied by Verizon, so VoIP was used for communication between government officials and in the EOC building. There were three main areas where problems were found and new procedures were then adopted in response to the disaster: technology, information, and the inter-layered relationships between NGOs, government and the private sector (Dawes, Cresswell et al. 2004).

In the article “Challenges in humanitarian information management and exchange: Evidence from Haiti” (Altay, Labonte 2014), the authors adopt a socially embedded perspective. The Haiti earthquake was one of the biggest disasters of its kind, killing hundreds of thousands of people and displacing at least 2 million. Around 2,000 organisations went in to help, but there was no coordination between NGOs and the government for the humanitarian response. Organisations did not consider local knowledge; they assumed that no data was available. All the organisations had different standards and ways of working, so no one followed any common prescription. The technical side of HIME (humanitarian information management and exchange) was not working because the members of the humanitarian relief effort were not sharing humanitarian information (Altay, Labonte 2014).

In the article “Information systems innovation in the humanitarian sector,” Information Technologies and International Development (Tusiime, Byrne 2011), the authors adopt a socially embedded perspective. Local staff were hired who had no prior experience or knowledge of working with such technology, which slowed down the process of implementing the new system. Staff wanted to learn and use the new system, but the changes were made at such a high pace that they became overworked and stressed, which made them lose interest in the innovation. Management decided to use COMPAS as the new system without realising that it was not completely functional and still had many issues, but they went ahead with it anyway. When staff started using it, found the problems, and were not given enough technical support, they had no choice but to go back to the old way of doing things (Tusiime, Byrne 2011). The whole process was affected by how work is done in that specific place and by people’s behaviours.

In the article “Multi-level functionality of social media in the aftermath of the Great East Japan Earthquake” (Jung, Moro 2014), the authors adopt a technically rational perspective. In any future humanitarian disaster, social media can be used as an effective communication method in conjunction with mass media. After the disaster, Twitter was used more as a means of spreading and gathering information than as a social media platform.

In the article “Information flow impediments in disaster relief supply chains,” Journal of the Association for Information Systems, 10(8), pp. 637-660 (Day, Junglas et al. 2009), the authors propose the development of an IS for information sharing, based on Hurricane Katrina. They adopt a technically rational perspective because the development of an IS for information flow within and outside the organisation is considered essential. Such an IS would help to manage a complex supply chain. Supply chain management in a disaster situation is challenging compared to traditional supply chain management, and a supply chain management IS should be able to cater for all types of dynamic information, suggest Day, Junglas and Silva (2009).

Case Study Description:

On 11 March 2011, an earthquake of magnitude 9.0 struck the north-eastern part of Japan and was followed by a tsunami. Thousands of people lost their lives and the infrastructure in the area was completely damaged (Jung, Moro 2014). The tsunami wiped two towns off the map and the coastal maps had to be redrawn (Acar, Muraki 2011). On the same day, the cooling system of nuclear reactor No. 1 at Fukushima failed, and because of that nuclear accident the Japanese government declared a nuclear emergency. On the evening of the earthquake, the government issued an evacuation order for a 3 km area around the reactor (Jung, Moro 2014). On March 12 a hydrogen explosion occurred in the reactor because of the failed cooling system, followed by another explosion two days later on March 14. The evacuation area was initially 3 km but was increased to 20 km to avoid exposure to nuclear radiation.

This was one of the biggest nuclear disasters the country had faced, so it was hard for the government to assess its scale. Government officials had not come across this kind of situation before and could not estimate the damage caused by the incident, and they added to public confusion with unreliable information: they declared the accident level as 5 on the international nuclear scale but later changed it to 7, the highest level. Media reporting also confused the public, and the combination of contradictory information from government and media increased the level of confusion.

In a disaster, mass media is normally the main source of information; broadcasters usually suspend their normal schedules and devote most of their airtime to the disaster so they can keep people updated about the situation. Normally mass media provides very reliable information in a humanitarian disaster, but in the case of the Japan disaster the media were contradicting each other: international media contradicted local media as well as local government, so people started losing faith in the mass media and relying on other sources of information. A second reason was that mass media was a traditional way of gathering information, and because of changes in technology people had started using mobile phones and the internet. A third reason people looked elsewhere was that the broadcast infrastructure was damaged and many people could not access television services, so they turned to video streaming sites such as Ustream and YouTube. People began using Twitter on a large scale to spread and gather news: the number of Twitter users increased by 30 per cent within the first week of the disaster, and 60 per cent of Twitter users thought it was useful for gathering or spreading information.

Case Study Analysis:

Twitter is a social media platform and microblogging website on which each tweet is limited to 140 characters. It differs from other social media platforms in that anyone can follow you without your authorisation. Only registered members can tweet, but registration is not required to read messages. The authors of “Multi-level functionality of social media in the aftermath of the Great East Japan Earthquake” (Jung, Moro 2014) discuss five functionalities of Twitter with the help of a conceptual model of multi-level social media. The following figure describes the five primary functions of the model.

Figure 1. Source: (Jung, Moro 2014)

The five functionalities were derived from a survey and a review of selected Twitter timelines.

The first function is tweeting between individuals, also known as interpersonal communication. This is the micro level of the conceptual model: at this level, people inside and outside the country connected with people in the affected area. Most of these tweets were for checking that people were safe after the disaster, informing loved ones that you were in the affected area and needed help, or letting people know you were safe. In the first three days, a high percentage of tweets came from this micro-level communication channel.

The second function is a communication channel for local organisations, local government and local media; this is the meso level of the conceptual model. In this channel, local governments opened new accounts and reactivated accounts that had not been used for a while in order to keep their residents informed, and the number of followers of these accounts grew very quickly. People understood the importance and benefits of social media after the disaster: even when the infrastructure was damaged and there were electricity cuts, they were still able to get information about the disaster and tsunami warnings. Local government and local media used Twitter accounts to issue alerts and news; for example, the tsunami alert was issued on Twitter, and after the tsunami the reports of damage were released on Twitter. Local media opened new Twitter channels and kept people informed about the situation. Other organisations, such as the embassies of different countries, used Twitter to keep their nationals informed about the disaster, and this was an effective way for embassies and their nationals to communicate. Nationals could even let their embassy know that they were stuck in the affected area and needed help, since they could be in a very vulnerable situation being outside their own country.

The third function is communication by the mass media, known as the macro level. The mass media used social platforms to broadcast their news because the infrastructure was damaged and people in the affected area could not access their broadcasts. Some people who were outside the country could not access local mass media news on television, so they watched the news on video streaming websites; as demand increased, most mass media organisations opened social media accounts to meet it. They started broadcasting their news on video streaming websites such as YouTube and Ustream. Mass media also gave news updates several times a day on Twitter, and many people who read them retweeted them, so information spread at very high speed.

The fourth function is information sharing and gathering, which operates across levels. Individuals used social media to get information about the earthquake, tsunami and nuclear accident. When someone tried to find information, they came across tweets from the micro, meso and macro levels. This level is of great use when you are looking for help and want to know what other people would do in your situation. The research done on the Twitter timelines shows that on the day of the earthquake people were tweeting about the shelters available and information about transport (Jung, Moro 2014).

The fifth function is direct channels between individuals and the mass media, government and the public; this is also considered a cross-level function. At this level, individuals could inform the government and mass media about the situation in affected areas, because after the disaster there were some places the government and mass media could not reach and whose situation they therefore did not know. The mayor of Minami-soma, a city 25 miles from Fukushima, used YouTube to tell the government about the threat of radiation to his city; the video went viral and the Japanese government came under international pressure to evacuate the city (Jung, Moro 2014).

Reflection:

There was a gradual change in the use of social media towards a communication tool, rather than merely a social platform, in the event of a disaster. Multi-level functionality is one of the important characteristics that connects it well with existing media. This is a complete prescription that can be used during and after any kind of disaster: social media can be used alongside other media as an effective communication method to prepare for emergencies in any future disaster situation.

Twitter played a big role in communication during the disaster in Japan. It was used to spread and gather information about the earthquake, tsunami and nuclear reactor accident, to request help, to issue warnings, and to express condolences. Twitter has many benefits, but it also has drawbacks that need to be rectified. The biggest issue with tweets is unreliability: anyone can tweet any information, there are no checks and balances on it, and only the person who posts the tweet is responsible for its accuracy. There is no control over false information, and it spreads so fast that contradictory information can create anxiety. For example, if false information about the range of radiation had been released by one individual and retweeted by others with no knowledge of the effects of radiation and nuclear accidents, it could have caused panic. In a disaster, it is very important that reliable and correct information is released.

Information systems can play a vital role in humanitarian disasters in all respects. They can be used to improve communication and to increase the efficiency and accountability of an organisation. Data becomes widely available within the organisation, so finances can be monitored, and different operations such as transport, supply chain management, logistics, finance and monitoring can be coordinated.

Social media has played a significant role in communicating, disseminating and storing data related to disasters. There is a need to control the information being spread over social media, since not all of it is authentic or verified.

IS-based tools need to be developed for disaster management in order to get the best results from the varied range of data extracted from social media and to take the necessary action for the wellbeing of people in the disaster area.

The outcome of using a purpose-built IS will support decision making and the development of strategies to deal with the situation. Disaster management teams will be able to analyse the data in order to train for disaster situations.

2017-1-12-1484253744

Renewable energy in the UK: essay help

The 2014 IPCC report stated that anthropogenic emissions of greenhouse gases have led to unprecedented levels of carbon dioxide, methane and nitrous oxide in the environment. The report also stated that the effect of greenhouse gases is extremely likely to have caused the global warming we have witnessed since the 20th century.

The 2018 IPCC report set new targets, aiming to limit climate change to a maximum of 1.5°C. To reach this, we will need zero CO₂ emissions by the year 2050. Previous IPCC targets of 2°C change allowed us until roughly 2070 to reach zero emissions. This means government policies will have to be reassessed and current progress reviewed in order to confirm whether or not the UK is capable of reaching zero emissions by 2050 on our current plan.

Electricity Generation

Fossil fuels are natural fuels formed from the remains of prehistoric plant and animal life. Fossil fuels (coal, oil and gas) are central to any discussion of climate change, as when burned they release both carbon dioxide (a greenhouse gas) and energy. Hence, in order to reach the IPCC targets the UK needs to drastically reduce its use of fossil fuels, either by improving efficiency or by using other methods of energy generation.

Whilst coal is a cheap energy source used to generate approximately 40% of the world’s electricity, it is arguably the most damaging to the environment, as coal releases more carbon dioxide into the atmosphere relative to the energy produced than any other fuel source. Coal power stations generate electricity by burning coal in a combustion chamber and using the heat energy to turn water into steam, which turns the propeller-like blades of the turbine. A generator (consisting of tightly wound metal coils) is mounted at one end of the turbine and, when rotated at high velocity through a magnetic field, generates electricity. However, the UK has pledged to fully phase out the use of coal in electricity generation by 2025. These claims are well substantiated by the UK’s rapid decline in coal use: in 2015 coal accounted for 22% of electricity generated in the UK, this was down to only 2% by the second quarter of 2017, and in April 2018 the UK even managed to go 72 hours without coal power.

Natural gas became a staple of British electricity generation in the 1990s, after the Conservative government privatised the electricity supply industry. The “dash for gas” was triggered by legal changes within the UK and EU allowing greater freedom to use gas in electricity generation.

Whilst natural gas emits less CO₂ than coal, it emits far more methane. Methane doesn’t remain in the atmosphere as long but it traps heat to a far greater extent. According to the World Energy Council methane emissions trap 25 times more heat than CO₂ over a 100 year timeframe.

Natural gas produces electrical energy in a gas turbine. The gas is mixed with hot air and burned in a combustor. The hot gas then pushes the turbine blades and, as in a coal plant, the turbine is attached to a generator, creating electricity. Gas turbines are hugely popular as they are a cheap source of energy generation and can quickly be powered up to respond to surges in electrical demand.

Combined Cycle Gas Turbines (CCGT) are an even better source of electrical generation. Whilst traditional gas turbines are cheap and fast-reacting, they only have an efficiency of approximately 30%. Combined cycle turbines, however, are gas turbines used in combination with steam turbines giving an efficiency of between 50 and 60%. The hot exhaust from the gas turbine is used to create steam which rotates turbine blades and a generator in a steam turbine. This allows for greater thermal efficiency.
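To make the efficiency gain concrete, here is a minimal sketch in Python of how a combined cycle raises overall thermal efficiency: the steam cycle recovers work from heat the gas turbine would otherwise exhaust. The 30% gas-turbine figure comes from the paragraph above; the 35% steam-cycle figure is an illustrative assumption, and the formula neglects stack and other losses.

```python
# Minimal sketch of combined-cycle efficiency (illustrative values, not measured data).
# A combined cycle converts fuel heat in the gas turbine, then recovers part of the
# rejected heat in a steam cycle: eta_cc = eta_gas + (1 - eta_gas) * eta_steam,
# neglecting stack and other losses.

def combined_cycle_efficiency(eta_gas: float, eta_steam: float) -> float:
    """Overall efficiency when the steam cycle runs on the gas turbine's exhaust heat."""
    return eta_gas + (1.0 - eta_gas) * eta_steam

if __name__ == "__main__":
    eta_gas = 0.30    # simple gas turbine, roughly the 30% quoted above
    eta_steam = 0.35  # assumed efficiency of the bottoming steam cycle
    print(f"Combined-cycle efficiency: {combined_cycle_efficiency(eta_gas, eta_steam):.0%}")
    # Prints about 55%, consistent with the 50-60% range cited for CCGT plants.
```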

Nuclear energy is a potential way forward as no CO₂ is emitted by nuclear power plants. Nuclear plants aim to capture the energy released by atoms undergoing nuclear fission. In nuclear fission, nuclei absorb neutrons as they collide, making the nucleus unstable. The unstable nucleus then splits into fission products of smaller mass and emits two or three high-speed neutrons, which can collide with more nuclei, making them unstable and thus creating a chain reaction. The heat energy produced by splitting the atom is used to produce steam, which drives a turbine generator to produce electricity.

Currently, 21% of electricity generated in the UK comes from nuclear energy. In the 1990s, 25% of electricity came from nuclear energy, but old plants have gradually been retired. By 2025, UK nuclear capacity could halve. This is due to a multitude of reasons. Firstly, nuclear fuel is expensive in comparison to gas and coal. Secondly, nuclear waste is extremely radioactive and so must be dealt with properly. Also, in light of tragedies such as Chernobyl and Fukushima, much of the British public has expressed concerns about nuclear energy, with the Scottish government refusing to open more plants.

In order to lower our CO₂ emissions it is crucial we also utilise renewable energy. The UK currently gets very little of its energy from renewable sources but almost all future plans place a huge emphasis on renewables.

The UK has great wind energy potential, as it is the windiest country in the EU, receiving 40% of the total wind that blows across the EU.

Wind turbines are straightforward machinery: the wind turns the turbine blades around a rotor, which is connected to the main shaft, which spins a generator, creating electricity. In 2017, onshore wind generated enough energy to power 7.25 million homes a year and produced 9% of the UK’s electricity. However, despite the clear benefits of clean, renewable energy, wind energy is not without its problems. Firstly, it is an intermittent supply: the turbine will not generate energy when there is no wind. It has also been opposed by members of the public for affecting the look of the countryside and for bird fatalities. These problems are magnified by the current Conservative government’s stance on wind energy, as it wishes to limit onshore wind farm development despite public opposition to this “ban”.

Heating and Transport

Currently it is estimated that a third of carbon dioxide (CO2) emissions in the UK come from the heating sector. 50% of all heat emissions in the UK are for domestic use, making it the main source of CO2 emissions within the heating sector. Around 98% of domestic heating is used for space and water heating. The government has sought to reduce the emissions from domestic heating by issuing a series of regulations on new boilers. Regulations state that, as of 1st April 2005, all new installations and replacements of boilers are required to be condensing boilers. As well as producing much lower CO2 emissions, condensing boilers are around 15-30% more efficient than older gas boilers. Reducing heat demand has also been an approach taken to reduce emissions. For instance, building standards in the UK have set higher levels of required thermal insulation for both domestic and non-domestic buildings when refurbishing and carrying out new projects. These policies are key to ensuring that both homes and industrial buildings are as efficient as possible at conserving heat.

Although progress is being made in improving current CO2-reducing systems, the potential for significant CO2 reductions relies upon low carbon technologies. Highly efficient technologies such as residential heat pumps and biomass boilers have the potential to be carbon-neutral sources of heat and in doing so could massively reduce CO2 emissions for domestic use. However, finding the best route to a decarbonised future in the heating industry relies upon more than just which technology has the lowest carbon footprint. For instance, intermittent technologies such as solar thermal collectors cannot provide a sufficient level of heat in the winter and require a back-up source of heat, making them a less desirable option. Cost is also a major factor in consumer preference. For most consumers, a boiler is the cheapest option for heating. This poses a problem for low carbon technologies, which tend to have significantly higher upfront costs. In response to the cost associated with these technologies, the government has introduced policies such as the ‘Renewable Heat Incentive’, which aims to alleviate the expense by paying consumers for each unit of heat produced by low carbon technologies. Around 30% of the heating sector is allocated to industrial use, making it the second largest cause of CO2 in this sector. Currently, combined heat and power (CHP) is the main process used to make industrial heat use more efficient and has shown CO2 reductions of up to 30%. Although this is a substantial reduction, alternative technology has the potential to deliver even higher reductions. For example, carbon capture and storage (CCS) has the potential to reduce CO2 emissions by up to 90%. However, CCS is a complex procedure which would require a substantial amount of funding and as a result is not currently implemented for industrial use in the UK.

Although heating is a significant contributor to CO2 emissions in the UK, much progress is also needed elsewhere. In 2017 it was estimated that 34% of all carbon dioxide (CO2) emissions in the UK were caused by transport, which is widely thought to be the sector in which the least progress is being made, having seen only a 2% reduction in CO2 emissions since 1990. Road transport contributes the highest proportion of emissions, more specifically petrol and diesel cars. Despite the average CO2 emissions of new vehicles declining, the carbon footprint of the transport industry continues to increase due to the larger number of vehicles in the UK.

In terms of progress, CO2 emissions of new cars in 2017 were estimated to be 33.1% lower than in the early 2000s. Although efficiencies are improving, more must be done if we are to meet the targets set by the Climate Change Act 2008. A combination of decarbonising transport and implementing government legislation is vital if these demands are to be met. New technology such as battery electric vehicles (BEVs) has the potential to create significant reductions in the transport industry. As a result, a report from the Committee on Climate Change suggests that 60% of all sales of new cars and vans should be ultra-low emission by 2030. However, the likelihood of achieving this is hindered by the constraints of new technologies. For instance, low emission vehicles are likely to have significantly higher costs and suffer from a lack of consumer awareness. This reinforces the need for government support in promoting new technologies and cleaner fuels. To support the development and uptake of low carbon vehicles, the government has committed £32 million to the funding of charging infrastructure for BEVs from 2015-2020, and a further £140 million has been allocated to the ‘low carbon vehicle innovation platform’, which strives to advance the development and research of low emission vehicles. Progress has also been made in making these vehicles more cost-competitive by exempting them from taxes such as Vehicle Excise Duty and providing incentives such as plug-in grants of up to £3,500. Aside from passenger cars, improvements are also being made to the emissions of public transport. The average low emission bus in London could reduce its CO2 emissions by up to 26 tonnes per year, which has secured the government’s support in England through the ‘Green Bus Fund’.

Conclusion

In 2017, renewables accounted for a record 29.3% of the UK’s electricity generation. This is a vast improvement on previous years and suggests the UK is on track to meet the new IPCC targets, although a lot of work still needs to be done. Government policies do, however, need to be reassessed in light of the new targets. Scotland should reassess its nuclear policy, as this might be a necessary stepping stone towards reduced emissions until renewables are able to fully power the nation, and the UK government needs to reassess its allocation of funding, as investment in clean energy is currently on a downward trajectory.

Although progress has been made to reduce CO2 emissions in the heat and transport sectors, emissions throughout the UK remain much higher than desired. The Committee on Climate Change’s report to Parliament (2015) calls for the widespread electrification of heating and transport by 2030 to help prevent a 1.5 degree rise in global temperature. This is likely to pose a major challenge and will require a significant increase in electricity generation capacity in conjunction with greater policy intervention to encourage the uptake of low carbon technologies. Although the likelihood of all consumers switching to alternative technologies is slim, if the government continues to tighten regulations surrounding fossil-fuelled technologies whilst the heat and transport industries continue to develop old and new systems to become more efficient, this should deliver significant CO2 reductions in the future.

2018-11-19-1542623986

Is Nuclear Power a viable source of energy?: college application essay help

6th Form Economics project:

Nuclear power, the energy of the future of the 1950s, is now starting to feel like the past. Around 450 nuclear reactors worldwide currently generate 11% of the world’s electricity, or approximately 2500 TWh a year, just under the total nuclear power generated globally in 2001 and only 500 TWh more than in 1991. The number of operating reactors worldwide has seen the same stagnation, with an increase of only 31 since 1989, or an annual growth of only 0.23% compared with 12.9% from 1959 to 1989. Most reactors, especially in Europe and North America, were built before the 90s, and the average age of reactors worldwide is just over 28 years. Large-scale nuclear accidents such as Chernobyl in 1986 or, much more recently, Fukushima in 2011 have negatively impacted public support for nuclear power and helped cause this decline, but the weight of evidence has increasingly suggested that nuclear is safer than most other energy sources and has an incredibly low carbon footprint, causing the argument against nuclear to shift from concerns about safety and the environment to questions about the economic viability of nuclear power. The crucial question that remains is therefore how well nuclear power can compete against renewables to produce the low carbon energy required to tackle global warming.
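As a quick sanity check on the growth figures quoted above, the annual rate can be recovered as a compound growth rate from the reactor counts. The sketch below uses only numbers from the paragraph (about 450 reactors now, an increase of 31 since 1989); the 30-year span is an approximation.

```python
# Compound annual growth rate of the world reactor fleet, using the figures quoted above.
# The counts and the 30-year span are approximate; this only sanity-checks the ~0.23% claim.

def annual_growth(start_count: float, end_count: float, years: int) -> float:
    """Compound annual growth rate between two fleet sizes."""
    return (end_count / start_count) ** (1.0 / years) - 1.0

if __name__ == "__main__":
    reactors_now = 450            # "around 450 nuclear reactors worldwide"
    reactors_1989 = 450 - 31      # "an increase of only 31 since 1989"
    rate = annual_growth(reactors_1989, reactors_now, years=30)
    print(f"1989 onwards growth: {rate:.2%} per year")  # roughly 0.2-0.25% per year
```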

The costs of most renewable energy sources have been falling rapidly and are increasingly able to outcompete nuclear power as a low carbon option, and even fossil fuels in some places; photovoltaic panels, for example, halved in price from 2008 to 2014. Worse still for nuclear power, it seems that while the costs of renewable energy have been falling, plans for new nuclear plants have been plagued by delays and additional costs: in the UK, Hinkley Point C power station is set to cost £20.3bn, making it the world’s most expensive power station, and significant issues in the design have raised questions as to whether the plant will be completed by 2025, its current goal. In France, the Flamanville 3 reactor is now predicted to cost three times its original budget, and several delays have pushed the start-up date, originally set for 2012, to 2020. The story is the same in the US, where delays and extra costs have plagued the construction of the Vogtle 3 and 4 reactors, which are now due to be complete by 2020-21, four years over their original target. Nuclear power seemingly cannot deliver the cheap, carbon-free energy it promised and is being outperformed by renewable energy sources such as solar and wind.

The crucial and recurring issue with nuclear power is that it requires huge upfront costs, especially when plants are built individually, and can only provide revenue years after the start of construction. This means that investment in nuclear is risky, long term and cannot be done well on a small scale, though new technologies such as SMRs (Small Modular Reactors) may change this in the coming decades; this makes it a much bigger gamble. Improvements in other technologies over the period of time a nuclear plant is built mean that it is often better for private firms, who are less likely to be able to afford large-scale programmes enabling significant cost reductions or a lower debt-to-equity ratio in their capital structure, to invest in more easily scalable and shorter-term energy sources, especially with subsidies favouring renewables in many developed countries. All of this points to the fundamental flaw of nuclear: that it requires going all the way. Small-scale nuclear programmes that are funded mostly with debt, that have high discount rates and low capacity factors as they are switched off frequently, will invariably have a very high Levelised Cost of Energy (LCOE), as nuclear is so capital intensive.

That said, the reverse is true as well. Nuclear plants have very low operating costs, almost no external costs, and the cost of decommissioning a plant is only a small portion of the initial capital cost, even with a low discount rate such as 3%, due to the long lifespan of a nuclear plant and the fact that many can be extended. Operating costs include fuel costs, which are extremely low for nuclear at only 0.0049 USD per kWh, and non-fuel operation and maintenance costs, which are barely higher at 0.0137 USD per kWh. This includes waste disposal, a frequently cited political issue that has not been technically relevant for decades, as waste can be reused relatively well and stored on site safely at very low cost, simply because the quantity of fuel used, and therefore of waste produced, is so small. The fuel, uranium, is abundant, and technology enabling uranium to be extracted from sea water would give access to a 60,000-year supply at present rates of consumption, so costs from ‘resource depletion’ are also small. Finally, external costs represent a very small proportion of running costs: the highest estimates for health costs and potential accidents are 5€/MWh and 4€/MWh respectively, though some estimates fall to only 0.3€/MWh for potential accidents when past records are adjusted to factor in improvements in safety standards; these estimates vary significantly because the total number of reactors is very small.

Nuclear power therefore remains one of the cheapest ways to produce electricity in the right circumstances, and many LCOE (Levelised Cost of Energy) estimates, which are designed to factor in all costs over the lifetime of a unit to give a more accurate representation of the costs of different types of energy (though they usually omit system costs), point to nuclear as a cheaper energy source than almost all renewables and most fossil fuels at low discount rates.
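To illustrate why the discount rate matters so much for a capital-intensive plant, here is a minimal LCOE sketch in Python. The formula (discounted lifetime costs divided by discounted lifetime generation) is the standard one, but the plant figures used are illustrative assumptions, not values from the studies cited in this essay, and system costs are ignored.

```python
# Minimal LCOE sketch: LCOE = sum_t(costs_t / (1+r)^t) / sum_t(energy_t / (1+r)^t).
# All plant parameters below are illustrative assumptions, not figures from the cited studies.

def lcoe(capex, annual_om_fuel, annual_mwh, lifetime_years, build_years, discount_rate):
    """Levelised cost of energy in $/MWh for a plant with upfront construction costs."""
    costs, energy = 0.0, 0.0
    for t in range(build_years + lifetime_years):
        df = (1.0 + discount_rate) ** t
        if t < build_years:
            costs += (capex / build_years) / df   # capital spread over construction years
        else:
            costs += annual_om_fuel / df          # running costs once operating
            energy += annual_mwh / df             # discounted generation
    return costs / energy

if __name__ == "__main__":
    # Assumed nuclear-like plant: high capex, low running cost, long life.
    for r in (0.03, 0.07, 0.10):
        value = lcoe(capex=6e9, annual_om_fuel=1.5e8, annual_mwh=8e6,
                     lifetime_years=50, build_years=7, discount_rate=r)
        print(f"discount rate {r:.0%}: LCOE ≈ ${value:,.0f}/MWh")
    # The LCOE rises steeply with the discount rate, showing how capital-intensive
    # plants are penalised by high financing costs.
```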

LCOE costs taken from ‘Projected Costs of Generating Electricity 2015 Edition’ and system costs taken from ‘Nuclear Energy and Renewables (NEA, 2012)’ have been combined by the World Nuclear Association to give LCOE figures for four countries, comparing the costs of nuclear to other energy sources. A discount rate of 7% is used, the study applies a $30/t CO2 price on fossil fuel use, and 2013 US$ values and exchange rates are used. It is important to bear in mind that LCOE estimates vary widely, as they assume different circumstances and are very difficult to calculate, but it is clear from the graph that nuclear power is still more than viable, being the cheapest source in three of the four countries and the third cheapest in the fourth, behind onshore wind and gas.

2019-5-13-1557759917

Decision making during the Fukushima disaster

Introduction

On March 11, 2011 a tsunami struck the east coast of Japan, which resulted in a disaster at the Fukushima Daiichi nuclear power plant. In the days following the natural disaster, many decisions were made with regard to managing the crisis. This paper will examine the decisions made during the crisis. The Governmental Politics Model, designed by Allison and Zelikow (1999), will be adopted to analyse the events. The research question of this paper is therefore: to what extent does the Governmental Politics Model explain the decisions made during the Fukushima disaster?

First, this paper will lay the theoretical basis for the analysis. The Governmental Politics Model and all crucial concepts within it are discussed. Then a description of the Fukushima case will follow. Since the reader is expected to already have general knowledge of the Fukushima nuclear disaster, the case description will be very brief. With the theoretical framework and case study, a basis for the analysis is laid. The analysis will look into the decisions government and Tokyo Electric Power Company (TEPCO) officials made during the crisis.

Theory

Allison and Zelikow designed three theories to understand the outcomes of bureaucracies and decision making in the aftermath of the Cuban Missile Crisis in 1962. The first theory to be designed was the Rational Actor Model. This model focusses on the ‘logic of consequences’ and has a basic assumption of rational actions of a unitary actor. The second theory designed by Allison and Zelikow is the Organizational Behavioural Model. This model focusses on the ‘logic of appropriateness’ and has a main assumption of loosely connected allied organizations (Broekema, 2019).

The third model put forward by Allison and Zelikow is the Governmental Politics Model (GPM). This model stresses the importance of power in decision-making. According to the GPM, decision making has little to do with rational, unitary actors or organizational output and everything to do with a bargaining game. This means that governments make decisions in other ways; according to the GPM there are four aspects to this: the choices of one, the results of minor games, the results of central games, and foul-ups (Allison & Zelikow, 1999).

The following concepts are essential in the GPM. First, it is important to note that power in government is shared: different institutions have independent bases and, therefore, power is shared. Second, persuasion is an important factor in the GPM; the power to persuade differentiates power from authority. Third, bargaining according to the process is identified, meaning there is a structure to the bargaining processes. Fourth, power equals impact on outcome, as mentioned in Essence of Decision. This means that there is a difference between what can be done and what is actually done, and what is actually done has to do with the power involved in the process. Lastly, intranational and international relations are of great importance to the GPM. These relations are intertwined and involve a vast set of international and domestic actors (Allison & Zelikow, 1999).

Not only the five previous concepts are relevant to the GPM. The GPM is inherently based on group decisions, and in this type of decision making Allison and Zelikow identify seven factors. The first factor is a positive one: group decisions, when certain requirements are met, produce better decisions. Secondly, the agency problem is identified; this problem includes information asymmetry and the fact that actors are competing over different goals. Third, it is important to identify the actors in the ‘game’, which means finding out who participates in the bargaining process. Fourth, problems with different types of decisions are outlined. Fifth, framing issues and agenda setting are important factors in the GPM. Sixth, group decisions are not necessarily positive; they can easily lead to groupthink, a negative consequence in which no other opinions are considered. Last, the difficulties of collective action are outlined by Allison and Zelikow; this has to do with the fact that the GPM does not consider unitary actors but different organizations (Allison & Zelikow, 1999).

Besides the concepts mentioned above, the GPM consists of a concise paradigm, which is essential for the analysis of the Fukushima case. The paradigm consists of six main points. The first is that decisions are the result of politics; this is the core of the GPM and once again stresses that decisions are the result of bargaining. Second, as said before, it is important to identify the players of the political ‘game’, their preferences and goals, and the kind of impact they can have on the final decision. Once this is analysed, one has to look at the actual game that is played: the action channels and the rules of the game can be determined. Third, the ‘dominant inference pattern’ once again goes back to the fact that decisions are the result of bargaining, but this point makes clear that differences and misunderstandings have to be taken into account. Fourth, Allison and Zelikow identify ‘general propositions’; this term includes all the concepts examined in the second paragraph of the theory section of this paper. Fifth, specific propositions are considered; these relate to decisions on the use of force and military action. Last is the importance of evidence: when examining crisis decision making, documented timelines and, for example, minutes or other accounts are of great importance (Allison & Zelikow, 1999).

Case

In the definition of Prins and Van den Berg (2018), the Fukushima Daiichi disaster can be regarded as a safety case, because it was an unintentional event that caused harm to humans.

The crisis was initiated by an earthquake of magnitude 9.0 on the Richter scale, which was followed by a tsunami whose waves reached a height of 10 meters. Due to the earthquake, all external power lines, which are needed for cooling the fuel rods, were disconnected. Countermeasures for this issue were in place; however, the sea walls were unable to protect the nuclear plant from flooding, which rendered the back-up diesel generators inadequate (Kushida, 2016).

Due to the lack of electricity, the nuclear fuel rods were not cooled, and therefore a ‘race for electricity’ started. Eventually the essential decision to inject sea water was made. Moreover, the situation inside the reactors was unknown; meltdowns in reactors 1 and 2 had already occurred. Because of explosion risks, the decision to vent the reactors was made. However, hydrogen explosions materialized in reactors 1, 2 and 4, which in turn led to the release of radiation into the environment. To counter the spread of radiation, the decision to inject sea water into the reactors was made (Kushida, 2016).

Analysis

This analysis will look into the decision, or decisions, to inject seawater into the damaged reactors. First, a timeline of the decisions will be outlined to build on the case study above. Then the events and decisions made will be set against the GPM paradigm, with its six main points as described in the theory section.

The need to inject sea water arose after the first stages described in the case study had passed. According to Kushida, government officials and political leaders began voicing the necessity of injecting the water at 6:00 p.m. on March 12, the day after the earthquake. According to these officials it would have one very positive outcome, namely the cooling of the reactors and the fuel pool. However, the use of sea water might have negative consequences too: it would ruin the reactors because of the salt in the sea water, and it would produce vast amounts of contaminated water which would be hard to contain (Kushida, 2016). TEPCO experienced many difficulties in cooling the reactors, as described in the case study, because of the lack of electricity. However, they were averse to injecting sea water into the reactors since this would ruin them. Still, after the first hydrogen explosion occurred in reactor 1, TEPCO plant workers started the injection of sea water into this specific reactor (Holt et al., 2012). A day later, on March 13, sea water injection started in reactor 3, and on the 14th of March it started in reactor 2 (Holt et al., 2012).

When looking at the decisions made by the government or TEPCO plant workers, it is crucial to consider the chain of decision making by TEPCO leadership too. TEPCO leadership was at first not very positive towards injecting seawater because of the aforementioned disadvantages: the plant would become unusable in the future, and vast amounts of contaminated water would be created. Therefore, the government had to issue an order to TEPCO to start injecting seawater, which it did at 8:00 p.m. on March 12. However, Yoshida, the Fukushima Daiichi plant manager, had already started injecting seawater at 7:00 p.m. (Kushida, 2016).

As one can already see, different interests were at play and the eventual decision can well be seen as a political resultant. Therefore, it is crucial to examine the chain of decisions through the GPM paradigm. The first factor of this paradigm concerns decisions as a result of bargaining, which can clearly be seen in the decision to inject seawater: TEPCO leadership was initially not a proponent of this method, but after government officials ordered them to carry out the injection they had no choice. Second, according to the theory, it is important to identify the players of the ‘game’ and their goals. In this instance three different players can be pointed out: the government, TEPCO leadership, and Yoshida, the plant manager. The government’s goal was to keep citizens safe during the crisis, TEPCO wanted to preserve the reactors for as long as possible, whereas Yoshida wanted to contain the crisis. This shows there were conflicting goals.

To further apply the GPM to the decision to inject seawater, one can review the comprehensive ‘general propositions’. Here miscommunication is a very relevant factor, and it was certainly a big issue in the decision to inject seawater. As said before, Yoshida had already started injecting seawater before he received approval from his superiors. One might even wonder whether there was a misunderstanding of the crisis by TEPCO leadership, given that they hesitated to inject the seawater necessary to cool the reactors. It can be argued that this hesitation constitutes a great deal of misunderstanding of the crisis, since there was no plant left to be saved by the time the decision was made.

The fifth and sixth aspects of the GPM paradigm are less relevant to the decisions made. The ‘specific propositions’ refer to the use of force, which was not an option in dealing with the Fukushima crisis; the Japanese Self-Defence Forces were dispatched to the plant, but this was to provide electricity (Kushida, 2016). Furthermore, the sixth aspect, evidence, is not as important in this case, since many scholars, researchers and investigators have written extensively about what happened during the Fukushima crisis, so more than sufficient information is available.

The political and bargaining game in the decision to inject seawater into the reactors is clearly visible. The different actors in the game had different goals; eventually the government won this game and the decision to inject seawater was made. Even before that, the plant manager had already begun to inject seawater because the situation was too dire.

Conclusion

This essay reviewed decision making during the Fukushima Daiichi nuclear power plant disaster of 11 March 2011. More specifically, the decision to inject seawater into the reactors to cool them was scrutinized using the Governmental Politics Model. The decision to inject seawater into the reactors was the result of a bargaining game, and different actors with different objectives played the decision-making ‘game’.

2019-3-18-1552918037

Tackling misinformation on social media: college essay help online

As the world of social media expands, the risk of miscommunication rises as more organisations hop on the bandwagon of using the digital realm to their advantage. Twitter, Facebook, Instagram, online forums and other websites have become the main sources of news gathering for many individuals. Information has become easily accessible to people from all walks of life, meaning that they are becoming more engaged with real-life issues. Consumers absorb and take in information as easily as ever before, which proves to be equally advantageous and disadvantageous. But there is an evident boundary between misleading and truthful information that is hard to cross without research on the topic. The accuracy of public information is highly questionable, which can easily lead to problems. Despite the ongoing debate about source credibility on any platform, there are ways to tackle the issue through “expertise/competence (i.e., the degree to which a perceiver believes a sender to know the truth), trustworthiness (i.e., the degree to which a perceiver believes a sender will tell the truth as he or she knows it), and goodwill” (Cronkhite & Liska, 1976). This is why it has become critical for information to be accurate, ethical and reliable for consumers. Verifying information is important regardless of the type of social media outlet. This essay will highlight why information needs to meet these criteria.

Putting out credible information prevents and reduces misconceptions, convoluted meanings and inconsistent facts, which reduces the likelihood of problems surfacing. This in turn saves time for the consumer and the producer. The presence of risk raises the issue of how much of this information should be consumed by the public. The perception of source credibility becomes an important concept to analyse within social media, especially in times of crisis, when rationality is reduced and people often simply take in the first thing they see. With the increasing amount of information available through newer channels, the release of information has devolved away from professionals and producers and onto consumers (Haas & Wearden, 2003). Much of the public is unaware that this information is prone to bias and selective sharing, which can communicate the actual facts quite differently. One example is the incident at Tokyo Electric Power Co.’s Fukushima No. 1 nuclear power plant in 2011, where the plant experienced triple meltdowns. There is a misconception that food exported from Fukushima is too contaminated with radioactive substances to be safe to eat, but the truth is that strict screening shows the contamination is below the government standard required to pose a threat. (arkansa.gov.au) Since then, products shipped from Fukushima have dropped considerably in price and have not recovered since 2011, forcing retailers into bankruptcy. (japantimes.co.jp) But thanks to the use of social media and organisations releasing information to the public, Fukushima was able to raise funds and receive help from other countries, for example the U.S. sending $100,000 and China sending emergency supplies as assistance. (theguardian.com) This would have been impossible to achieve without the sharing of credible, reliable and ethical information about the country and without social media support spotlighting the incident.

Accurate, ethical and reliable information opens the pathway for producers to secure a relationship with consumers, which can be used to strengthen their businesses and expand their industries further whilst gaining support from the public. The idea is to have a healthy relationship without an air of uneasiness, in which both monetary gains and social standing increase, with social media playing a pivotal role in deciding the route the relationship takes. But when done incorrectly, organisations can become unsuccessful if they know little to nothing about the changed dynamics of consumers and behaviour in the digital landscape. Consumer informedness means that consumers are precisely informed about the products or services available, influencing their willingness to make decisions. This increase in consumer informedness can instigate changes in consumer behaviour. (uni-osnabrueck.de) In the absence of accurate, ethical and reliable information, people and organisations will make terrible decisions without hesitation, which leads to losses and steps backwards. As Saul Eslake (Saul-Eslake.com) says, “they will be unable to help or persuade others to make better decisions; and no-one will be able to ascertain whether the decisions made by particular individuals or organisations were the best ones that could have been made at the time”. Recently, a YouTuber named Shane Dawson made a video that sparked controversy for the company Chuck E. Cheese over pizza slices that do not look like they belong to the same pizza. He put forward a theory that parts of the pizzas may have been reheated or recycled from other tables. In response, Chuck E. Cheese responded in multiple media outlets to debunk the theory: “These claims are unequivocally false. We prep the dough daily for our made to order pizzas, which means they’re not always perfectly round, but they are still great tasting.” (https://twitter.com/chuckecheeses) It is worth noting that no information other than pictures backs up the claim that they reused the pizza. The food company has also gone as far as to create a video showing the pizza preparation, and to support it further, ex-employees spoke up and shared their own side of the story to debunk the theory. These quick responses saved what could have become a small downturn in sales for the Chuck E. Cheese company. (washingtonpost.com) This event highlights how the release of information can fall in favour of whoever utilises it correctly, and how the effectiveness of credible information should be taken to heart. Credible information cuts both ways, especially when it has the support of others, whether online or in real life. An assumption or guess made when there is no information to base it on is called a ‘heuristic value’, which is associated with information that has no credibility.

Mass media have been a dominant source of information (Murch, 1971). They are generally assumed to provide credible, valuable and ethical information open to the public (Heath, Liao, & Douglas, 1995). However, along with traditional forms of media, newer media are increasingly available for information seeking and reporting. According to PNAS (www.pnas.org), “The emergence of social media as a key source of news content has created a new ecosystem for the spreading of misinformation. This is illustrated by the recent rise of an old form of misinformation: blatantly false news stories that are presented as if they are legitimate. So-called “fake news” rose to prominence as a major issue during the 2016 US presidential election and continues to draw significant attention.” This affects how we as social beings perceive and analyse information we see online compared to real life. Beyond just reducing an intervention’s effectiveness, failing to distinguish real stories from false ones increases belief in false content, leading to biased and misleading material that fools the audience. One such incident is Michael Jackson’s death in June 2009, when he died from acute propofol and benzodiazepine intoxication administered by his doctor, Dr. Murray. (nytimes.com) Much of the public concluded that Michael Jackson had been murdered on purpose, but the court convicted Dr. Murray of involuntary manslaughter, as the doctor maintained that Jackson had begged him to give more; a fact that was overlooked by the general public due to bias. This underlines how information is selectively picked up by the public and how not all information is revealed, swaying the audience. A study conducted online by Jason and his team (JCMC [CQU]) revealed that Facebook users tended to believe their friends almost instantly, even without a link or proper citation to a website to back up their claim: “Using a person who has frequent social media interactions with the participant was intended to increase the external validity of the manipulation.” This means that whether online information is taken as truth is left to the perception of the viewer, linking to the idea that information online is not fully credible unless it comes straight from the source, and underlining the importance of releasing credible information.

Information has the power to inform, explain and expand on topics and concepts. But it also has the power to create inaccuracies and confusion, which hurts the public and damages the reputation of companies. The goal is to move forward, not backwards. Many companies have gotten themselves into disputes because of incorrect information, which could easily have been avoided by releasing accurate, ethical and reliable information from the beginning. False information can start disputes, and true information can provide resolution. The public has become less attentive to mainstream news altogether, which raises the problem of what can be trusted. Companies and organisations need their information to be as accurate and reliable as possible to defeat and reduce this issue. Increased negativity and incivility exacerbate the media’s credibility problem: “People of all political persuasions are growing more dissatisfied with the news, as levels of media trust decline.” (JCMC [CQU]) In 2010, Dannon released an online statement and false advertisement claiming that its Activia yogurt had “special bacterial ingredients.” A consumer named Trish Wiener lodged a complaint against Dannon. The yogurts were being marketed as “clinically” and “scientifically” proven to boost the immune system and help regulate digestion. However, the judge saw this statement as unproven, as were similar claims on many other products in their line. “This landed the company a $45 million class action settlement.” (businessinsider.com) It did not help that Dannon’s prices for their yogurt were inflated compared to other yogurts on the market. “The lawsuit claims Dannon has spent ‘far more than $100 million’ to convey deceptive messages to U.S. consumers while charging 30 percent more than other yogurt products.” (reuters.com) This highlights how inaccurate information can cost millions of dollars to settle and resolve. However, it also shows how the public can readily hold irresponsible producers to account for their actions and give leeway to justice.

2019-5-2-1556794982

Socio-political significance of Turkey’s emergent neo-Ottoman cultural phenomenon

Over the last decade, Turkey’s cultural sphere has witnessed a wave of Ottomania—a term describing the recent cultural fervor for everything Ottoman. Although this neo-Ottoman cultural phenomenon is not entirely new, since it had its previous cycle back in the 1980s and 1990s during the heyday of Turkey’s political Islam, it now has a rather novel characteristic and distinct pattern of operation. This revived Ottoman craze is discernible in what I call the neo-Ottoman cultural ensemble—referring to a growing array of Ottoman-themed cultural productions and sites that evoke Turkey’s Ottoman-Islamic cultural heritage. For example, the celebration of the 1453 Istanbul conquest no longer merely takes place as an annual public commemoration by the Islamists,[1] but has been widely promulgated, reproduced, and consumed in various forms of popular culture such as: the Panorama 1453 History Museum; a fun ride called the Conqueror’s Dream (Fatih’in Rüyası) at the Vialand theme park; the highly publicized and high-grossing blockbuster The Conquest 1453 (Fetih 1453); and the primetime television costume drama The Conqueror (Fatih). It is the “banal”, or “mundane,” ways of everyday practice of society itself, rather than the government or state institutions, that distinguish this emergent form of neo-Ottomanism from its earlier phases.[2]

This is the context in which the concept of neo-Ottomanism has acquired its cultural dimension and analytical currency for comprehending the proliferating neo-Ottoman cultural phenomenon. However, when the concept is employed in contemporary cultural debates, it generally follows two trajectories that are common in the literature of Turkish domestic and foreign politics. These trajectories conceptualize neo-Ottomanism as an Islamist political ideology and/or a doctrine of Turkey’s foreign policy in the post-Cold War era. This essay argues that these two conventional conceptions tend to overlook the complexity and hybridity of Turkey’s latest phase of neo-Ottomanism. As a result, they tend to understand the emergent neo-Ottoman cultural ensemble as merely a representational apparatus of the neoconservative Justice and Development Party’s (AKP; Adalet ve Kalkınma Partisi) ideology and diplomatic strategy.

This essay hence aims to reassess the analytical concept of neo-Ottomanism and the emergent neo-Ottoman cultural ensemble by undertaking three tasks. First, through a brief critique of the concept of neo-Ottomanism, I will discuss its common trajectories and limitations for comprehending the latest phase of the neo-Ottoman cultural phenomenon. My second task is to propose a conceptual move from neo-Ottomanism to Ottomentality by incorporating the Foucauldian perspective of governmentality. Ottomentality is an alternative concept that I deploy here to underscore the overlapping relationship between neoliberal and neo-Ottoman rationalities in the AKP’s government of culture and diversity. I contend that neoliberalism and neo-Ottomanism are inseparable governing rationalities of the AKP and their convergence has engendered new modes of governing the cultural field as well as regulating inter-ethnic and inter-religious relations in Turkey. And finally, I will reassess the neo-Ottoman cultural ensemble through the analytical lens of Ottomentality. I contend that the convergence of neoliberal and neo-Ottoman rationalities has significantly transformed the relationships of state, culture, and the social. As the cases of the television historical drama Magnificent Century (Muhteşem Yüzyıl) and the film The Conquest 1453 (Fetih 1453) shall illustrate, the neo-Ottoman cultural ensemble plays a significant role as a governing technique that constitutes a new regime of truth based on market mentality and religious truth. It also produces a new subject of citizenry, who is responsible for enacting its right to freedom through participation in the culture market, complying with religious norms and traditional values, and maintaining a difference-blind and discriminatory model of multiculturalism.

A critique of neo-Ottomanism as an analytical concept

Although the concept of neo-Ottomanism has been commonly used in Turkish Studies, it has become a loose term referring to anything associated with the Islamist political ideology, nostalgia for the Ottoman past, and imperialist ambition of reasserting Turkey’s economic and political influence within the region and beyond. Some scholars have recently indicated that the concept of neo-Ottomanism is running out of steam as it lacks meaningful definition and explanatory power in studies of Turkish politics and foreign policy.[3] The concept’s ambiguity and impotent analytical and explanatory value is mainly due to the divergent, competing interpretations and a lack of critical evaluation within the literature.[4] Nonetheless, despite the concept being equivocally defined, it is most commonly understood in two identifiable trajectories. First, it is conceptualized as an Islamist ideology, responding to the secularist notions of modernity and nationhood and aiming to reconstruct Turkish identity by evoking Ottoman-Islamic heritage as an essential component of Turkish culture. Although neo-Ottomanism was initially formulated by a collaborative group of secular, liberal, and conservative intellectuals and political actors in the 1980s, it is closely linked to the consolidated socio-economic and political power of the conservative middle class. This trajectory considers neo-Ottomanism as primarily a form of identity politics and a result of political struggle in opposition to the republic’s founding ideology of Kemalism. Second, it is understood as an established foreign policy framework reflecting the AKP government’s renewed diplomatic strategy in the Balkans, Central Asia, and Middle East wherein Turkey plays an active role. This trajectory regards neo-Ottomanism as a political doctrine (often referring to Ahmet Davutoglu’s Strategic Depth serving as the guidebook for Turkey’s diplomatic strategy in the 21st century), which sees Turkey as a “legitimate heir of the Ottoman Empire”[5] and seeks to reaffirm Turkey’s position in the changing world order in the post-Cold War era.[6]

As a result of a lack of critical evaluation of the conventional conceptions of neo-Ottomanism, contemporary cultural analyses have largely followed the “ideology” and “foreign policy” trajectories as explanatory guidance when assessing the emergent neo-Ottoman cultural phenomenon. I contend that the neo-Ottoman cultural phenomenon is more complex than what these two trajectories offer to explain. Analyses that adopt these two approaches tend to run a few risks. First, they tend to perceive neo-Ottomanism as a monolithic imposition upon society. They presume that this ideology, when inscribed onto domestic and foreign policies, somehow has a direct impact on how society renews its national interest and identity.[7] And they tend to understand the neo-Ottoman cultural ensemble as merely a representational device of the neo-Ottomanist ideology. For instance, Şeyda Barlas Bozkuş, in her analyses of the Miniatürk theme park and the 1453 Panorama History Museum, argues that these two sites represent the AKP’s “ideological emphasis on neo-Ottomanism” and “[create] a new class of citizens with a new relationship to Turkish-Ottoman national identity.”[8] Second, contemporary cultural debates tend to overlook the complex and hybrid nature of the latest phase of neo-Ottomanism, which rarely operates on its own, but more often relies on and converges with other political rationalities, projects, and programs. As this essay shall illustrate, when closely examined, the current configuration of neo-Ottomanism is more likely to reveal internal inconsistencies as well as a combination of multiple and intersecting political forces.

Moreover, as a consequence of the two risks mentioned above, contemporary cultural debates may have overlooked some of the symptomatic clues and, hence, underestimated the socio-political significance of the latest phase of neo-Ottomanism. A major symptomatic clue that is often missed in cultural debates on the subject is culture itself. Insufficient attention has been paid to the AKP’s rationale of reconceptualizing culture as an administrative matter—a matter that concerns how culture is to be perceived and managed, by what culture the social should be governed, and how individuals might govern themselves with culture. At the core of the AKP government’s politics of culture and neoliberal reform of the cultural field is the question of the social.[9] Its reform policies, projects, and programs are a means of constituting a social reality and directing social actions. When culture is aligned with neoliberal governing rationality, it redefines a new administrative culture and new rules and responsibilities of citizens in cultural practices. Culture has become not only a means to advance Turkey in global competition,[10] but also a technology of managing the diversifying culture resulting from the process of globalization. As Brian Silverstein notes, “[culture] is among other things and increasingly to be seen as a major target of administration and government in a liberalizing polity, and less a phenomenon in its own right.”[11] While many studies acknowledge the AKP government’s neoliberal reform of the cultural field, they tend to regard neo-Ottomanism as primarily an Islamist political agenda operating outside of the neoliberal reform. It is my conviction that neoliberalism and neo-Ottomanism are inseparable political processes and rationalities, which have merged and engendered new modalities of governing every aspect of cultural life in society, including minority cultural rights, freedom of expression, individuals’ lifestyle, and so on. Hence, by overlooking the “centrality of culture”[12] in relation to the question of the social, contemporary cultural debates tend to oversimplify the emergent neo-Ottoman cultural ensemble as nothing more than an ideological machinery of the neoconservative elite.

From neo-Ottomanism to Ottomentality

In order to more adequately assess the socio-political significance of Turkey’s emergent neo-Ottoman cultural phenomenon, I propose a conceptual shift from neo-Ottomanism to Ottomentality. This shift involves not only rethinking neo-Ottomanism as a form of governmentality, but also thinking neoliberal and neo-Ottoman rationalities in collaborative terms. Neo-Ottomanism is understood here as Turkey’s current form of neoconservatism, a prevalent political rationality whose governmental practices are not solely based on Islamic values, but also draw from and produce a new political culture that considers Ottoman-Islamic toleration and pluralism as the foundation of modern liberal multiculturalism in Turkey. Neoliberalism, in the same vein, far from a totalizing concept describing an established set of political ideology or economic policy, is conceived here as a historically and locally specific form of governmentality that must be analyzed by taking into account the multiple political forces which gave it its unique shape in Turkey.[13] My claim is that when these two rationalities merge at the cultural domain, they engender a new art of government, which I call the government of culture and diversity.

This approach is therefore less concerned with a particular political ideology or the question of “how to govern,” but more about the “different styles of thought, their conditions of formation, the principles and knowledges that they borrow from and generate, the practices they consist of, how they are carried out, their contestations and alliances with other arts of governing.”[14] In light of this view, and for a practical purpose, Ottomentality is an alternative concept that I attempt to develop here to avoid the ambiguous meanings and analytical limitations of neo-Ottomanism. This concept underscores the convergence of neoliberal and neo-Ottoman rationalities as well as the interrelated discourses, projects, policies, and strategies that are developed around them for regulating cultural activities and directing inter-ethnic and inter-religious relations in Turkey. It pays attention to the techniques and practices that have significant effects on the relationships of state, culture, and the social. It is concerned with the production of knowledge, or truth, based on which a new social reality of ‘freedom,’ ‘tolerance,’ and ‘multiculturalism’ in Turkey is constituted. Furthermore, it helps to identify the type of political subject, whose demand for cultural rights and participatory democracy is reduced to market terms and a narrow understanding of multiculturalism, and whose criticism of this new social reality is increasingly subjected to judicial exclusion and discipline.

I shall note that Ottomentality is an authoritarian type of governmentality—a specific type of illiberal rule operated within the structure of modern liberal democracy. As Mitchell Dean notes, although the literature on governmentality has focused mainly on liberal democratic rules that are practiced through the individual subjects’ active role (as citizens) and exercise of freedom, there are also “non-liberal and explicitly authoritarian types of rule that seek to operate through obedient rather than free subjects, or, at a minimum, endeavor to neutralize any opposition to authority.”[15] He suggests that a useful way to approach this type of governmentality would be to identify the practices and rationalities which “divide” or “exclude” those who are subjected to be governed.[16] According to Foucault’s notion of “dividing practices,” “[t]he subject is either divided inside himself or divided from others. This process objectivizes him. Examples are the mad and the sane, the sick and the healthy, the criminals and the ‘good boys’.”[17] Turkey’s growing neo-Ottoman cultural ensemble can be considered as such exclusionary practices, which seek to regulate the diversifying culture by dividing the subjects into categorical, if not polarized, segments based on their cultural differences. For instance, mundane practices such as going to the museums and watching television shows may produce subject positions which divide subjects into such categories as the pious and the secular, the moral and the degenerate, and the Sunni-Muslim-Turk and the ethno-religious minorities.

Reassessing the neo-Ottoman cultural ensemble through the lens of Ottomentality

In this final section, I propose a reassessment of the emergent neo-Ottoman cultural ensemble by looking beyond the conventional conceptions of neo-Ottomanism as “ideology” and “foreign policy.” Using the analytical concept of Ottomentality, I aim to examine the state’s changing role and governing rationality in culture, the discursive processes of knowledge production for rationalizing certain practices of government, and the techniques of constituting a particular type of citizenry who acts upon themselves in accordance with the established knowledge/truth. Nonetheless, before proceeding to an analysis of the government of culture and diversity, a brief overview of the larger context in which the AKP’s Ottomentality took shape would be helpful.

Context

Since the establishment of the Turkish republic, the state has played a major role in maintaining a homogeneous national identity by suppressing public claims of ethnic and religious differences through militaristic intervention. The state’s strict control of cultural life in society, in particular its assertive secularist approach to religion and ethnic conception of Turkish citizenship, resulted in unsettling tensions between ethno-religious groups in the 1980s and 1990s, i.e. the Kurdish question and the 1997 “soft coup.” These social tensions indicated the limits of state-led modernization and secularization projects in accommodating ethnic and pious segments of society.[18] This was also a time when Turkey began to witness the declining authority of the founding ideology of Kemalism as an effect of economic and political liberalization. When the AKP came to power in 2002, one of the most urgent political questions was thus “the limits of what the state can—or ought for its own good—reasonably demand of citizens […] to continue to make everyone internalize an ethnic conception of Turkishness.”[19] At this political juncture, it was clear that a more inclusive socio-political framework was necessary in order to mitigate the growing tension resulting from identity claims.

Apart from domestic affairs, a few vital transnational initiatives also played a part in the AKP’s formulation of neoliberal and neo-Ottoman rationalities. First, in the aftermath of the attacks in New York on September 11 (9/11) in 2001, the Middle East and Muslim communities around the world became the target of intensified political debates. In the midst of anti-Muslim and anti-terror propaganda, Turkey felt a need to rebuild its image by aligning with the United Nations’ (UN) resolution of “The Alliance of Civilizations,” which called for cross-cultural dialogue between countries through cultural exchange programs and transnational business partnership.[20] Turkey took on the leading role in this resolution and launched extensive developmental plans that were designated to rebuild Turkey’s image as a civilization of tolerance and peaceful co-existence.[21] The Ottoman-Islamic civilization, known for its legacy of cosmopolitanism and ethno-religious toleration, hence became an ideal trademark of Turkey for the project of “alliance of civilizations.”[22]

Second, Turkey’s accelerated EU negotiation between the late 1990s and mid 2000s provided a timely opportunity for the newly elected AKP government to launch “liberal-democratic reform,”[23] which would significantly transform the way culture was to be administered. Culture, among the prioritized areas of administrative reform, was now reorganized to comply with the EU integration plan. By incorporating the EU’s aspect of culture as a way of enhancing “freedom, democracy, solidarity and respect for diversity,”[24] the AKP-led national cultural policy would shift away from the state-centered, protectionist model of the Kemalist establishment towards one that highlights “principles of mutual tolerance, cultural variety, equality and opposition to discrimination.”[25]

Finally, the selection of Istanbul as 2010 European Capital of Culture (ECoC) is particularly worth noting as this event enabled local authorities to put into practice the neoliberal and neo-Ottoman governing rationalities through extensive urban projects and branding techniques. By sponsoring and showcasing different European cities each year, the ECoC program aims at promoting a multicultural European identity beyond national borders.[26] The 2010 Istanbul ECoC was an important opportunity for Turkey not only to promote its EU candidacy, but also for the local governments to pursue urban developmental projects.[27] Some of the newly formed Ottoman-themed cultural sites and productions were a part of the ECoC projects for branding Istanbul as a cultural hub where the East and West meet. It is in this context that the interplay between the neoliberal and neo-Ottoman rationalities can be vividly observed in the form of the neo-Ottoman cultural ensemble.

Strong state, culture, and the social

Given the contextual background mentioned above, one could argue that the AKP’s neoliberal and neo-Ottoman rationalities arose as critiques of the republican state’s excessive intervention in society’s cultural life. The transnational initiatives that required Turkey to adopt a liberal democratic paradigm have therefore given way to the formulation and convergence of these two forms of governmentalities that would significantly challenge the state-centered approach to culture as a means of governing the social. However, it would be inaccurate to claim that the AKP’s prioritization of private initiatives in cultural governance has effectively decentralized or democratized the cultural domain from the state’s authoritarian intervention and narrow definition of Turkish culture. Deregulation of culture entails sophisticated legislation concerning the roles of the state and civil society in cultural governance. Hence, for instance, the law of promotion of culture, the law of media censorship, and the new national cultural policy prepared by the Ministry of Culture and Tourism explicitly indicate not only a new vision of national culture, but also the roles of the state and civil society in promoting and preserving national culture. It shall be noted that culture as a governing technology is not an invention of the AKP government. Culture has always been a major area of administrative concern throughout the history of the Turkish republic. As Murat Katoğlu illustrates, during the early republic, culture was conceptualized as part of the state-led “public service” aimed to inform and educate the citizens.[28] Arts and culture were essential means for modernizing the nation; for instance, the state-run cultural institutions, i.e. state ballet, theater, museum, radio and television, “[indicate] the type of modern life style that the government was trying to advocate.”[29] Nonetheless, the role of the state, the status of culture, and the techniques of managing it have been transformed as Turkey undergoes neoliberal reform. In addition, Aksoy suggests that what distinguishes the AKP’s neoliberal mode of cultural governance from that of the early republic modernization project is that market mentality has become the administrative norm.[30] Culture now is reconceptualized as an asset for advancing Turkey in global competition and a site for exercising individual freedom rather than a mechanism of social engineering. And Turkey’s heritage of Ottoman-Islamic civilization in particular is utilized as a nation branding technique to enhance Turkey’s economy, rather than a corrupt past to be forgotten. To achieve the aim of efficient, hence good, governance, the AKP’s cultural governance has heavily relied on privatization as a means to limit state intervention. Thus, privatization has not only transformed culture into an integral part of the free market, but also redefined the state’s role as a facilitator of the culture market, rather than the main provider of cultural service to the public.

The state’s withdrawal from cultural service and prioritization of the civil society to take on the initiatives of preserving and promoting Turkish “cultural values and traditional arts”[31] lead to an immediate effect of the declining authority of the Kemalist cultural establishment. Since many of the previously state-run cultural institutions are now managed with a corporate mentality, they begin to lose their status as state-centered institutions and their former significance in defining and maintaining a homogeneous Turkish culture. Instead, these institutions, together with other newly formed cultural sites and productions by private initiatives, are converted into a market place or cultural commodities in competition with each other. Hence, privatization of culture leads to the following consequences: First, it weakens and hollows out the 20th century notion of the modern secular nation state, which sets a clear boundary confining religion within the private sphere. Second, it gives way to the neoconservative force, which “models state authority on [religious] authority, a pastoral relation of the state to its flock, and a concern with unified rather than balanced or checked state power.”[32] Finally, it converts social issues that result from political actions into market terms and a sheer matter of culture, which is now left to personal choice.[33] As a result, far from a declining state, Ottomentality has constituted a strong state. In particular, neoliberal governance of the cultural field has enabled the ruling neoconservative government to mobilize a new set of political truth and norms for directing inter-ethnic and inter-religious relations in society.

New regime of truth

Central to Foucault’s notion of governmentality is “truth games”[34]—referring to the activities of knowledge production through which particular thoughts are rendered truthful and practices of government are made reasonable.[35] What Foucault calls the “regime of truth” is not concerned about facticity, but a coherent set of practices that connect different discourses and make sense of the political rationalities marking the “division between true and false.”[36] The neo-Ottoman cultural ensemble is a compelling case through which the AKP’s investment of thought, knowledge production, and truth telling can be observed. Two cases are particularly worth mentioning here as I work through the politics of truth in the AKP’s neoliberal governance of culture and neo-Ottoman management of diversity.

Between 2011 and 2014, the Turkish television historical drama Magnificent Century (Muhteşem Yüzyıl, Muhteşem hereafter), featuring the life of the Ottoman Sultan Süleyman, who is known for his legislative establishment in the 16th century Ottoman Empire, attracted wide viewership in Turkey and abroad, especially in the Balkans and Middle East. Although the show played a significant role in generating international interest in Turkey’s tourism, cuisine, Ottoman-Islamic arts and history, etc. (which are the fundamental aims of the AKP-led national cultural policy to promote Turkey through arts and culture, including media export),[37] it received harsh criticism among some Ottoman(ist) historians and a warning from the RTUK (Radio and Television Supreme Council, a key institution of media censorship and regulation in Turkey). The criticism included the show’s misrepresentation of the Sultan as a hedonist and its harm to the moral and traditional values of society. Oktay Saral, an AKP deputy of Istanbul at the time, petitioned the parliament for a law to ban the show. He said, “[The] law would […] show filmmakers [media practitioners] how to conduct their work in compliance with Turkish family structure and moral values without humiliating Turkish youth and children.”[38] Recep Tayyip Erdoğan (Prime Minister then) also stated, “[those] who toy with these [traditional] values would be taught a lesson within the premises of law.”[39] After his statement, the show was removed from the in-flight channels of the national flag carrier Turkish Airlines.

Another popular media production, the 2012 blockbuster The Conquest 1453 (Fetih 1453, Fetih hereafter), which was acclaimed for its success at the domestic and international box office, also generated mixed reactions among Turkish and foreign audiences. Some critics in Turkey and European Christians criticized the film for its selective interpretation of the Ottoman conquest of Constantinople and offensive portrayal of the (Byzantine) Christians. The Greek weekly To Proto Thema denounced the film as “conquest propaganda by the Turks” that “[failed] to show the mass killings of Greeks and the plunder of the land by the Turks.”[40] A Turkish critic also commented that the film portrays the “extreme patriotism” in Turkey “without any hint of […] tolerance sprinkled throughout [the film].”[41] Furthermore, a German Christian association campaigned to boycott the film. The AKP officials, on the contrary, praised the film for its genuine representation of the conquest. As Bülent Arınç (Deputy Prime Minister then) stated, “This is truly the best film ever made in the past years.”[42] He also responded to questions regarding the film’s historical accuracy: “This is a film, not a documentary. The film in general fairly represents all the events that occurred during the conquest as the way we know it.”[43]

When Muhteşem and Fetih are examined within the larger context in which the neo-Ottoman cultural ensemble is formed, the connections between particular types of knowledge and governmental practice become apparent. First, the cases of Muhteşem and Fetih reveal the saturation of market rationality as the basis for a new model of cultural governance. When culture is administered in market terms, it becomes a commodity for sale and promotion as well as an indicator of a number of things for measuring the performance of cultural governance. When Turkey’s culture, in particular Ottoman-Islamic cultural heritage, is converted into an asset and national brand to advance the country in global competition, the reputation and capital it generates become indicators of Turkey’s economic development and progress. The overt emphasis on economic growth, according to Irving Kristol, is one of the distinctive features that differentiate the neoconservatives from their conservative predecessors. He suggests that, for the neoconservatives, economic growth is what gives “modern democracies their legitimacy and durability.”[44] In the Turkish context, the rising neoconservative power, which consisted of a group of Islamists and secular, liberal intellectuals and entrepreneurs (at least in the early years of the AKP’s rule), had consistently focused on boosting Turkey’s economy. For them, economic development seems to have become the appropriate way of making “conservative politics suitable to governing a modern democracy.”[45] Henceforth, such high-profile cultural productions as Muhteşem and Fetih are valuable assets that serve the primary aim of the AKP-led cultural policy because they contribute to growth in the related areas of tourism and the culture industry by promoting Turkey at the international level. Based on market rationality, as long as culture can generate productivity and profit, the government is doing a splendid job in governance. In other words, when neoliberal and neoconservative forces converge at the cultural domain, both culture and good governance are reduced to and measured by economic growth, which has become a synonym for democracy “equated with the existence of formal rights, especially private property rights; with the market; and with voting,” rather than political autonomy.[46]

Second, the AKP officials’ applause of Fetih on the one hand and criticism of Muhteşem on the other demonstrates their assertion of the moral-religious authority of the state. As the notion of nation state sovereignty has become weakened by the processes of economic liberalization and globalization, the boundary that separates religion and state has become blurred. As a result, religion becomes “de-privatized” and surges back into the public sphere.[47] This blurred boundary between religion and state has enabled the neoconservative AKP to establish links between religious authority and state authority as well as between religious truth and political truth.[48] These links are evident in the AKP officials’ various public statements declaring the government’s moral mission of sanitizing Turkish culture in accordance with Islamic and traditional values. For instance, as Erdoğan once reacted to his secular opponent’s comment about his interference in politics with religious views, “we [AKP] will raise a generation that is conservative and democratic and embraces the values and historical principles of its nation.”[49] According to this view, despite Muhteşem’s contribution to growth in the culture and tourism industries, it was subjected to censorship and legal action because its content did not comply with the governing authority’s moral mission. The controversy over Muhteşem illustrates the rise of a religion-based political truth in Turkey, which sees Islam as the main reference for directing society’s moral conduct and individual lifestyle. Henceforth, by rewarding desirable actions (i.e. with sponsorship law and tax incentives)[50] and punishing undesirable ones (i.e. through censorship, media bans, and jail terms for media practitioners’ misconduct), the AKP-led reform of the cultural field constitutes a new type of political culture and truth—one that is based on moral-religious views rather than rational reasoning.

Moreover, the AKP officials’ support for Fetih reveals their investment in a neo-Ottomanist knowledge, which regards the 1453 Ottoman conquest of Constantinople as the foundation of modern liberal multiculturalism in Turkey. This knowledge perceives Islam as the centripetal force for enhancing social cohesion by transcending differences between faith and ethnic groups. It rejects candid and critical interpretations of history and insists on a singular view of Ottoman-Islamic pluralism and a pragmatic understanding of the relationship between religion and state.[51] It does not require historical accuracy since religious truth is cast as historical and political truth. For instance, a consistent, singular narrative of the conquest can be observed in such productions and sites as the Panorama 1453 History Museum, the television series Fatih, and the TRT children’s program Çınar. This narrative begins with Prophet Muhammad’s prophecy, which he received from the almighty Allah, that Constantinople would be conquered by a great Ottoman soldier. When history is narrated from a religious point of view, it becomes indisputable, as disputing it would imply a challenge to religious truth and hence to Allah’s will. Nonetheless, the neo-Ottomanist knowledge conceives the conquest as not only an Ottoman victory in the past, but also an incontestable living truth in Turkey’s present. As Nevzat Bayhan, former general manager of Culture Inc. in association with the Istanbul Metropolitan Municipality (İBB Kültür A.Ş.), stated at the opening ceremony of Istanbul’s Panorama 1453 History Museum,

The conquest [of Istanbul] is not about taking over the city… but to make the city livable… and its populace happy. Today, Istanbul continues to present to the world as a place where Armenians, Syriacs, Kurds… Muslims, Jews, and Christians peacefully live together.[52]

Bayhan’s statement illustrates the significance of the 1453 conquest in the neo-Ottomanist knowledge because it marks the foundation of a culture of tolerance, diversity, and peaceful coexistence in Turkey. While the neo-Ottomanist knowledge may conveniently serve the branding purpose in the post-9/11 and ECoC contexts, I maintain that it more significantly rationalizes the governmental practices in reshaping the cultural conduct and multicultural relations in Turkey. The knowledge also produces a political norm of indifference—one that is reluctant to recognize ethno-religious differences among the populace, uncritical of the limits of Islam-based toleration and multiculturalism, and more seriously, indifferent to state-sanctioned discrimination and violence against the ethno-religious minorities.

Ottomentality and its subject

The AKP’s practices of the government of culture and diversity constitute what Foucault calls the “technologies of the self—ways in which human beings come to understand and act upon themselves within certain regimes of authority and knowledge, and by means of certain techniques directed to self-improvement.”[53] The AKP’s neoliberal and neo-Ottoman rationalities share a similar aim as they both seek to produce a new ethical code of social conduct and transform Turkish society into a particular kind, which is economically liberal and culturally conservative. They deploy different means to direct the governed in certain ways so as to achieve the desired outcome. According to Foucault, the neoliberal style of government is based on the premise that “individuals should conduct their lives as an enterprise [and] should become entrepreneurs of themselves.”[54] Central to this style of government is the production of freedom—referring to the practices that are employed to produce the necessary condition for the individuals to be free and take on the responsibility of caring for themselves. For instance, Nikolas Rose suggests that consumption, a form of governing technology, is often deployed to provide the individuals with a variety of choices for exercising freedom and self-improvement. As such, the subject citizens are now “active,” or “consumer,” citizens, who understand their relationship with the others and conduct their life based on market mentality.[55] Unlike the republican citizens, whose rights, duties, and obligations are primarily bound to the state, citizens as consumers “[are] to enact [their] democratic obligations as a form of consumption”[56] in the private sphere of the market.

The AKP’s neoliberal governance of culture hence has invested in liberalizing the cultural field by transforming it into a marketplace in order to create such a condition wherein citizens can enact their right to freedom and act upon themselves as a form of investment. The proliferation of the neo-Ottoman cultural ensemble in this regard can be understood as a new technology of the self as it creates a whole new field for the consumer citizens to exercise their freedom of choice (of identity, taste, and lifestyle) by providing them a variety of trendy Ottoman-themed cultural products, ranging from fashion to entertainment. This ensemble also constitutes a whole new imagery of the Ottoman legacy with which the consumer citizens may identify. Therefore, through participation within the cultural field, as artists, media practitioners, intellectuals, sponsors, or consumers, citizens are encouraged to think of themselves as free agents and their actions are a means for acquiring the necessary cultural capital to become cultivated and competent actors in the competitive market. This new technology of the self also has transformed the republican notion of Turkish citizenship to one that is activated upon individuals’ freedom of choice through cultural consumption at the marketplace.

Furthermore, as market mechanisms enhance the promulgation of moral-religious values, the consumer citizens are also offered a choice of identity as virtuous citizens, who should conduct their life and their relationship with the others based on Islamic traditions and values. Again, the public debate over the portrayal of the revered Sultan Süleyman as a hedonist in Muhteşem and the legal actions against the television producer are exemplary of the disciplinary techniques for shaping individuals’ behaviors in line with conservative values. While consumer citizens exercise their freedom through cultural consumption, they are also reminded of their responsibility to preserve traditional moral values, family structure, and gender relations. Those who deviate from the norm are subjected to public condemnation and punishment.

Finally, as the neo-Ottomanist cultural ensemble reproduces and mediates a neo-Ottomanist knowledge in such commodities as the film Fetih and the Panorama 1453 History Museum, consumer citizens are exposed to a new set of symbolic meanings of Ottoman-Islamic toleration, pluralism, and peaceful coexistence, albeit through a view of the Ottoman past fixated on its magnificence rather than its monstrosity.[57] This knowledge sets the ethical code for private citizens to think of themselves in relation to the other ethno-religious groups based on a hierarchical social order, which subordinates minorities to the rule of Sunni Islamic government. When this imagery of magnificence serves as the central component in nation branding, such as to align Turkey with the civilization of peace and co-existence in the post-9/11 and ECoC contexts, it encourages citizens to take pride in and identify with their Ottoman-Islamic heritage. As such, Turkey’s nation branding perhaps can also be considered as a novel technology of the self as it requires citizens, be they business sectors, historians, or filmmakers, to take on their active role in building an image of tolerant and multicultural Turkey through arts and culture. It is in this regard that I consider the neo-Ottoman rationality as a form of “indirect rule of diversity”[58] as it produces a citizenry, who actively participates in the reproduction of neo-Ottomanist historiography and continues to remain uncritical about the “dark legacy of the Ottoman past.”[59] Consequently, Ottomentality has produced a type of subject that is constantly subjected to dividing techniques “that will divide populations and exclude certain categories from the status of the autonomous and rational person.”[60]

2016-10-5-1475705338

The Benefits Of Vaccination

Vaccinations are an important part of keeping children healthy; however, some parents do not agree with vaccinating their children. Vaccines are a very sensitive and controversial subject because of concerns about their safety and efficacy. Our bodies can make antibodies to protect us from diseases in two ways: by getting the disease itself or by getting vaccinated against the disease. The vaccine is a much safer way to make antibodies without having to actually get sick and risk becoming disabled or even dying from the disease. Raising a child involves many decisions. Some are a matter of what color to paint their room. Others are essential, especially when it comes to security, such as preparing the house to make it safe for the baby. But do not forget about the dangers that cannot be seen and which cause serious illness, disability and even death in small children. Vaccines give you the power to protect your baby.

It seems that the benefits of vaccination outweigh the drawbacks. In this paper I will be discussing the benefits of vaccines and why so many parents forgo vaccination because of fears of adverse allergic reactions or even autism. I will also explain why I am pro-vaccination and why other countries do so well and have high success rates among individuals who participate in vaccination programs.

Reducing and eliminating diseases that can be prevented with vaccines is one of the greatest achievements in the history of public health. But because of that success, many of our young parents have never seen the devastating effects that diseases such as polio, measles or whooping cough can have on a family or community. “Production of high quality vaccine have been developed by mankind with the goal to protect themselves effiently from infectious disease” (Ulrich 3). Before a vaccine is approved and administered to children, many tests are done by scientists and medical professionals to carefully evaluate all available information about the vaccine for safety and effectiveness.

There are many stories about children who developed autism after vaccination. But these stories are not evidence, and there is no reason to believe that vaccines are causing children to become autistic. Vaccination scares have led parents to avoid immunizations, while current research points to hereditary factors. “Indirect evidence of a lack of association between MMR vaccine and autism was also provided by early ecological studies conducted in the United Kingdom10 and California.11” (DeStefano).

Rotavirus vaccination has also benefited young children, and its benefits can be seen in the results. There was a 90 percent reduction in hospitalizations among children who had received the vaccination, and the vaccination helped save about 924 million dollars in medical bills (Kuehn).

In fact, parents who do not vaccinate their children may be risking the health of other children who cannot be vaccinated because they are very young or for other reasons. Vaccination can be the difference between life and death. Infections that can be prevented with vaccines kill more people annually than breast cancer or traffic accidents. “In the first half of 2012, Washington suffered 2,520 cases of whooping cough, a 1,300% increase from the previous year and the largest outbreak in the state since 1942. As of Aug. 29, about 600 cases of measles have occurred in the U.S. in 2014: the largest outbreak in 20 years–in a country that the Centers for Disease Control and Prevention declared measles-free in 2000” (Offit).

When the number of unvaccinated children rises above a certain limit, the so-called “herd immunity” is compromised, and preventable diseases can take hold in the community. This happens when parents are anti-vaccination because of misinformation or research that has been debunked, and they forgo vaccinations believing that they are doing the best for their child. “The state allows religious and philosophical exemptions to vaccines, creating concentrated pockets where vaccination rates reach only 90 percent — including Orange County, where Disneyland is located. As vaccination rates fall, the herd immunity that protects the unvaccinated or susceptible dissipates” (Barbash).

I am pro-vaccination because, unlike other parents, I have done the research and I prefer to be safe rather than sorry. I want my children who attend public schools to be safe and healthy, and also to keep others safe by vaccinating my children. You may have never seen a case of polio or diphtheria; however, these diseases still occur in other countries. With only a plane ride, they can reach your community. An example is measles. Measles does not often occur in the United States due to vaccination, but it is still common in many parts of the world. The disease is brought to the United States by people who have not been vaccinated and who become infected while they are abroad. After arriving in this country, measles can spread rapidly among people who have not been vaccinated.

Conclusion: I have shown in this paper that vaccinations are good for the welfare of our children, even though many parents are still against vaccination. The articles that I have researched and provided prove that my argument is valid.

Citations

Offit, Paul A. “The Anti-Vaccination Epidemic; Whooping Cough, Mumps and Measles are Making an Alarming Comeback, Thanks to Seriously Misguided Parents.” Wall Street Journal (Online) Sep 24 2014 ProQuest. 4 Oct. 2015.

DeStefano, F. “Vaccines And Autism: Evidence Does Not Support A Causal Association.” Clinical Pharmacology And Therapeutics 82.6 (2007): 756-759. MEDLINE Complete. Web. 11 Oct. 2015

Kuehn, Bridget M. “High Rotavirus Vaccination Rates Continue To Pay Off.” Jama 312.1 (2014): 18. MEDLINE Complete. Web. 12 Oct. 2015

Barbash, F. (2015). Disneyland measles outbreak strikes in anti-vaccination hotbed of california. Washington: WP Company LLC d/b/a The Washington Post. Retrieved from http://search.proquest.com/docview/1648057747?accountid=8289

Heininger, Ulrich. “A Risk–benefit Analysis of Vaccination.” Vaccine. Print.

Differences between Biblical theology, Systematic theology and Historical theology

Graeme Goldsworthy says, “you will never be a good biblical theologian if you are not also striving to be a good systematic and historical theologian” (Themelios 28, No. 1, 2002). Discuss.

In the following essay I will be looking at and discussing the differences between Biblical theology, Systematic theology and Historical theology, looking at the benefits of each and how they work together, as well as evaluating the Goldsworthy/Trueman debate.  Graeme Goldsworthy says in his book According to Plan:

“Who of us does not find at least some parts of the Bible difficult to understand?  It is easy to ignore the problems by keeping to the well-worn paths of familiar passages.  But when we begin to take seriously the whole Bible is the Word of God, we find ourselves on a collision path with the difficulties.  It is at this point that we need biblical theology to show us how to read and understand the Bible.” (Goldsworthy, 1991, p. 17)

The main aim of this essay is to explore the different methods of theology in relation to studying the Word of God, so that we can avoid misunderstanding what a particular passage actually means, read the Bible more efficiently, and put the Word of God into our own lives more effectively.

Millard J. Erickson describes the definition of Theology simply as “The study or science of God” (Erickson, 1983, p. 21)

Biblical theology:

Biblical theology is a method of studying the Bible where you look for overlapping themes that occur throughout the Bible.  Graeme Goldsworthy points out in his book According to Plan (Goldsworthy, 1991) that a fundamental theme of the Bible is the covenants that God makes, so we see throughout the Bible God making covenants with people and nations.  Using biblical theology while we study the Bible helps give us a better picture of how each verse, chapter and book works together in the bigger picture and how it is all relevant.

Systematic theology:

Systematic theology is a method of studying the Bible that is sometimes called Constructive theology or Dogmatic theology.  Its aim is to look at different topics of the Bible (sin, etc.), one topic at a time, and to summarise everything the Bible says about that topic in a well-organised and structured way.

Historical theology:

Historical theology is a method of studying the Bible where it looks back over the history of the church since the Bible was written, and looks at how Christian theology has developed.  It looks at parts of history where Christian doctrines and beliefs have changed.

Each one of these different styles of theology is a great help to us as we sit down and personally start studying the Bible or even just a certain passage in the Bible.  Not only is each one on its own a great help and resource to us as we study the word of God, but when we start to bring these styles together and put each of them into practice, we are opened up to a whole new view and understanding of the Bible.

Benefits of each:

If we look at the benefits of using the method of Biblical theology, we see that it helps us with passages in the Bible where we maybe just do not have a clue, like “You shall not boil a young goat in its mother’s milk” (Exodus 23:19 ESV), and wonder why it is there or what it even means. Then we come to a verse like “All Scripture is breathed out by God and profitable for teaching, for reproof, for correction, and for training in righteousness” (2 Timothy 3:16 ESV) and we struggle to figure out how Exodus 23:19 has any benefit to us or any relevance to our lives.  Biblical theology helps us to look at how that certain passage fits in conjunction with the rest of God’s word.

The benefit of systematic theology is that it helps us take a specific passage from the Bible and place it into a category with certain topics that appear throughout the Bible.   When preparing a sermon or a Bible study on a particular topic, using systematic theology can be a great help when looking for another verse in the Bible that is related to the passage you have, or that backs up or corroborates it.

Gregg R. Allison says in his book Historical Theology:

“One benefit that historical theology offers the church today is helping it distinguish orthodoxy from heresy. The term orthodoxy here refers to that which the new Testament calls “sound doctrine” (1 Tim. 1:10; 2 Tim. 4:3; Titus 1:9; 2:1), that which rightly reflects in summary form all the teaching of Scripture and which the church is bound to believe and obey.4 Heresy, then, is anything that contradicts sound doctrine. It is false belief that misinterprets Scripture or that ignores some of the teaching of Scripture, or that incorrectly puts together all the teaching of Scripture.” (Allison, 2011, p. 24)

I would agree with this statement made by Gregg R. Allison, because as we look back on the history of the church and study it, we start to see which Christian doctrines the church has accepted and declined, and from that we can examine whether what we are about to teach or read fits in with the Christian doctrine we believe in.  Gregg R. Allison also states in his book Historical Theology: “historical theology helps the church understand the historical development of its beliefs.”  So we see that by using Historical theology we are able to get a better grip and understanding of how our beliefs as Christians have developed and been strengthened throughout history, and that even today they continue to develop.

Working together.

Biblical & Systematic theology:

Don Carson says in his article, how to read the Bible and do Theology well:  “In some ways, BT is a kind of bridge discipline between exegesis and ST because it overlaps with them, enabling them to hear each other a little better.” (Carson, 2015)

In order for us to use systematic theology effectively we must use biblical theology with it, and vice versa.  We see that systematic theology looks at a passage and draws out different meanings within that passage, and that biblical theology will seek to draw out the one main meaning within the passage.  So for systematic theology to be sound in doctrine it must cross-examine that passage with biblical theology in order to confirm whether or not the meanings fit in with the big picture of the Bible.  Using biblical theology alongside systematic theology gives it a stronger foundation.

When using biblical theology we are looking for a constant theme throughout the Bible, but when we use systematic theology alongside it, we are able to better organise our findings and are better equipped to cross-reference passages that contain the same theme.

Richard B. Gaffin, Jr. states in his article “Systematic Theology and Biblical Theology”: “Biblical theology focuses on revelation as an historical activity and so challenges systematic theology to do justice to the historical character of revealed truth.”

Biblical & Historical theology:

Don Carson states:

“Both BT and HT are aware of the passage of time in their respective disciplines: BT focuses on the time during which the biblical documents were written and collected, while HT focuses on the study of the Bible from the time it was completed. Put otherwise, BT focuses on the Bible, while HT focuses on what significant figures have believed about the Bible. BT functions best when interacting with HT.” (Carson, 2015)

Don Carson states in this paragraph that Biblical Theology functions best when interacting with Historical Theology.  I find this statement to be correct, as we can see that both coincide with one another.  Biblical theology uses historical theology when it is looking for main themes throughout the Bible.   When trying to discern whether a certain theme found within the Scriptures is a true theme, or whether it is just a theme that seems like it would belong there, we bring in historical theology and look back over the history of the Christian church and the doctrines of the church to see if that theme actually fits in with what Christian doctrine believes.

Systematic & Historical theology:

Don Carson says:

“When studying what the Bible teaches about a particular subject (ST), one must integrate HT. In some measure, ST deals with HT’s categories, but ST’s priorities and agenda ideally address the contemporary age at the most critical junctures.” (Carson, 2015)

In order for us to use systematic theology to its full potential, we must use historical theology alongside it.  Systematic theology will look back over the history of the Christian church, will look at different themes that have occurred, and will then look at what the Bible teaches about those themes.  Historical theology will use systematic theology when trying to organise and separate themes that have occurred over a period of time in the Christian church.

Biblical theology, systematic theology and historical theology all overlap and work together: we see that in order to use biblical theology correctly and efficiently we must use historical theology and systematic theology alongside it.  Systematic theology will use biblical theology in order to make sure that its findings relate to the big picture of the Bible, as well as to create a strong foundation for those findings.  Historical theology will use both biblical and systematic theology in order to examine how Christian doctrines and beliefs throughout the history of the Christian church relate to the teachings of the Bible.

Goldsworthy/Trueman debate:

In Carl R. Trueman’s article A Revolutionary Balancing Act, Trueman is trying to get across the point that we mostly only need systematic theology in our churches today, and that while he welcomes a certain model of Biblical theology being used while studying the Bible, he says that it can take away from the true meaning of passages within the Bible.  He makes this statement in his article:

“We all know the old joke about the Christian fundamentalist who, when asked what was grey, furry, and lived in a tree, responded that ‘It sure sounds like a squirrel, but I know the answer to every question is ‘Jesus”.” (Trueman, 2002)

Basically, when we use biblical theology to exegete a passage of Scripture, we manipulate it in a certain way so that we always come to Jesus.  He then goes on to say that some passages will not be about Jesus and will have a different meaning and purpose altogether, and that by using systematic theology to exegete them we will find their true meaning.

In Graeme Goldsworthy’s response to Carl R. Trueman’s article, Goldsworthy points out that we need more than just systematic theology when we are studying the Bible, as well as bringing to the surface just how important it is to have biblical theology while we do that. He says in his article:

“The witness of the NT is that the whole of the OT is a testimony to Jesus (e.g., Luke 24:15-49; John 5:39-47). Biblical theology takes this seriously and aims to show the legitimate pathway from the text to Jesus. Even NT texts are dealt with in this way since the application of any biblical truth to a Christian is in terms of his or her relationship to Jesus.” (Goldsworthy, Ontology and Biblical Theology – A Response to Carl Trueman’s Editorial: A Revolutionary Balancing Act, 2002)

Graeme Goldsworthy is pointing out that using biblical theology helps us to get a clearer picture and a better understanding of how each passage in the Bible leads us to Jesus, whether through his actions or his character.

I would agree with Graeme Goldsworthy’s article more than Carl R. Trueman’s, as I believe that every word, verse and chapter of the Bible is breathed out by God and is therefore leading us to Him and to relationship with Him. I would agree with Carl R. Trueman’s claim that systematic theology is a strong method of studying the Bible and drawing different meanings from within one passage, even where it is not clearly directing us to Jesus, but I believe that when we add biblical theology alongside systematic theology as we exegete a passage, we are then able to see how that passage fits in with God’s big picture of the Bible.

In conclusion, we can see how biblical theology, systematic theology and historical theology all work together, and how each of them enhances the others in relation to studying God’s word. In order for us to fully grasp the Bible we need to use each of these methods, so that we can clearly teach, and understand for ourselves, exactly what each passage in the Bible is saying to us.

Bibliography

Allison, G. R. (2011). Historical Theology. Michigan: Zondervan.

Berkhof, L. (1984). Systematic Theology. Norwich: The Banner of Truth Trust.

Carson, D. (2015, September 24). How to Read the Bible and Do Theology Well. Retrieved from http://www.thegospelcoalition.org/article/the-bible-and-theology-don-carson-nivzsb

Erickson, M. J. (1983). Christian Theology, Volume 1. Michigan: Baker Book House.

Gibson, R. (1996). Interpreting God’s Plan: Biblical Theology and the Pastor. Carlisle: Paternoster.

Goldsworthy, G. (1991). According to Plan. Leicester: Inter-Varsity Press.

Goldsworthy, G. (2002). Ontology and Biblical Theology – A Response to Carl Trueman’s Editorial: A Revolutionary Balancing Act. Retrieved from http://beginningwithmoses.org/bt-articles/181/ontology-and-biblical-theology-a-response-to-carl-truemans-editorial-a-revolutionary-balancing-act

Grudem, W. (1994). Systematic Theology. Leicester: Zondervan.

Gaffin, R. B., Jr. (1976, Spring). Systematic Theology and Biblical Theology. Retrieved from http://beginningwithmoses.org/bt-articles/220/systematic-theology-and-biblical-theology

Trueman, C. R. (2002). A Revolutionary Balancing Act. Retrieved from http://beginningwithmoses.org/bt-articles/180/a-revolutionary-balancing-act

Analyse and interpret Mark Slouka’s short story “Crossing”

Write an essay (900-1200 words) in which you analyse and interpret Mark Slouka’s Short story “Crossing”

The short story “Crossing” was written by Mark Slouka in 2009. It takes us along on a trip where a father brings his little son into the forest to cross a river, wanting to spend some father-son quality time out in the wild. The father takes his son to exactly the same places he went with his own father when he was seventeen. This short story is about family bonding and a dangerous trip into wild nature.

The father’s plan for the weekend is to take his son on a father-son trip into the wild, just as his own father did when he was seventeen. The father carries their packs across a shallow but fast-moving river, and then goes back and carries his son across. When they get to a barn, the father barely recognizes it; they spend one night there and the next day they move on to explore the area. But when they cross the river again, he knows the current is a bit stronger than the day before. He crosses the river with all their things and then goes back for the boy. When he takes the boy across, the father loses his footing; although he does not fall, he is moved about four or five feet downstream, to a point where it seems impossible to move either forward or back. He then remembers how his father spoke to him as a kid, yelling “Don’t fucking fall.” The story ends with the father in the middle of the river, telling his son that they are okay, though they are not, and that they will hang on.

The information that Mark Slouka scatters through the beginning of the story makes us want more, and it also makes us care deeply about what happens to the son and the father on their trip. We see the son’s smallness, and his entire childhood, in those small jeans. We see the father’s deep depression in the simple line “…and he hadn’t been happy in a while.” We learn that the father has a bad history with his own father and a love for the river valley: “…nothing much changed.” Later in the story we learn more about the father’s feel for the place: “…the nests of vines like something scratched out, the furred trunks, soft with rot…” Slouka has already made the father an expert on this place. At the beginning of the story we see the father pick up his son from his regular home with his mother, so we know the parents are divorced and living apart. We sense that the father has done something wrong from his hope that maybe he could make this right. We also see his care for his son, for instance his care not to hit the boy’s head on the ceiling when he playfully lifts him over his shoulder.

All this points to a strong and trusting father-son relationship. Appreciating the family bond is one of Mark Slouka’s main themes: we gain an insight into a father constantly searching for the role of a brave and loving parent, while the theme of man versus nature is brought up alongside it. But the story is also about failure as a parent, about learning from others’ mistakes, and about the kinds of experiences with such a big impact that you never forget them.

The father is a man in his thirties, or around that age, as he has a young son and has been married. He had a bad childhood and a bad relationship with his own father, and therefore he will do anything to give his own child a good childhood. The father feels that he has failed somewhat because he and his wife are divorced. The divorce is not stated directly, but as he sits in the driveway outside a house, he looks at the yard and “…the azaleas he’d planted,” which indicates that he once lived there. When the father picks up his son in the morning, he has a hope that he will be able to make things right again. The trip to the old barn across the river with his son is a small step, but he truly wants to make up for the bad things he has done. He needs his son’s approval and trust, which is seen in how he keeps comforting his son as they cross the river. At one point the reader gets a hint about the reason behind the divorce: “My God, all his other fuckups were just preparations for this.” It is the father who has destroyed the marriage, and he knows it. Overall he is very reflective, and his thoughts play an important role in the story. They are marked by panic, and among other things he has visions of death, like a tunnel at the end of the road. In the story the father does not just cross a powerful natural barrier; he also pushes the trust between him and his son to the limit, for as he stands thigh-high in the water after slipping and still tells his son to just hang on, he may already have reached that limit.

Mark Slouka does two things at once in the short story. First, he takes the father back and forth across the river, with their backpacks and then with his son on his back, and then through an identical set of return trips the next day. The first set of trips allows the story to show us the river and the care required to cross it. There are many flashbacks in the text. For instance, the father remembers being seventeen, crossing the river with his own father and asking, “What do you do if you fall?”, to which his father answered, “Don’t fucking fall.” It then becomes clear where the story is heading. Yet we also forget this seemingly inevitable end because of the second thing Mark Slouka does in the text. While the river takes a central place in the story, the focus is actually on the father’s memories and thoughts; in fact, the river does not even appear until later. The story opens in the house of the man’s ex-wife, where he is picking up his son, and ends openly with father and son standing in the middle of the river. This makes us worry about them and raises the question of what they will do and how they will get out.

The language in the story is very easy to read, perhaps because it is not an old text. It uses a modern style, which makes it easier to get through and to understand. It is not formal language, since it includes words like “fucking”, so it is more of an informal text.

Part of your essay must focus on the narrative technique and the significance of the setting.

Through a third-person limited narrator, the reader is introduced to a father who is having a hard time after a divorce from his ex-wife. He is now determined to find something that matters and has set his heart on maintaining a strong and trusting relationship with his young son. As the narrator only knows what the father thinks, feels and remembers, the story is told from his point of view, and we get scenes of the things he struggles with. Because of the limited narrator there is no insight into the son’s mind; instead the author uses physical descriptions seen through the father’s eyes, which picture him as a small boy: “he looked over at the miniature jeans…”

The setting of the text is Tacoma, a city in Washington State in the USA. It takes place in summer or spring: even though it is raining, the father and son still sleep in a tent out in the forest and go into the river, which indicates that it is not winter, and we are also told that there are stars in the sky at night. This setting helps us understand the crossing and the current they will meet on the way. The setting is largely based on nature; if it were not for the place they were in, they would not have had to make the crossing, which in the end caused them so many problems. The setting is described with adjectives that show the colours and the feelings it gives the protagonist, as when the father has his flashbacks.

Bug Triaging Using Data Reduction Techniques

Abstract— Open source projects such as Eclipse and Firefox have open bug repositories, and users report bugs to these repositories. These users are usually non-technical and cannot assign the correct class to the bugs they report. Triaging bugs, that is, assigning them to developers to fix, is a tedious and time-consuming task. Developers are usually experts in particular areas; for example, some developers are experts in GUI code and others in Java functionality. Assigning a particular bug to the relevant developer saves time and helps maintain developers’ interest by giving them bugs that match their expertise. However, assigning the right bug to the right developer is quite difficult for a triager who does not know the actual class the bug belongs to. In this paper, we survey the enlisted reference papers on bug triaging, in which the various triaged bug reports are assigned to the respective developers using data reduction techniques.

Key-Words: Text mining, classification, software repositories, open source software projects, triaging, feature extraction

1. Introduction

Data mining is the process of extracting useful information through data analysis; it is also known as knowledge discovery. Useful knowledge obtained as a result of data mining can be used to cut costs, increase revenues, or both. Target data for mining purposes is categorical or numerical, with data types such as integer, decimal, float, char and varchar2.

Data mining techniques cannot be applied directly to data that is not numerical or categorical, yet 85% of enterprise data falls into this non-numerical, non-categorical category [1]. For the success of a business, knowledge extraction from this unstructured data can be critical. Unstructured data is therefore processed using text mining techniques so that it can be handled by data mining algorithms. Text mining draws on techniques from information extraction, information retrieval and natural language processing.

Classification is a data mining function that assigns classes or categories to the items in a collection. The basic goal of classification is the accurate prediction of the target class for each case in the data. For example, loan applications can be classified into high, medium or low risk on the basis of a classification model.

1.1 Mining Software Repositories

Understanding constantly evolving software systems is a daunting task. Software systems have a history of how they came to be, and this history is maintained in software repositories. Software repositories are the artifacts that document the evolution of software systems, and they often contain data from years of development of a software project [2].

Examples of software repositories are:

a) Runtime Repositories: An example of a runtime repository is deployment logs, which contain useful information about application usage and execution on deployment sites.

b) Historical Repositories: Examples of historical repositories are bug repositories, source code repositories and archived communication logs.

c) Code Repositories: Examples of code repositories are Google Code and codeforge.net, which store the source code of various open source projects [3].

MSR is the process of analyzing software repositories to discover meaningful and interesting information hidden in them. A huge amount of software engineering data accumulates over the course of time; MSR takes this data, processes and analyzes it, and detects patterns in it. MSR is an open field, both in what can be mined and in what one can learn from the practice: any software repository can be mined, not only code, bug or archived communication repositories.

2. Problem Formulation

Triaging bugs, that is, assigning them to developers to fix, is a tedious and time-consuming task. Developers are usually experts in a particular area; for example, some developers are experts in GUI code and others in pure Java functionality. Assigning a particular bug to the relevant developer saves time and helps maintain developers’ interest by giving them bugs that match their expertise. However, assigning the right bug to the right developer is quite difficult for a triager who does not know the actual class a bug belongs to. This research proposes a technique for the classification of open source software bugs using the summary provided by bug reporters.

2.1 Literature Survey

Some of the already implemented techniques for software bug classification are:

a) Michael W. Godfrey, Olga Baysal and Robin Cohen presented a framework for the automatic assignment of bugs to developers for fixing, using the vector space model [4].

In this paper [4], the authors propose a prototype of an intelligent system that automatically conducts bug assignment. They employ the vector space model to infer information about a developer’s expertise from the history of previously fixed bugs. The vector model is used to retrieve the title and the description from a report and build a term vector, which can later be used to find similar reports by mining the data in the bug repository. In order to create an efficient bug triage model, the authors conducted a survey in which they collected feedback from developers regarding their previous bug fixing experience, their satisfaction with bug assignment, and whether they were successful and confident in handling bugs in the past. This information provided the initial estimates for the proposed model, which in turn helped them implement the prototype and test it within a software team working on maintenance activities.

b) Hemant Joshi, Chuanlei Zhang and Coskun Bayrak presented a methodology to predict future bugs using historical data [5].

In this paper [5], the authors present a bug prediction algorithm whose purpose is to predict the number of bugs that will be detected and reported each month. The bug prediction for any month basically depends on the bug count of the preceding month. This prediction is achieved through the prediction algorithm implemented in the paper [5].

c) Lei Xu, Lian Yu, Jingtao Zhao, Changzhu Kong and HuiHui Zhang proposed a data mining technique that automatically classifies the bugs of web-based applications by predicting their bug type [6].

In this paper [6], the authors put forth debug strategy association rules, which find the relationship between bug types and bug fixing solutions. The debug strategy points us to the erroneous part of the source code; once the errors are found, it is much easier for developers to fix them. The determined association rules also help to predict files that usually change together, such as functions or variables.

d) Nicholas Jalbert and Westley Weimer proposed a system that checks for duplicate bug reports [7].

In this paper [7], the authors build a model that automatically indicates whether an arriving bug report is original or a duplicate of an already existing report, which saves developers’ time. To predict bug duplication, the system makes use of rudimentary methods such as textual semantics, graph clustering and surface features.

e) Tilmann Bruckhaus provided a technique for Escalation Prediction to avoid escalations by predicting the defects that have high escalation risk and then by resolving them proactively [8].

3.  Problem Solution

This section describes the proposed system for bug classification, the data used for the classification task, and the results obtained in different experiments.

3.1 Input Data

Eclipse and Mozilla Firefox data are obtained from Bugzilla, an open bug repository [9][10]. A dataset of almost 29,000 records is obtained. This data is divided into training and testing groups, and experiments are performed on different sets of data from these groups.

3.2 Model for prediction

When a bug is first reported to the repository, it is submitted to our proposed system as shown in Fig. 1. The system extracts all the terms in these reports using the bag-of-words approach. The resulting vocabulary has extremely high dimensionality, so the number of features is reduced using a feature selection algorithm. These features are used for training the classification algorithm, which is then used for the classification of bug reports. The classification algorithm used in the proposed system is multinomial Naïve Bayes.
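As a rough sketch of this flow, the example below chains bag-of-words extraction, feature selection and a multinomial Naïve Bayes classifier using scikit-learn. The toy summaries, the class labels, the choice of the chi-square criterion and the value of k are illustrative assumptions, not the dataset or configuration used in this research.

```python
# Minimal, illustrative pipeline: bag of words -> feature selection -> Naive Bayes.
# Toy data only; real experiments would use the ~29,000 Bugzilla records.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

summaries = [
    "button misaligned after resizing the preferences dialog",
    "null pointer exception thrown in the compiler backend",
    "toolbar icons rendered blurry on high dpi screens",
    "stack overflow when parsing deeply nested generics",
]
labels = ["GUI", "Core", "GUI", "Core"]

pipeline = Pipeline([
    ("bow", CountVectorizer()),            # term extraction (bag of words)
    ("select", SelectKBest(chi2, k=10)),   # keep the k most informative terms
    ("clf", MultinomialNB()),              # multinomial Naive Bayes classifier
])
pipeline.fit(summaries, labels)
print(pipeline.predict(["crash while compiling generic code"]))
```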

3.2.1 Pre-processing

Data pre-processing is the most important, and also the most time-consuming, step of data mining. Data obtained from bug repositories is in raw form and cannot be used directly for training the classification algorithm, so it is first pre-processed to make it useful for training. A stop-words dictionary and regular expression rules are used to filter out useless words and punctuation respectively, and the Porter stemming algorithm is used to stem the vocabulary.
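A minimal pre-processing sketch is shown below, assuming NLTK's English stop-word list and Porter stemmer stand in for the stop-words dictionary and stemmer mentioned above; the actual regular-expression rules used in this research are not specified, so the one here is only an example.

```python
# Illustrative pre-processing: punctuation filtering, stop-word removal, stemming.
import re
from nltk.corpus import stopwords     # may require nltk.download("stopwords") once
from nltk.stem import PorterStemmer

stop_words = set(stopwords.words("english"))
stemmer = PorterStemmer()

def preprocess(summary):
    # keep only letters, then lowercase and split into tokens
    cleaned = re.sub(r"[^A-Za-z\s]", " ", summary).lower()
    # drop stop words and stem the remaining tokens
    return [stemmer.stem(tok) for tok in cleaned.split() if tok not in stop_words]

print(preprocess("Crashes repeatedly when opening the preferences dialog!"))
```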

3.2.2 Feature Selection

The vocabulary obtained after applying the bag-of-words approach to the data has very high dimensionality. Most of these dimensions are not related to text categorization and therefore reduce the performance of the classifier. To decrease the dimensionality, feature selection is used: it keeps the best k terms out of the whole vocabulary, the ones that contribute most to accuracy and efficiency.

There are a number of feature selection techniques, such as Chi-Square Testing, Information Gain (IG), Term Frequency-Inverse Document Frequency (TF-IDF), and Document Frequency (DF). In this research, a feature selection algorithm of this kind is used.
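The sketch below shows what taking the best k terms can look like when the chi-square test is used as the scoring criterion; the reports, labels and k are invented, and the choice of chi-square over IG, TF-IDF or DF is only one of the options listed above.

```python
# Illustrative chi-square feature selection over a toy set of bug reports.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2

reports = [
    "window freezes when the menu is opened",
    "garbage collector pause during indexing",
    "dialog text overlaps the ok button",
    "deadlock in the background indexer thread",
]
classes = ["GUI", "Core", "GUI", "Core"]

vec = CountVectorizer()
counts = vec.fit_transform(reports)                  # bag-of-words matrix
selector = SelectKBest(chi2, k=8).fit(counts, classes)
kept = vec.get_feature_names_out()[selector.get_support()]
print(sorted(kept))                                  # the 8 terms that best separate the classes
```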

3.2.3 Instance Selection

The training data obtained from the bug repositories is also very large: it contains far more bug reports than are needed to train the classifier, and many of them add little information while increasing training time. To decrease this time, instance selection is used, which keeps a reduced, representative subset of bug reports out of the whole training set while preserving accuracy and efficiency. Instance selection thus plays the same reducing role for training instances that feature selection plays for terms.
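The section does not name a concrete instance selection algorithm, so the sketch below uses a deliberately simple, hypothetical stand-in: it caps the number of bug reports kept per class, which shrinks the training set (and hence training time) while keeping every class represented.

```python
# Crude stand-in for instance selection: keep at most `per_class_cap` reports per class.
from collections import defaultdict

def select_instances(reports, classes, per_class_cap=1000):
    kept_reports, kept_classes = [], []
    kept_so_far = defaultdict(int)          # reports already kept for each class
    for report, label in zip(reports, classes):
        if kept_so_far[label] < per_class_cap:
            kept_reports.append(report)
            kept_classes.append(label)
            kept_so_far[label] += 1
    return kept_reports, kept_classes
```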

3.2.4 Classifier Modeling

Text classification is an automated process of deriving metadata about a document. It is used in various areas, such as document indexing (suggesting categories in a content management system), spam filtering, and automatically sorting help desk requests.

The Naïve Bayes text classifier is used in this research for bug classification. Naïve Bayes is a probabilistic classifier based on Bayes’ theorem with an independence assumption: the classifier assumes that any feature of a class is unrelated to the presence or absence of any other feature.
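To make the independence assumption concrete, the toy function below scores a report for one class as the log prior plus a sum of per-term log likelihoods, so each term contributes independently of the others; all numbers are invented for illustration.

```python
# Hand-rolled Naive Bayes scoring: log P(class) + sum of log P(term | class).
import math

def nb_score(terms, prior, term_probs):
    score = math.log(prior)                          # log P(class)
    for t in terms:
        score += math.log(term_probs.get(t, 1e-6))   # unseen terms get a tiny probability
    return score

gui = nb_score(["dialog", "button"], 0.5, {"dialog": 0.02, "button": 0.03})
core = nb_score(["dialog", "button"], 0.5, {"dialog": 0.001, "button": 0.002})
print("GUI" if gui > core else "Core")               # prints "GUI"
```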

4. Conclusion

In open source bug repositories, bugs are reported by users, and triaging these bugs is a tedious and time-consuming task. If a proper class is assigned to each bug, it becomes easier to assign the bug to a relevant developer to fix it. However, as the reporters of these bugs are mostly non-technical, it is not possible for them to assign the correct class themselves. In this research, an automated system for classifying software bugs is devised using the multinomial Naïve Bayes text classifier, with a feature selection algorithm and an instance selection algorithm used for bug triage, in order to obtain the highest possible prediction accuracy.

5. Future Work

The main challenge to address in the future is strategic developers. In time, developers could learn how the system assigns bug fixing tasks and try to manipulate task assignment. Thus, we should ensure that the assignment of bugs is a fair and manipulation-free process.

References

[1] A. Hotho, A. Nürnberger and G. Paaß, “A Brief Survey of Text Mining,” GLDV Journal for Computational Linguistics and Language Technology, vol. 20, pp. 19-62, 2005.

[2] A. E. Hassan, “The Road Ahead for Mining Software Repositories,” IEEE Computer Society, pp. 48-57, 2008.

[3] S. Diehl, H. C. Gall and A. E. Hassan, “Special issue on mining software repositories,” Empirical Software Engineering: An International Journal, Springer Science+Business Media, 2009.

[4] O. Baysal, M. W. Godfrey and R. Cohen, “A Bug You Like: A Framework for Automated Assignment of Bugs,” IEEE 17th International Conference, 2009.

[5] C. Zhang, H. Joshi, S. Ramaswamy and C. Bayrak, “A Dynamic Approach to Software Bug Estimation,” SpringerLink, 2008.

[6] L. Yu, C. Kong, L. Xu, J. Zhao and H. Zhang, “Mining Bug Classifier and Debug Strategy Association Rules for Web-Based Applications,” in Proceedings of the 4th International Conference on Advanced Data Mining and Applications, 2008.

[7] N. Jalbert and W. Weimer, “Automated Duplicate Detection for Bug Tracking Systems,” IEEE Computer Society, 2008.

[8] T. Bruckhaus, C. X. Ling, N. H. Madhavji and S. Sheng, “Software Escalation Prediction with Data Mining,” in Data Mining, Fifth IEEE International Conference, 2006.

[9] [Online]. Available: https://bugzilla.mozilla.org/.

[10] [Online]. Available: https://bugs.eclipse.org/bugs/.

Conformity & Obedience

There are many forms of social influence. Social influence refers to a change in behaviour caused by others; it shapes the way opinions are formed and actions are taken and affects everyday life, often without people even being aware of it, because people have a strong desire to be liked by others.

Conformity is a social influence that involves changing a belief or behaviour to fit in with one’s surroundings. As soon as a certain way of doing things is recognised as a norm, people begin to conform to it, and it comes to be seen as the ‘right’ thing to do.

One study on conformity was carried out by Sherif in the 1930s. Its aim was to determine whether a person would conform to group or social norms when put in an ambiguous situation.

Sherif placed participants in a dark room and showed them a small spot of light projected onto a screen, which appeared to move even though it did not. It was found that people alone in the room made different judgements about the movement. Sherif then manipulated part of the group by putting three people together, two with similar estimates plus one whose estimate had been very different the first time round; this time they had to say out loud how much the light had moved, and after numerous trials the results became more and more similar each time.

The results showed that in ambiguous situations, the less previous experience a person had of the situation, the more powerful the influence was. There was no right or wrong answer in this experiment, as the movement of the light was an illusion anyway.

Asch also conducted a study on conformity, in the 1950s. He thought that while Sherif’s studies showed aspects of conformity, they could not show the effect of social or group pressure, because there was no right or wrong answer. Asch wanted to determine whether people would still conform when the correct answer was obvious.

Asch carried out a pilot study with naïve male participants placed among seven confederates. They were shown a card with three lines of different lengths and asked to match one of them to a line they had previously been shown, the correct answer being obvious. The confederates purposely gave the wrong answer, and it turned out that the participants often agreed with them, which shows that people will conform even though the answer is clearly incorrect. Across the 12 trials of this study, 75% of participants conformed at least once.

Both of these studies lacked ecological validity, as they were not real-life situations and were carried out in a laboratory setting. They were also gender biased: only males were used, so the findings may not apply more widely and lack population validity.

Obedience is another type of social influence, and it has both positive and negative sides. On the positive side, people obey laws, authority figures and sensible instructions. On the negative side it can be destructive and lead to atrocities; for example, in the Second World War the Nazis ordered German soldiers to kill millions of Jews.

Milgram is also well known for his social influence studies. In his experiment he wanted to find out how far people would go when an authority figure ordered them to hurt another person. Participants agreed to take part and were instructed to give electric shocks to a confederate who only pretended to be hurt; the participants believed the shocks were real, which means they were deceived and badly stressed. This experiment breaches the code of ethics of the British Psychological Society and would not be considered ethical now. The main ethical issue was that the participants were deceived into believing they had really given high-voltage shocks, although it has been suggested that some participants may have realised the study was a deception and carried on knowing no harm was being done, which would mean the research did not really measure obedience.

Feldman and Scheibe carried out another obedience study in the 1970s, to see what factors cause a person to rebel against authority. College students were asked to fill in a personally embarrassing questionnaire in front of others, who were actually confederates. In one condition the confederates willingly filled in the questionnaire; in another they refused and said they were leaving. The results showed that participants in the first condition were more likely to complete the questionnaire than the others.

This means people are more likely to refuse unreasonable requests from authority when there is social support from others; in this experiment that support came from seeing others refuse to fill in the questionnaire.

Some obedience is necessary for any authority, society or system to function; imagine what it would be like if soldiers in the army refused the orders of their commander.

All these studies raise ethical issues and may lack validity over time, and it would be interesting to see how the results would differ today.

Ethical guidelines are very important: psychologists need to follow them to carry out research, and they should want participants to feel safe. People who are more likely to show independent behaviour score higher on a social responsibility scale than people who conform or obey.


Paganism in Beowulf

Beowulf, a classic poem written in England around the 8th century, is considered the oldest epic in British literature and one of the longest surviving Anglo-Saxon poems. It tells of the exploits of a noble and brave Scandinavian hero who battles and defeats a monster named Grendel, who preyed on the Danish warriors. The poem speaks of a time when society was converting from pagan religion to Christianity, and the Christian influences in it are combined with the early folk tales and heroic legends of the Germanic tribes. Reading this epic, you can see that Beowulf believes in God; however, mentions of pagan practices run throughout the poem, and paganism may even be capable of overshadowing the elements of Christianity in it. As a matter of fact, Christianity and paganism are closely linked with each other in the poem: Beowulf carries the influences of both, which can make it confusing to readers.

The pagan elements in Beowulf are shown through the characters’ superhuman qualities. Beowulf is presented as a superhero who takes it upon himself to save the Danes from the monster Grendel, then from Grendel’s mother, and finally in his fight with the dragon. In his battle with Grendel, Beowulf chooses not to use the weapons he has acquired, because he wants to fight the monster fairly, so he relies on his super strength to win. During the fight his strength takes over, and he wrestles with Grendel until he is able to rip Grendel’s arm out of its socket. Christianity, on the other hand, is introduced through the character of Beowulf always trusting and recognizing God as his protector, and through the way God uses him as an epic hero to slay the monsters, Grendel and the others, that are hurting King Hrothgar and his people. King Hrothgar likewise speaks of how God raised him to power and placed him over all men, how God has power over all things, and how life is a gift from God. Beowulf’s courage and faith are shown throughout the story: “None of the wise one regretting his ongoing as much he was loved by the Geats and they urged on to adventure” (114-119). Beowulf fought Grendel and his mother and risked his life for his fellow warriors.

The character Beowulf resembles a hero of Biblical times, David, who took down the giant Goliath with a sling and a stone hurled at Goliath’s head, and so Beowulf can be seen in this story as a Christian hero. The author also shows that there is evil within the story, in the monsters: Grendel, his mother, and the dragon that takes Beowulf’s life. The author introduces Grendel at the beginning of the story, saying that the monster bears the mark of Cain and lives in a swamp. A quote that shows the essence of Grendel: “Grendel, is a monster who haunts and kills people, lives in the wild marshes, and made his home in a hell, but not on earth, He was created in a like slime way, conceived by a pair of those monsters who were born of Cain descendants, murderous creatures banished by God, punished forever for the crime of Abel’s Death” (17-24). In the Old Testament, the book of Genesis tells how Cain and Abel were the sons of Adam and Eve, and how Cain killed his brother Abel because he was jealous and angry at his brother’s abilities.

This recalls how the snake that Eve conversed with in the Bible told her to take a bite of the apple in the Garden of Eden. The monster Grendel is a man-like creature who seeks a full meal by devouring human beings, and during Grendel’s attack the following night we are informed that he is becoming much more evil and murderous. Grendel seems to be a representation and symbol of evil for all Christians who worship the Lord, but at the same time he symbolizes unfairness and closed-minded ways of thinking. The way the character Beowulf shows Christianity in the poem is by thanking God and acknowledging how God guarded him during his battles with Grendel and his mother. A related quote is that Grendel “bore hardly that he heard each day loud mirth in the hall” (88-89); it signals the moment just before Grendel’s mother enters the hall.

At another point, before Beowulf goes to fight Grendel, Hrothgar says to him, “Surely the Lord Almighty could stop his madness, smother control the outcome”. When Beowulf is getting ready to fight Grendel, he says, “I fancy my fighting strength, my performance in combat, at least as great as Grendel does his therefore I shall not cut short his life with a slashing of a sword-too simple a business” (675-680). This shows that Beowulf is going to fight the monster Grendel on equal terms and not dishonor himself by taking the easy way out, just as Jesus knew that he was going to be betrayed by Judas and was going to be killed.

After Grendel is killed, Hrothgar stands on the steps of the hall with Grendel’s hand and says, “Let swift thanks be given to the Governor of All, seeing sights! I suffered a thousand spites from Grendel: but God worked ever miracle upon miracle, the master of heaven” (925-930). What is being said here is that Hrothgar is glad God helped in the fight against Grendel and that they can now be at peace without worrying about what will happen to them next, much as when the people of Israel in the Bible left Egypt to find a better way of life. When Grendel’s mother comes to avenge the death of her son, she has only a limited amount of time, seizing one noble to kill and taking back the bloody hand of her son. When Beowulf comes to Hrothgar he tells him, “It is better for a man to avenge his friend than to mourn too much”. He adds that death comes to everyone sooner or later, and then suggests that they follow Grendel’s mother back to her lair immediately, before more deaths occur. Beowulf’s superhuman qualities appear again in the fight with Grendel’s mother.

When he is pulled into the water by Grendel’s mother, he keeps going down and down, as if through different layers of hell, until he reaches the bottom; while he is swimming he uses no oxygen on the way down to the newest challenge that awaits him. During the battle with Grendel’s mother, Beowulf realizes that the sword Hrunting, lent to him by Unferth, is useless against the monster’s thick skin and will not cut into it at all. So he grabs another sword that he finds in the lair, which is almost too heavy to hold, and slashes through her body. Then comes the part of the poem where the dragon is introduced, shown as a supremely powerful enemy. By the time Beowulf is an old man, someone enters the dragon’s barrow and awakens it, and it starts attacking the people and villages of Beowulf’s kingdom.

So Beowulf decides that enough is enough and goes to avenge his people by fighting the dragon. Beowulf is injured in the fight: the dragon spits fire at him and melts the sword he carries, then bites into his neck and wounds him deeply, but Beowulf delivers the last blow and kills it. In most pagan stories, the dragon is seen as an enemy of the hero. The author shows that these fights with imaginary monsters are a conflict between good and evil, or right and wrong, and such battles are examples of epic folk-hero tales from pagan times. The articles introduced below expand on these points.

The first article, “The Religious Principle in Beowulf”, discusses the many religious views in the poem and how their elements are played out. It refers to the Bible and to how the monster Grendel is a descendant of Cain; a quote that supports this is, “Thence sprang all evil progeny, giants, etc., including that strove against God” (39). Having alluded again to Cain’s action, the poem reads, “Thence sprang many accursed souls, including Grendel, hateful, savage reprobate” (105-107).

The second article, “Grendel’s Motive in Attacking Heorot”, discusses why Grendel attacked King Hrothgar. Grendel attacked Heorot because he is presented as a man-eating monster who seeks a full meal by devouring human beings, and during the attack the following night we are told that he is becoming ever more murderous and evil. A quote that points to what is coming is that he “bore hardly that he heard each day loud mirth in the hall” (88-89), shortly before Grendel’s mother attacks the meeting hall at Heorot and kills one of Hrothgar’s closest friends. In addition, a third article, “The Essential Paganism of Beowulf”, discusses the Christian elements in the poem alongside its most un-Christian, pessimistic views of life and history. The background of Beowulf is Scandinavian history, so the pagan views are shown in a different way. A few of the ways paganism is introduced are the ship burials, cremation and Beowulf’s sword. Ship burial was a pagan practice, being buried with your earthly possessions to take into the next life, and cremation, being placed on a wooden pyre and burned after death, was also considered a pagan practice.

Then there is the special sword Hrunting, which is considered pagan because of its engraved symbols in a pattern described as ill-boding. Another quote that represents this blend is, “But the Lord was weaving a victory on His war loom for the Weather Geats” (696-697). In this image the author unites the Christian God with pagan imagery, the loom of fate on which men’s lives are woven. Weaving, spinning and threads were common metaphors for life and fate in Scandinavian culture, and by adopting these traditional pagan images but using them in a Christian context, the author tries to negotiate between the two religions.

Another example of paganism is when the leaders, desperate with fear of Grendel, turn away from God towards the old pagan ways and offer up sacrifices. A quote that shows this is, “Betimes at heathen shrines they made sacrifice, asking, with rites that the slayer of souls would afford them relief against their people’s great pain” (99-100). Beowulf is thus a blend of pagan beliefs mixed with the Christian faith.

At the end of this tale of good versus evil, of right versus wrong, Beowulf dies as a hero in the eyes of the pagans.

Planning a digital info system for a museum

1.1 PROBLEM DESCRIPTION :

1.1.1GENERAL:-

“Digital info system” is a system for a museum or any gadget showroom. It is a system which makes it easy for visitors to get information. The “digital info system” includes the following features:

1. It provides visitors with information in a digital way.

2. It reduces the headache of reading long written descriptions on boards.

3. Visitors get information in their own language, which they understand easily.

4. They also get information in audio and video form.

5. The system also includes a device tracking system.

We include four modules in this project, which help to create the system: the admin module, the visitor module, the tracking module, and the GPS module.

1.1.2 PROBLEM TITLE:-

During our visit to the museum we saw that everything was presented as long, boring written descriptions, so people tend to avoid them: some cannot understand the English language, some get bored because of the long descriptions, and some are interested in reading but cannot because of the crowd.

1.2 OBJECTIVE OF PROJECT :-

The aim of this project is to make a user-friendly and informative web-based system.

“Digital info system” is a web-based application.

• The application stores information about all the things which are available in the museum.

• With this application visitors can easily get detailed information about exhibits in an easy, digital way, which is a more interesting way for the average person.

• It reduces paperwork by maintaining a huge amount of data with the least effort.

• The computer system is capable of storing a large amount of data and gives accurate information about each particular thing.

The project aims to reach the following goals:-

1. A user-friendly interface for visitors.

2. Visitors can get information in their own language.

3. Visitors can watch videos and listen to audio about the exhibits.

Visitors can also save these videos on their smartphones and watch them in their free time.

Visitors also have a GPS system, so without wasting time they can reach the place or exhibit they want to see.

1.3 BACKGROUND:-

In 2015-16, Mr Shehbaz Munshi, Miss Nafisa Patel, Mr Jignesh Patel and Mr Sagar Patoliya formed a project team to develop the “digital info system”. The team met daily for 6 months (5 hours daily). The steps involved were data collection, data designing, learning HTML, cascading style sheets, JavaScript and PHP, designing the system interface, live data entry, and working on the various search criteria.

1.4 SOFTWARE REQUIREMENT SPECIFICATION:-

1.4.1 GENERAL:

The initial study suggested that the project was feasible in all aspects without any risk. For our project development we selected the “waterfall model” for the software development process. We used this model because of the simplicity it provides; the sequential approach of this model also helped us a lot to gain a good insight and a proper direction for our project.

The waterfall model helped us divide our project into a set of phases through which the project progresses in a sequential manner. Changes are not supposed to happen when using the waterfall model, so changes are very limited and the process is tightly controlled. This was another added advantage for our project, as we had to determine the requirements and specifications well in advance.

1.4.2 TECHNICAL REQUIREMENTS:

In order to create a functioning database the following issues must be addressed:

1. Data Definition

2. System Functionality

3. Technical Specification:

   1. Hardware

   2. Implementation

   3. Information Support

1.4.2.1 DATA DEFINITION:

The data definition is a catalogue of the specific information that is required for the database. The various attributes related to the “digital info system” are identified.

1.4.2.2 SYSTEM FUNCTIONALITY:

The functional requirements of the database include what the database has to do and how it will be used. The general functional specifications for the project are to be identified. The database is to be created as a data warehouse, with live data entries into the system. Privileged users connect via sessions, using the scanner, in order to add, delete, modify and update data and to insert new customized data into the database.

1.4.2.3 TECHNICAL SPECIFICATION:

The technical specification for the creation of database can be divided into four subparts:

1. Hardware

2. Implementation

3. Information Support

4. Software.

1. HARDWARE REQUIREMENTS:

For the development of this project the following hardware resources are used:

• SCANNER

• HEADPHONE

• SMARTPHONE

• ROUTER

• LED

2. IMPLEMENTATION:

The technical requirements for the implementation of our project are:

• Design of database.

• Creation of the interfaces that insert data into the database.

• Creation of the interfaces that edit data in the database.

• Creation of the interfaces that delete data from the database.

• Scripting of the interface to run automatically.

• Design of the web pages to display information.

• Design the web interfaces to access the data in the database.

3. DATABASE DESIGN:

The design of the database is the most important element of the project. The basic key structure for all the tables is to be determined, the relational links are worked out, and prototypes of the major tables are produced. The framework must be solid.

4. WEB ACCESS:

The client would like the database to be accessible on the web. A web interface is to be developed to allow privileged users to access the data according to their specific needs.

5. INFORMATION ACCESS:

A plan is devised to minimize the amount of time spent gathering and inserting data into the database. Essentially, the approach adopted was to insert the existing information using the web interface.

1.4.2.4 SOFTWARE REQUIREMENTS:

For the development of this project the following software resources are used:

FRONT END :   PHP

BACK END  :   MySQL

WEB SERVER :  Apache

DOCUMENTATION TOOL :  Microsoft Office word 2007

OPERATING SYSTEM  :  Microsoft Windows XP

1.5 PROBLEM SUMMARY:-

During our visit to the museum and our interaction with it, we came to know some of the problems of the existing system, which are as follows:-

1. Visitors get bored reading the descriptions.

2. Visitors cannot capture pictures of the museum exhibits.

3. Visitors do not get to see everything in the museum because they do not know the museum map.

4. There is a lot of paperwork.

5. Data is not stored securely.

2.1 CANVAS ACTIVITY:-

Canvas 1:-

AEIOU Summary:-

A=activity.

E=environment.

I=interaction.

O=objects.

U=users.

Canvas 2:-

Identification :-

Canvas 3:-

Production Development canvas:-

Canvas 4:-

Empathy canvas:-

2.2 FEASIBILITY STUDY:

The final, complete feasibility study suggested that the project objectives and requirements were in accordance with the technical knowledge of the members and that the technical requirements of the project could be met. Under all normal circumstances the project could be successfully completed within the given time frame.

2.3 UML DIAGRAMS:-

2.3.1 E-R DIAGRAM:-

2.3.2 DATA FLOW DIAGRAM:-

CONTEXT LEVEL:-

VISITOR LEVEL 0:-

ADMIN LEVEL 0:-

2.3.3 USECASE DIAGRAM:-

ADMIN:-

VISITOR:-

2.3.4 SEQUENCE DIAGRAM:-

3.1 DATA DICTIONARY:-

Museum_detal:

Visitor_dec:

langselect:

desc_table:

gps:

content_download:

Admin panel

Admin_reg:

Admin_login:

add_detail:

content_mod:

4.1 DETAIL DESCRIPTION:-

Main problems:-

In the existing application we see the following problems:

1. Problem related to description of stored things:-

2. Problem related to language:-

3. Problem related to find out stored things in museum:-

4. Tracking.

4.2 PROBLEM SOLUTION:

PROVIDING DESCRIPTION IN DIGITAL WAY:-

NAME: – SHEHBAZ MUNSHI  ENO:-130943107010.

PROVIDING DIFFERENT LANGUAGES TO FACILITATE USERS:-

NAME:- NAFISA PATEL ENO:-130943107016

PROVIDING A SCANNER FACILITY IN THE MUSEUM FOR STORED THINGS:-

NAME:- JIGNESH PATEL ENO:-120940107029

NAME:-SAGAR PATOLIYA ENO:-110940107036

4.3. OUTCOME OF “DIGITAL INFO SYSTEM” APPLICATION:

As we have selected the project “digital info system” we would like to implement and solve the problems that we have identified in the existing system.

For example:-

Nowadays in museums, items such as statues, traditional clothes of kings and other artefacts of older generations are presented with long written descriptions on paper. Because of this, visitors have to read the description to get information about a particular item, which creates crowding. Also, some people are lazy and do not like to read descriptions, so they do not get the information.

Also, in almost all museums the description is available only in English, so visitors must know English in order to read it.

To address this problem we have built this system, in which we provide the facility to get information in the easiest and most interesting way. We provide a device for getting information about exhibits in a digital way, and we also provide a GPS system for finding out exactly where particular items are located in the museum.

The second problem is that not everyone can understand English, so many people cannot get the right information. We therefore provide a choice of different languages, from which visitors select one and listen to the description in a language they understand.

PROS AND CONS OF THE PROJECT:

ADVANTAGES OF NEW SYSTEM:-

• Information is available in an interesting way.

– When visitors come into the museum to learn about the stored exhibits and their history, this is the easiest way for them to get information.

• GPS system.

– The GPS system shows where the visitor is currently standing and where they need to go to see a particular exhibit. GPS helps visitors find things, so they can make good use of their time.

• Information available in every popular language.

– Suppose a visitor comes from Gujarat, has no knowledge of English, and is interested in knowing about some exhibits; he or she cannot read the description properly, so we provide different languages and the visitor selects one of them.

• Time utilization.

– When people read the descriptions a lot of time is wasted, so we provide information in audio and video form to save time.

DISADVANTAGES:-

• Scanner is required.

• Internet connection is compulsory.


Analysis of article ‘Once Upon a Shop’ by Jeanette Winterson

In modern-day society, big grocery chains like Tesco rule the market, and the fight to have the lowest prices never ends. This price war is in no way positive for the small number of small grocery stores left: the fight to become the cheapest shop makes it more and more uncommon to see small privately owned grocery stores, because they cannot afford to keep up with the low prices. In the article “Once Upon a Shop” by Jeanette Winterson, published on 13 June 2010, she expresses her opinion about the way the market works. Through her article, Jeanette Winterson tries to persuade the reader to stop buying groceries from the big shops and start supporting the small privately owned grocery stores.

Jeanette Winterson is the owner of a vegetable shop in Spitalfields, in the East End of London, which she owns with her friend Harvey Cabaniss. In her article Winterson tries to convince the reader to eat green and organically, like she does, and, through her own biased opinion, not to eat processed food from the big shops, because she believes this is the healthiest way of living. Winterson says that if we keep buying our food from the big companies, they will take over all of the production and nothing can be done to prevent it. She has different arguments to prove this point. Her first argument is that the big companies do not care about the quality of their products but only about making the biggest profit. She also criticizes the government, accusing it of being one of the main reasons for this negative development: by only helping the big companies, she says, the government will make sure that the small privately owned grocery shops go bankrupt.

Winterson is very good at using different methods to get the reader on her side. First of all, her title “Once Upon a Shop” makes the reader feel sympathy for her as she describes how the area looked when she opened her first shop (page 7, lines 1-2), and it makes the reader want to finish the article to learn the reason behind the title. She also mentions that everything was very familiar back then and she knew many of the other shop owners, but now she does not anymore. That makes the reader feel more empathy for the rest of her article and wish the best for her. On page 7, lines 36-39, she describes in an exact way how it all was, which lets the reader picture it for himself, and this also helps her prove her points. That is one of the ways she uses the history of her shop to affect the reader’s opinion on the case.

In the article Winterson uses informal language with short and simple sentences to make sure as many readers as possible will understand her article, relate to her story and feel empathy with her. The informal way Winterson uses language mirrors the way she wants the reader to see her shop. She likes her shop very much and thinks it is much better than the big companies. The very casual way she describes her shop also makes the kind of people who enjoy smaller grocery shops more likely to finish the article and support her. Winterson also questions the audience by asking “what can we do?” (page 10, line 248), and by using the word “we” she tries to encourage the audience to help her fight the big companies. She also tries to gain more empathy from the readers by saying “The bottom line isn’t profit; it is being human” (page 10, lines 232-233). All in all she tries to convince the reader that her products and her shop are far better than “The chilly world of corporate retail” (page 10, lines 259-260).

It is obvious throughout the article that Winterson is trying to persuade the readers to fight with her against the big companies. Her main argument is that organic food is far better than the food the big companies produce, and that the readers therefore have to help her stop them. Winterson is a good user of pathos, and in that way she is very good at stating the problems and winning supporters over to her way of thinking. As mentioned earlier, she uses simple and informal language, which could attract more readers, since the article is easy to read and deals with a current problem. On the other hand, the way she criticizes the big companies can make the article look a bit too one-sided, and in that way she might lose many supporters; some readers might even think she is crazy to start a fight against the big companies. Throughout the article, Winterson does not use any expert knowledge or data that could prove her right, and this lack of ethos weakens her arguments a great deal. The way she describes her small shop might still attract many customers who prefer organic food and small grocery shops, but Winterson’s idea of replacing the big shops with small grocery stores like hers might be a bit too ambitious. She would need to work a lot on her rhetorical methods if she wanted to write an article that would actually persuade the majority to boycott the big shops and only buy their food from the small grocery shops.

The relationship between private bank profits and the Dutch economy and democracy

Abstract

“Money is power”

A popular idea in the world is imagining how much power an individual would have if he or she could create money. However, the practice of making money has been employed since the first coin.

Banks create money with the use of loans. They control where and in what way newly created money goes into the economy. This gives banks, in a way, more power than a government.

This topic was brought to light a few months ago in the Netherlands through a group of stage actors called “De Verleiders” (the seducers). Their play and ideas have received a lot of attention from the media, and they have started an initiative to bring this topic before the Dutch government. This paper will first explain commercial moneymaking and look at the way banks function. Secondly, it will explain what it would mean for democracy if the power of banks were in the hands of a government. In conclusion, the researcher’s opinion on this topic will be given, together with a proposed solution to the problem.

1. Commercial moneymaking

The Dutch Constitution states: 'The livelihood of the people and the spread of wealth are in the care of the government' (artikel 20 lid 1).

However, the current Dutch money system is dominated by commercial banks.

These banks control the money supply and therefore have a big influence on Dutch society.

First of all, there are a lot of misunderstandings about money and the Dutch monetary system. Most people see it as a system that can't be changed, a system that is there to stay. In other words, money and the Dutch monetary system aren't seen as something that is created by people and controlled by people. And yet they are. Even specialists, economists and bankers often give a wrong definition of what money is.

For example, the Bank of England has recently stated that most of our economics textbooks give a wrong explanation of how money is created (http://www.bankofengland.co.uk/publications/Documents/quarterlybulletin/2014/qb14q102.pdf).

1.1 What is commercial moneymaking?

How banks create money is simple. It goes as follows: when a person, company or country is in need of money, they visit a bank. The bank will assess whether the borrower can pay back the money they want to borrow. If the borrower is suitable for the loan, the bank draws up a contract and enters the loan on its balance sheet. Both sides of the balance sheet are raised by the bank: money is matched by bank debt. When the borrower uses this money to pay for things, there is new money in circulation. To put it simply: typing numbers on a keyboard is enough to put digital money on an account.

Private banks can thus create money through the accounting process they use when they lend money. The numbers seen in the account books are basically just entries in the bank's computer system. These numbers are a liability from the bank to the customer.
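As a rough illustration of this accounting step, the sketch below is a toy model (the class and figures are invented for illustration, not a description of any real bank's systems) showing how granting a loan raises both sides of a bank's balance sheet at once and thereby creates new deposit money.

```python
# Toy sketch of double-entry money creation: a loan (asset) and a matching
# deposit (liability) are created in the same step, expanding the balance sheet.

class Bank:
    def __init__(self):
        self.loans = {}     # assets: claims on borrowers
        self.deposits = {}  # liabilities: balances owed to customers

    def grant_loan(self, customer, amount):
        # Both sides of the balance sheet are raised by the same amount.
        self.loans[customer] = self.loans.get(customer, 0) + amount
        self.deposits[customer] = self.deposits.get(customer, 0) + amount

    def broad_money(self):
        # Deposit money created by the bank and usable for payments.
        return sum(self.deposits.values())

bank = Bank()
bank.grant_loan("household", 200_000)   # e.g. a mortgage
print(bank.broad_money())               # 200000 of new deposit money
```

The point of the sketch is simply that no pre-existing money is moved: the deposit the borrower spends is created by the bookkeeping entry itself.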

By using a bank card or mobile banking, the customer can use these digital balances in the same way as physical money. This makes the digital money from private banks a perfect substitute for physical money from the central bank.

By creating money this way, banks have pushed out cash money, bills and coins, and created a system in which banks control the money supply. Eventually the debt burden became too high and resulted in the financial crisis.

Figure 1 shows that 97% is deposit money and only 3% is left as cash money.

However, private banks are still subject to some control. For example, the Basel II Accords were introduced to keep an eye on the way money is created. The Basel Accords determine how much equity capital – known as regulatory capital – a bank must hold to buffer unexpected losses. These Accords are applied under the supervision of central banks and governments.
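A minimal sketch of the idea behind these capital rules is given below. It assumes the Basel II baseline requirement that total regulatory capital must be at least 8% of risk-weighted assets; the bank figures in the example are made up for illustration.

```python
# Minimal sketch: a bank's regulatory capital must stay above a fixed share
# of its risk-weighted assets (8% of RWA under the Basel II baseline).

MIN_CAPITAL_RATIO = 0.08  # Basel II minimum total capital ratio

def capital_ratio(regulatory_capital, risk_weighted_assets):
    return regulatory_capital / risk_weighted_assets

def meets_basel_minimum(regulatory_capital, risk_weighted_assets):
    return capital_ratio(regulatory_capital, risk_weighted_assets) >= MIN_CAPITAL_RATIO

# A hypothetical bank holding 6 billion of capital against 90 billion of RWA:
print(round(capital_ratio(6e9, 90e9), 3))   # 0.067
print(meets_basel_minimum(6e9, 90e9))       # False: it would need more capital
```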

It is well known that private banks want to make a profit, and at the same time they have the privilege of creating money.

1.2 What are the consequences of commercial moneymaking?

A major cause of the financial crisis was the housing bubble, which burst in 2006.

In the years prior to the financial crisis, private banks had been increasing the amount of money they created every year.

From the start of 2007 until the end of 2011 the credit crisis raged across the world, as seen in Figure 2. Banks had created too much money too fast. They used this money to invest in financial markets and pushed up house prices through their lending.

Lending on this scale pushes up house prices along with the level of personal debt, and on every loan that banks give out, interest has to be paid. Because debt rose quicker than national income, people eventually became unable to pay their debts. When people stopped paying, banks found themselves almost or completely bankrupt.

1.3 The researcher’s opinion

The first change that I think has to be made is this: money has to be created by a transparent and democratic committee which works with the customers in mind.

In my opinion this power has to be controlled. Everyone should know how money is created and what is happening with their money. The government can't be trusted with this power either. That is why there should be an independent committee which works in the customers' best interest and not against it. The government could act as a controlling power over this committee and protect customers against misuse. There should also be a safety net which prevents too much or too little money creation.

The second thing that has to be changed, in my opinion, is that money has to be created debt free.

As stated before, banks create money by creating loans. But if money were created by the state, with the common interest in mind, and put into circulation through government spending instead of loans, this money would stimulate the economy and create jobs.

The final change I would make is that banks should not be allowed to create money again.

History has shown that as soon as banks have the power to create money, they create too much in good times, which leads to financial crises, and too little in bad times, which leads to recessions and unemployment. Keep in mind that banks want to make a profit.

So, plain and simple, we can't allow banks to create money.

Clinical associations of thyroid tumors with SNP309 in Iranian-Azeri population

Abstract

Background: MDM2 SNP309 (rs2279744) is a single nucleotide T>G polymorphism in the first intron of the MDM2 gene, which encodes a negative regulator of the p53 tumor suppressor protein. Previous findings suggest that the MDM2 309T>G polymorphism may be a risk factor for several cancers. This study examined clinical associations of thyroid tumors with SNP309 in an Iranian-Azeri population.

Methods: In the present study, 107 thyroid cancer patients and 156 cancer-free controls were recruited from the Iranian-Azeri population. Genomic DNA from peripheral blood and tumor samples was extracted by the salting-out procedure. MDM2 SNP309 genotyping was carried out by polymerase chain reaction–single strand conformational polymorphism (PCR-SSCP) assay. All analyses were conducted with SPSS software using the Chi-squared (χ2) test, and P < 0.05 was used as the criterion of significance.

Results: A significant difference in genotype frequency distribution between the control and cancer groups was found, and our results showed that the genotypes containing the G allele [TG (OR, 0.021; 95% CI, 0.018–0.024; p = 0.018) or GG (OR, 0.01; 95% CI, 0.008–0.012; p = 0.007)], compared with the TT genotype, were associated with significantly increased susceptibility to thyroid tumors.

Conclusions: Our findings imply that the MDM2 promoter SNP309 (rs2279744) is associated with the incidence of thyroid tumors in the Iranian-Azeri population.

Keywords: thyroid cancer, MDM2 SNP309 T>G, polymorphism

Introduction

The sequence of the whole human genome was completed in 2001 [1], and approximately 6.5 million SNPs (single nucleotide polymorphisms) have been detected in human genes. Depending on where a SNP occurs, it may have different effects at the phenotypic level. SNPs located in the coding regions of genes can alter the function or structure of the encoded proteins, whereas those located in non-coding regions of the genome often have no direct known effect on the phenotype of an individual. These differences could contribute to many of the individual features that make us unique. Also, because they occur at a relatively high frequency in the genome (approximately one SNP for every 1000 bp), SNPs can be used as markers for other, more important genetic changes. Of the analyzed SNPs, 89% are located in an exon and 11% in an intron [2,3].

Thyroid cancer (TC) is the most common malignancy of the endocrine system and accounts for approximately 2.1% of all cancers diagnosed worldwide. Thyroid cancer has a 4.4% prevalence in women and a 1.3% prevalence in men. The male-to-female ratio is approximately 1:3.5, while the crude incidence for men is 1.9/100,000 and that for women is 6.6/100,000. Thyroid cancer is the ninth most common cancer (2.1% of all cancers) in women [4]. The incidence rate of thyroid cancer in both women and men is increasing [5]. Primary thyroid tumors are classified as benign or malignant, and they originate from follicular and parafollicular (C-cell) epithelial cells. Benign tumors include follicular adenoma, while malignant tumors comprise papillary, follicular, medullary and anaplastic carcinomas. The follicular cells, which convert iodine into thyroxine (T4) and triiodothyronine (T3), give rise to papillary, follicular and anaplastic carcinomas as well as follicular adenoma. The parafollicular or C-cells, which secrete calcitonin, give rise to medullary carcinoma [6]. Among thyroid tumors, papillary thyroid cancer represents approximately 80% of all thyroid malignancies [7]. Molecular biomarkers involved in thyroid tumors include p53, RET, BRAF, RET/PTC, RAS, PAX8/PPARγ and NTRK1 [8].

The human homologue of the mouse double minute 2 (MDM2 or HDM2) gene is located on chromosome 12q13-14, encodes a protein of 491 amino acids, contains 12 exons and has two transcriptional promoters: a constitutive promoter and a p53-responsive intronic promoter [9, 10, 11]. The MDM2 oncoprotein plays a critical regulatory role for many tumor-related genes that are important for cell-cycle control, such as p53 [12]. The p53 gene is mutated in about 50% of all human cancers [13]. p53 is a tumor suppressor gene involved in multiple pathways, including apoptosis, DNA repair, cell cycle arrest and senescence [14]. MDM2 and TP53 regulate each other through a feedback loop [15]. p53 induces MDM2 at the transcriptional level, while MDM2 interacts through its N-terminal domain, with high affinity, with an α-helix present in the transactivation domain of p53 and inhibits it, thereby targeting it for proteolytic degradation via the ubiquitination pathway [16]. The overall frequency of MDM2 gene amplification in human tumors is approximately 7.2% [17]. A recent study described an MDM2 single nucleotide polymorphism in the first intron, a T to G change at nucleotide 309 in the P2 promoter region of MDM2. The presence of the mutant G allele in cells with the SNP309 GG genotype increases the affinity of the transcriptional activator stimulatory protein 1 (Sp1), which raises the basal levels of MDM2 mRNA and protein in these cells but not in T/T wild-type cells. These higher levels of MDM2 in cells with the GG SNP309 alleles reduce the p53 apoptotic response to DNA damage and other environmental threats, whereas cells with the TT SNP309 alleles can increase p53 protein levels after a stress signal. Thus, in some individuals with a G/G genotype at SNP309, the percentage of cells undergoing apoptosis or cell cycle arrest in response to genotoxic stress is low [18, 19]. The MDM2 SNP309 polymorphism has been associated with several cancers, including gastric carcinoma [20], non-small-cell lung cancer [21], endometrial cancer [22], colorectal cancer [23], hepatocellular carcinoma [24], and bladder cancer [25]. In contrast, no increased risk was observed for breast cancer [26,27], ovarian cancer [28] or prostate cancer [29]. In the present study, the association between the MDM2 SNP309 polymorphism and thyroid tumor risk in the Iranian-Azeri population was examined.

Materials and Methods

Specimens study and collection

In this study, our patient group consisted of 107 subjects diagnosed with thyroid cancer (age range: 14-81, mean age: 39.3). All patients had histologically confirmed primary thyroid cancer. The control group was selected randomly from 156 healthy subjects with no family history of cancer (age range: 19-79, mean age: 40.9). A standardized questionnaire covering age, gender, family history of cancer, smoking and alcohol consumption history was completed for every member of the control group. Informed consent was obtained from all participants. All cases and controls were ethnic Azeri from the northwest of Iran. The study protocol was approved by the Ethics Committee of the Tabriz University of Medical Sciences research center (www.tbzmed.ac.ir/Research). Peripheral blood and tissue samples were taken from patients who underwent surgery at the Nour-Nejat and Emam-Reza hospitals of Tabriz, Iran, from 2008 to 2012.

DNA extraction and PCR amplification

Peripheral blood samples were kept in vials containing ethylene-diamine-tetra-acetic acid (EDTA), an anticoagulant. Genomic DNA was extracted from 5 ml of whole blood mixed with anticoagulant using the salting-out procedure as described [30] and then stored at –20 °C until further use. The 194 bp fragment containing the T to G polymorphic site in the intronic promoter region of MDM2 was amplified using the specific primers forward: 5′-CAAGTTCAGACACGTTCCGA-3′ and reverse: 5′-TCGGAACGTGTCTGAACTTG-3′. PCR was performed in a 25 µl reaction mixture containing 1 μl template DNA (20-50 ng), 2.5 μl PCR buffer (10x), 0.5 μl dNTPs (10 mM), 0.75 μl of each primer (10 pmol), 0.85 μl MgCl2 (50 mM), 18.45 μl sterile distilled H2O and 0.2 μl Taq DNA polymerase (5 unit/μl) (Cinnagen, Iran). PCR amplification was carried out in a thermal cycler (Sensoquest, GmbH, Germany). The cycling conditions were: an initial denaturation at 95°C for 5 min, followed by 35 cycles of denaturation at 95°C for 30 s, annealing at 59°C for 30 s and elongation at 72°C for 30 s, with a final extension at 72°C for 10 min.

SSCP profiles

For SSCP analysis, 4 ml of the amplified PCR product was added to 6 ml of denaturing loading dye solution (95% formamide, 10 mM NaOH, 20 mM EDTA, 0.05% bromophenol blue and 0.05% xylene cyanol). The solution was briefly vortexed, the mixture was denatured by heating at 95°C for 10 min, and each sample was immediately snap-cooled on ice before loading onto the vertical electrophoresis set. 5 µl of each PCR product sample was loaded onto a non-denaturing 10% polyacrylamide gel consisting of 5 ml acrylamide–bisacrylamide solution (40%) (38:2), 3.5 ml Tris–Borate–EDTA buffer (TBE, 5x), 13.5 ml deionized-distilled H2O, 300 µl ammonium persulfate (10%, freshly prepared) and 30 µl tetramethylethylenediamine. The gel was then run in 0.6x TBE buffer for 15-17 h at a constant voltage (100 V) and temperature (4°C) using a vertical electrophoresis apparatus (Akhtarian, Iran) and a power supply (Apelex, France). One undenatured PCR product, as a negative control, and a 50-bp DNA ladder (molecular size marker; Fermentas, USA) were loaded into the gel wells. After electrophoresis, the gel was silver stained as follows. The gel was immersed in a tray containing solution 1 (4 ml absolute ethanol 10% and 2 ml acetic acid 5%, made up with distilled water to a final volume of 400 ml; fixing solution) and the tray was placed on a shaker for 10 minutes (this step was performed twice). Solution 1 was then removed, and freshly prepared solution 2 (0.1% silver nitrate) was added for 15-20 minutes. Afterwards, solution 2 was poured off and the gel was briefly rinsed with deionized water. Finally, freshly prepared solution 3 (3 g NaOH, 20 ml formaldehyde 10% in 180 ml distilled water) was added for 20-30 minutes; solution 3 was used to wash unstained silver off the gel. Bands appeared as clear dark brown regions on the gel [31,32]. Each banding pattern in the SSCP gel was sequenced using the forward primer in order to confirm and identify sequence changes (Applied Biosystems, 3730xl DNA Analyzer, Bioneer, Korea). The sequencing results were compared with the MDM2 reference sequence (NC_000012.12) in the NCBI database (www.ncbi.nlm.nih.gov).

Statistical methods

First, Hardy-Weinberg equilibrium (HWE) (http://ihg.gsf.de/cgi-bin/hw/hwa1.pl) was assessed in the patient and control groups using Pearson's goodness-of-fit chi-square test. Allele and genotype frequencies in patients and controls were compared by Pearson's χ2 tests or Fisher's exact test to determine whether there was any significant difference. Crude odds ratios (ORs) and 95% confidence intervals (CIs) were also used to assess the association between the MDM2 309T>G polymorphism and thyroid cancer risk. All statistical analyses were performed using SPSS software (v.16; SPSS Inc., USA), and p-values < 0.05 were considered significant.
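For readers unfamiliar with the crude odds ratio calculation mentioned above, the sketch below shows the standard 2×2-table formula with a normal-approximation 95% confidence interval. The counts in the example are hypothetical and are not the study's data.

```python
# Crude odds ratio and 95% CI for carriers of a G-containing genotype (TG/GG)
# versus the TT reference genotype, comparing cases with controls.
import math

def odds_ratio_ci(case_exposed, case_ref, ctrl_exposed, ctrl_ref, z=1.96):
    or_ = (case_exposed * ctrl_ref) / (case_ref * ctrl_exposed)
    se_log_or = math.sqrt(1/case_exposed + 1/case_ref + 1/ctrl_exposed + 1/ctrl_ref)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

# Hypothetical counts: 76 G-carriers and 31 TT among cases, 94 and 62 among controls
print(odds_ratio_ci(76, 31, 94, 62))
```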

Results

Figure 1 depicts the conformers of rs2279744 with distinct banding patterns in the region of interest, which were confirmed by sequencing (Fig. 2). The distribution of genotypes in the patient and control groups was consistent with Hardy-Weinberg equilibrium (P = 0.54 and P = 0.30). The genotype distributions and allele frequencies of the MDM2 promoter SNP309 polymorphism in thyroid cancer patients and controls are shown in Tables 1 and 2. As shown in Table 1, the genotype frequencies of the MDM2 SNP309 (rs2279744) polymorphism among TT homozygous, GG homozygous and TG heterozygous individuals were 29%, 18.7% and 52.3% in the patient group and 19.9%, 53.8% and 26.3% in the healthy control group, respectively. The frequency of the wild-type T allele was 44.7% (n=118) in cases and 55.3% (n=146) in controls, and the frequency of the variant G allele was 36.6% (n=96) in cases and 63.4% (n=166) in controls. Our results show that the MDM2 SNP309 TG/GG genotypes, compared with the SNP309 TT homozygous genotype, were associated with an increased risk of thyroid cancer. In addition, no statistically significant difference was observed in the allele frequencies of MDM2 SNP309 between the patient and control groups. Furthermore, we also evaluated the association between the MDM2 SNP309 polymorphism and clinicopathological characteristics of thyroid cancer, including age at diagnosis, tumor type, tumor size, gender, side involved, tumor stage and lymph node involvement, but no significant difference was found.

Precision farming in the UK

History of precision farming within the UK

Precision farming has been around for a very long time. It stems from farmers sectioning off pieces of land into farmable areas. These areas would be defined by hedges or ditches, and the sections were used to isolate specific areas of land with differences in soil type and quality.

Precision farming now takes on a different meaning, but it still comes back to the main reason for its use: targeting specific areas of a field to gain the most from them. For most people precision farming means the use of GPS steering on tractors, but it now also takes other forms such as nutrient mapping and controlled traffic management.

Environmental impacts of using precision farming

Agriculture is an industry that has a duty of care to the environment and to all living things found in the diverse habitats on the farm. Precision agriculture has been adopted by many farms to help the environment. One application where this has been seen is the application of pesticides.

Precision farming technology has allowed farmers to improve the application of pesticides and has allowed farmers to correctly target pests and diseases. The main reduction of environment impacts has been due to new technology such as GPS controlled section control on sprayers. This technology has been designed to overcome human error.

Applications of precision farming within agriculture

Precision farming is usually broken down into three main areas: assessing variation, varying applications and operations, and optimising operations.

Assessing variation is used in modern farming as an aid to agronomy. In the past, agronomists would walk fields and assess growth and how crops were being affected by conditions. They would also advise farmers on best practices that might affect how specific tasks were carried out on the farm. This is very expensive and time consuming. Precision farming equipment now allows these tasks to be conducted more accurately, over greater areas of land, in less time.

There are many ways this can be done, but the best-known uses have been satellite imagery, tractor-mounted sensors and drones. These pieces of equipment allow tasks such as soil scanning and yield mapping to be carried out, and they can also be used to determine variation in plant growth. All of the data taken from these tests is used to make informed decisions on current crops and allows farmers to plan ahead for future cropping.

One main thing that needs to be assessed for next year's crop is available nutrients. Nutrient mapping allows farmers to map gradual changes in nutrient availability in the soil. Farmers can then apply fertiliser at variable rates to specific areas of the field, so that the correct amount of nutrient is available for the crop to thrive. The two main benefits of this application of precision farming are reduced wastage of fertiliser and the avoidance of under-application, which would cap the capability of the crop and the overall yield. Together, these benefits allow a farm to produce better yields at reduced cost.

Once variation has been assessed, you can look at varying applications and operations. The aim is to minimise inputs into a field while maintaining the correct amount of product to grow the best crops. There are many operations that can be varied to reduce inputs.

Operations which can be varied include:

Variable rate drilling – seed rates can be varied to coincide with soil mapping, for example reducing seed rates in the most fertile soils and increasing them in harsher soils.

Variable rate fertiliser – sensors on tractors can scan the crop canopy and measure green crop area, giving a green area index. These numbers then feed into the spreader control box, which applies different amounts so the plants use fertiliser more efficiently (see the sketch after this list).

Targeted spraying and variable rate – drones can be flown over crops to find areas where weed burdens are high. These areas can then be treated specifically. Using technology to improve pesticide accuracy also helps reduce environmental impact, and the reduction in the amount of chemical used brings financial savings to farmers.
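The sketch below illustrates the variable-rate idea referred to in the fertiliser item above: a canopy sensor reading (green area index, GAI) is mapped to a spreader rate. The target GAI and rates are made-up example values, not agronomic recommendations, and the mapping is deliberately simplistic.

```python
# Hypothetical variable-rate mapping: thin crop (low GAI) receives more
# nitrogen, lush crop (high GAI) receives less, clamped to a safe range.

def fertiliser_rate(gai, target_gai=1.5, base_rate=120.0, adjustment=40.0,
                    min_rate=0.0, max_rate=200.0):
    """Return an illustrative nitrogen rate in kg/ha for a given GAI reading."""
    rate = base_rate + adjustment * (target_gai - gai)
    return max(min_rate, min(max_rate, rate))

# Readings taken across three management zones of a field:
for zone, gai in [("headland", 0.8), ("mid-field", 1.5), ("fertile hollow", 2.3)]:
    print(zone, round(fertiliser_rate(gai), 1), "kg N/ha")
```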

Optimising operations is the main technology that directly affects the operator, mainly in the form of steering guidance and automatic shut-offs. Most smart operations use Global Navigation Satellite Systems (GNSS).

Auto Guidance – ensures operations are as efficient as the system will allow. Typically ensuring operations are, at their most accurate, within 2cm on each pass, whether within one operation (pass to pass) or throughout the growing year (controlled traffic).

Auto shut off – using precise locations of equipment signals can be sent bringing implements into or out of operation far quicker, more precisely and more consistently than human operators. Both for applications (drilling, spraying and spreading) as well as data recording (yield mapping, nutrient sensing, soil scanning).

The costings and savings of GPS assisted steering

Companies that look to use assisted steering must consider a variety of factors before buying these packages. The main factor is the economic impact to the business.

Economic factors directly influence the profitability of the company. This can be measured by comparing input costs, savings and gross profit. Nix (2015) estimates that the cost of running 3 machines on real-time kinematic (RTK) assisted steering is £20/ha, giving a net profit of £2/ha. This initial profit is made through reduced diesel costs and full-area cropping.

There can be further gains from using this system in conjunction with other farming practices to create lower input costs. These savings can be found by reducing the amount of fertiliser, sprays, labour and seed used on the farm.
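A minimal worked example of the per-hectare arithmetic implied by the Nix (2015) figure quoted above: if RTK guidance costs £20/ha and leaves a net profit of £2/ha, the implied gross saving is about £22/ha. The farm area and the extra input savings below are hypothetical, included only to show how the numbers combine.

```python
# Per-hectare cost/benefit arithmetic for RTK-assisted steering (illustrative).

def net_benefit_per_ha(gross_saving, system_cost):
    return gross_saving - system_cost

RTK_COST = 20.0           # GBP/ha (Nix, 2015)
GUIDANCE_SAVING = 22.0    # GBP/ha implied gross saving (fuel, overlap, full-area cropping)
EXTRA_INPUT_SAVING = 8.0  # GBP/ha hypothetical further savings on seed, fertiliser, labour

farm_area_ha = 400
per_ha = net_benefit_per_ha(GUIDANCE_SAVING + EXTRA_INPUT_SAVING, RTK_COST)
print(per_ha, "GBP/ha")              # 10.0 GBP/ha
print(per_ha * farm_area_ha, "GBP")  # 4000 GBP across the whole farm
```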

The effects of precision farming on employers and employees

 

Timber framed buildings vs traditional brick built structures

Introduction

Looking at timber framed buildings compared with traditional brick built structures.

Assessing sustainability, costs, build time, sourcing of materials, and which is best suited to the UK climate.

Literature review

For many years there have been many diverse methods of construction, with brick being the established method, especially in the residential housing sector. This form of construction offers a strong, long-lasting dwelling which should last for many years to come (Figure 1).

 

Today the brick built house has competition from other methods of construction. Timber frame is widely used by many construction companies and is seen as a more time and cost effective construction method. Timber frame housing is not a new concept (Anon., 2015) (Figure 2).

 

Throughout history timber was one of the most commonly used construction materials, and examples of timber framed houses from the 12th century are still in use today. Softwood based timber housing systems date back to the 1780s, with many examples positioned along the east and southern coasts of the UK. So why is it that we no longer see timber framed construction as a mainstream, sustainable form of construction, when over 70% of houses in the world are built using this method? The Great Fire of London put increased pressure on timber supplies due to the rebuilding works, and the preferred timber, oak, took many years to grow. The result was a change of attitude, and timber took a back seat for a while; now, however, this method is again seen by many as the way forward in house construction. This proposal sets out to ultimately discover whether timber frame provides a time effective solution compared with other traditional methods of construction.

The more extreme the climate, the more we have to rely on the building to protect us from the weather, and the demand for more sustainable, energy efficient homes is extremely important today. There is a strong case for investigating why the timber frame approach is the way to go, but there are other issues, such as the supply of timber: most of our timber is imported into the UK from all over the world. Could we address this problem with proper management of our own forests? According to the Forestry Commission the UK imports 6.4 million cubic metres of sawn wood (+17%), 3.3 million cubic metres of wood-based panels (+10%), 10.7 million cubic metres of wood pellets (+45%) and 7.3 million tonnes of pulp and paper (+1%); the total value of wood product imports was £7.2 billion (+7%) (Darot, 2014).

UK Import Value (Darot, 2014)

UK Export Value (Darot, 2014)

UKCIP conducted a piece of work in 2009 to project how the climate may change in 2020, 2050 and 2080, and below is how they project the weather could change over the next 70 years in North Yorkshire.

2020 Yorkshire and Humber Climate Projections

The estimate of increase in winter mean temperature is 1.3ºC.

The estimate of increase in summer mean temperature is between 1.3 and 1.4ºC.

The estimate of increase in summer mean daily maximum temperature is 1.7 – 1.8ºC.

The estimate of increase in summer mean daily minimum temperature is 1.5ºC.

The estimate of change in winter mean precipitation is 5%.

The estimate of change in summer mean precipitation is 6% – 5%.

2050 Yorkshire and Humber Climate Projections

The estimate of increase in winter mean temperature is 1.9ºC – 2.5ºC.

The estimate of increase in summer mean temperature is 2.2ºC – 2.6ºC.

The estimate of increase in summer mean daily maximum temperature is 2.9ºC – 3.5ºC.

The estimate of increase in summer mean daily minimum temperature is 2.4ºC – 2.9ºC.

The estimate of change in winter mean precipitation is 9% – 12%.

The estimate of change in summer mean precipitation is –15% to –18%.

2080 Yorkshire and Humber Climate Projections

The estimate of increase in winter mean temperature is 2.5ºC – 3.6ºC.

The estimate of increase in summer mean temperature is 2.5ºC – 4.2ºC.

The estimate of increase in summer mean daily maximum temperature is 3.4ºC – 5.6ºC.

The estimate of increase in summer mean daily minimum temperature is 2.8ºC – 4.7ºC.

The estimate of change in winter mean precipitation is 12% – 20%.

The estimate of change in summer mean precipitation is –17% to –28%.

6. Summary of key risks to North Yorkshire County Council from a changing climate

Regional and local climate data has been collated to provide likely scenarios for key services (receptors) and is presented in Table 2 below.

Severity and likelihood of incidents were scored by service representatives during the interviews and workshops, and these scores have been multiplied to give the colour-coded level of risk (a small worked example follows the thresholds below). The table gives the service type, the likely impact and consequences of future climatic conditions and a risk rating for now, 2020, 2040 and 2080.

The risk of negative issues is quantified as follows:

1-9 = low (green)

10-15 = medium (amber)

16-25 = high (red)

The opportunity for positive outcomes is indicated as follows:

Pale blue indicates a low level of opportunity

Dark blue indicates a high level of opportunity
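As a small worked example of the scoring scheme described above, the sketch below multiplies a severity and a likelihood score (each assumed here to run from 1 to 5, which is what the 1–25 range implies) and bands the product into the colour-coded risk levels. The example scores are invented.

```python
# Risk rating = severity x likelihood, banded into the report's colour codes.

def risk_rating(severity, likelihood):
    score = severity * likelihood
    if score <= 9:
        band = "low (green)"
    elif score <= 15:
        band = "medium (amber)"
    else:
        band = "high (red)"
    return score, band

# e.g. a flooding impact scored severity 4, likelihood 3 for the 2040s:
print(risk_rating(4, 3))   # (12, 'medium (amber)')
```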

Could these factors have a negative effect on the production of timber frame structures due to our high import levels, making it more expensive? Or are there more telling factors that drive the public's opinion towards choosing the brick method of construction over timber? Sustainable Homes suggests that timber is possibly the only truly renewable material used within the construction industry and claims that timber construction is the only environmentally friendly option available, with low-carbon, near carbon-neutral properties. The manufacture of wood products normally requires less energy than that of its competitors: brick has to be manufactured, stone quarried, and cement quarried and mixed, producing large amounts of CO2 that enter our atmosphere and contribute to global warming.

Timber Frame

Timber frame systems have always been popular with self-builders because they are fast and convenient. The main components are assembled in the factory and transported to site, reducing waste, and another benefit is that one company deals with everything concerned with the new build. Many self-builders also like the idea that the main frame is constructed from a sustainable resource, where the timber is guaranteed to have come from renewable sources and carefully managed forests. However, if timber frame is your choice you should remember that the law of diminishing returns applies: the energy savings achieved by super-insulated walls are surprisingly small and should be balanced against not just installation costs but also the loss of internal floor space in situations where the planners dictate a maximum area for the footprint of the house.

The benefits of timber framed construction

Quick build times reduce site costs, and the frame is quick and easy to weatherproof, allowing the early introduction of other trades. Energy usage is low where locally produced materials are used, and recycled materials such as hardcore, timber and slates can be incorporated. There is also the benefit of reduced waste, because most of the structure is factory made. If designed correctly, the structure will last well beyond its 60-year design life, and the end product is highly energy efficient and fast to heat, engineered to exact measurements, while traditional procurement methods can still be used.

The disadvantages of timber frame construction

There is a lack of building teams that can build these structures and a lack of experience in the construction methods; there are transportation and carriage costs; and there is the problem of leaving the structure open to the elements, which could lead to future decay issues. All timber used should also be adequately fire protected.

Advantages of Masonry Walls

For masonry built properties, the skills and materials are readily available in the UK. Masonry cavity walling is almost certainly the cheapest system for a new self-build, although the difference is marginal on a one-off house, and so relative cost should be considered in the context of the other pros and cons.

Masonry built structures give a building a feeling of solidity, as the blocks provide a high level of acoustic mass, helping to keep out noise from outside the building. Building internal partition walls from masonry, as opposed to timber stud walls covered with plasterboard, further enhances the feeling of solidity and provides soundproofing between rooms. The high strength of masonry walls allows the option of using concrete upper floors rather than conventional timber floor joists. This provides good sound insulation between storeys, as well as making it possible to build first floor partition walls in solid masonry rather than in timber studwork, extending the qualities of solidity and sound deadening to the upper rooms. Dense blockwork also provides a solid fixing for built-in furniture, kitchens, wardrobes, curtain rails, pictures and so on. However, ordinary dense concrete blockwork, even with the addition of steel, is a poor insulator, and so in order to meet the energy requirements insulation has to be added to the wall structure. It is possible, however, to achieve extremely high levels of energy efficiency using masonry construction. One way of improving thermal performance is to use lightweight concrete blocks, also known as aircrete. These have a proportion of air added into the mix during manufacture, creating tiny air bubbles which act as an insulant. The disadvantage is that the more air that is added into the blocks, the weaker they become. This can be a big problem where fixings are needed for heavy furniture or curtain rails.

Masonry structures are the oldest type of structure. They are built using masonry units bonded with mortar. The masonry units may be:

 Clay Bricks

 Concrete

 Stone

Bricks

Brick is a solid building unit of standard size and weight. Its history traces back thousands of years. Clay bricks are made of fired clay. The composition of clay varies over a wide range; usually clays are composed mainly of silica (grains of sand), alumina, lime, iron, manganese, sulphur and phosphates. Clay bricks have an average density of 125 pcf. Bricks are manufactured by grinding or crushing the clay in mills and mixing it with water to make it plastic. The plastic clay is then moulded, textured, dried and fired.

Bricks are manufactured in different colours, depending on the firing temperature during manufacture. The firing temperature for brick manufacturing varies from 900°C to 1200°C (1650°F to 2200°F).

 Concrete Blocks

 Structural Clay Tiles

 Stone

1. As a Structural Building Unit

Clay or burnt bricks are strong, hard, durable, and resistant to abrasion and fire, which makes them suitable for use in:

 Buildings

 Bridges

 Foundations

 Arches

 Pavement (Footpath, Streets)

2. As an Aesthetic Surface Finish

Bricks can be used in different colours, sizes and orientations to get different surface designs.

 In Pavements

 As Facing Brick

 For Architectural Purposes

3. As a Fire Resistant Material

Advantages of Bricks

 Economical (Raw material readily  available)

 Hard and durable

 Compressive strength is good  for basic construction

 Different orientations and sizes give diverse surface textures

Low maintenance cost required

Demolition of brick structures is very easy, less time consuming and hence economical

Recyclable

Fire resistant

Produces less environmental pollution during manufacture

Disadvantages of Bricks

 Time consuming

 Cannot be used in high seismic zones

 Some bricks absorb water easily.

Very low tensile strength

 Rough surfaces of bricks may cause mould growth

 Bricks can discolour over time.

References

http://www.cyprus-property-buyers.com/

http://www.aboutcivil.org/

http://www.the-self-build-guide.co.uk/masonry-construction.html

http://thisisbuildingmaterials.blogspot.co.uk/2012/03/pros-and-cons-of-brick-and-block.html

History of Ras Al Khaimah

Introduction

The Emirate of Ras Al Khaimah is one of the seven emirates which together make up the federation of the United Arab Emirates (UAE). The federation was established on December 2, 1971 and initially included Abu Dhabi, Dubai, Sharjah, Ajman, Umm Al Qawain and Fujairah; Ras Al Khaimah joined in February 1972. Ras Al Khaimah is viewed as one of the UAE's strongest growing tourism destinations. Offering unspoiled natural beauty and stunning scenery, and situated on the west coast of the United Arab Emirates, Ras Al Khaimah's 64 kilometres of coastline provide magnificent sandy beaches and crystal-clear blue waters. Visitors to the emirate can enjoy an abundance of outdoor activities, from swimming, fishing and golf to exploring the desert by safari and experiencing a traditional Bedouin desert camp. To escape the heat, Ras Al Khaimah also has the Iceland Water Park, which offers a large range of water slides and entertainment for the whole family. Owing to its geographical position at the northern tip of the United Arab Emirates, Ras Al Khaimah literally means 'top of the tent'; it was originally known as 'Julfar'. A rich archeological destination, Ras Al Khaimah features a wealth of landscapes, from rugged mountain peaks to coastal zones and desert. To the far north the emirate borders the Sultanate of Oman, where the sheer rocky slopes of the Hajar Mountains appear to rise out of the sea. The climate is sub-tropical and semi-arid, with warm temperatures, rare rainfall and blue skies for most of the year. Between November and April the average daytime temperature reaches 28 degrees centigrade, whereas in the summer months between June and August temperatures rise above 35 degrees centigrade with high humidity.

Contents:

1. History of Ras Al Khaimah

2. Ubaid Period (5000 – 3800 BC)

3. Hafeet Period (3200 – 2600 BC)

4. Umm al-Nar Civilization (2600 – 2000 BC)

5. Wadi Suq Culture Period (2000 – 1600 BC)

6. Iron Age (1250 – 300 BC)

7. The Hellenic and Parthian Era (300 BC -300 AD)

8. The Sasanian Occupation Era (300 AD – 632 AD)

9. The Abbasids Era (750 – 1’250 AD)

10 The Later Islamic Era (14th – 19th century)

History of Ras Al Khaimah

The History of the Ras Al Khaimah Ruling Dynasty. The late Sheikh Saqr bin Mohammad al Qasimi was one of the world's longest-serving rulers, having come to power as Emir of the Gulf state of Ras Al Khaimah in 1948 and ruling until he died in October 2010, aged 92. Born in 1918 in Ras Al Khaimah and educated locally, Sheikh Saqr became the sixth al Qasimi ruler of Ras Al Khaimah and led his people from the era of pearl fishing into the prosperity of the 21st century. As oil exports from Abu Dhabi grew in the mid-1960s, a system of aid to the poorer Trucial States was established, and Sheikh Saqr played a significant part in directing the flow of aid toward the northern emirates. He was also personally involved in the discussions leading to the formation of the UAE, and Ras Al Khaimah later joined the union as the seventh emirate in February 1972.

Its rulers were:

• Sheikh Rahma Al Qasimi: 1708–1731

• Sheikh Matar bin Butti Al Qasimi: 1731–1749

• Sheikh Rashid bin Matar Al Qasimi: 1749–1777:

• Sheikh Saqr bin Rashid Al Qasimi: 1777–1803

• Sheikh Sultan bin Saqr Al Qasimi (died 1866) (1st time): 1803–1808

• Sheikh Hasan bin `Ali Al Anezi: 1808–1814

• Sheikh Hasan bin Rahma: 1814–1820

• Sheikh Sultan bin Saqr Al Qasimi (2nd time): 1820–1866

• Sheikh Ibrahim bin Sultan Al Qasimi: 1866 – May 1867

• Sheikh Khalid bin Sultan Al Qasimi (died 1868): May 1867 – 14 April 1868

• Sheikh Salim bin Sultan Al Qasimi: 14 April 1868 – 1869

• Sheikh Humayd bin Abdullah Al Qasimi (died 1900): 1869 – August 1900

• Sheikh Salim bin Sultan Al Qasimi: 1909 – August 1919

• Sheikh Sultan bin Salim Al Qasimi: August 1919 – 10 July 1921

• Sheikh Sultan bin Salim Al Qasimi: 10 July 1921 – February 1948

• Sheikh Saqr bin Mohammad Al Qasimi (1918–2010): 17 July 1948 – 27 October 2010

• Sheikh Saud bin Saqr Al Qasimi: 27 October 2010 – present

The appointed heir presumptive is currently Muhammad bin Saud al Qasimi, son of the current Ruler of the Emirate.

Following the death of Sheikh Saqr in October 2010, Sheikh Saud bin Saqr Al Qasimi was appointed Supreme Council member and Ruler of Ras Al Khaimah. Born in Dubai in 1956, Sheikh Saud completed both his primary and secondary education in Ras Al Khaimah and later completed his studies in economics at the University of Michigan, USA. The Emirate of Ras Al Khaimah is a cradle of ancient civilization. It has a great archeological legacy and a rich cultural history. The National Museum of Ras Al Khaimah holds numerous artifacts dating back thousands of years. The archeological finds show that the ancient history of Ras Al Khaimah passed through several important periods:

Ubaid Period (5000 – 3800 BC)

This is the oldest era known so far in the history of Ras Al Khaimah. Not far from Al Jazeerah Al Hamra, extensive remains of structures and outdoor roofs, along with some pottery fragments, have been found. The pottery fragments resembled ceramic and pottery vessels found in Mesopotamia in the same period. These remains are indicative of early human activity here.

Hafeet Period (3200 – 2600 BC)

This period is known for its remains of graves and burial grounds, which were built on high mountains. They were made of local stone and shaped like beehives. Each grave consisted of one or two small rooms. These were found in the areas of Khatt and Wadi al-Bih, as well as in Wadi al-Qarw.

Umm al-Nar Civilization (2600 – 2000 BC)

The Umm al-Nar civilization existed in the middle of the third millennium BC. The period is well known for its round graves, whose outer walls were built of smooth, engraved and polished stones. Evidence suggests that trade between Mesopotamia and the Indus Valley (south-east of Iran) flourished during the period, when the area was well known as Majan.

Wadi Suq Culture Period (2000 – 1600 BC)

Numerous graves were found in Ghaleelah, Al Qirm, Al Rams, Qarn Al Harf, Khatt and Athan. Most of the Wadi Suq graves were huge and were built above ground. Their foundations were constructed of limestone. The personal belongings and remains found in these graves are currently on display in the Ras Al Khaimah National Museum.

Iron Age (1250 – 300 BC)

The Iron Age here is best known from finds in the southern part of Ras Al Khaimah, particularly in Wadi Alkor, Wadi Muna'i, Fashkha and Wa'ab, where various graves were found. Some of them were elongated with four rooms, others were shaped like a horseshoe, and a few others, found in Naslah, were round in shape. One of the most significant discoveries was a stone with the drawing of a phoenix engraved on it. The drawing of this mythical bird resembled those painted in Assyrian palaces in northern Iraq.

The Hellenic and Parthian Era (300 BC – 300 AD)

The later pre-Islamic era, the Hellenic and Parthian period, is evident in the northern and southern parts of Ras Al Khaimah. Survey projects have led to the discovery of several historical sites in the northern and southern areas of Ras Al Khaimah. These sites include individual tombs and reused older graves found in Shamal, Asimah, Wa'ab and Wadi Muna'i.

The Sasanian Occupation Era (300 AD – 632 AD)

A group of archeologists has identified a small site on the island of Hulaylah that was occupied during the Sasanian period. Recently, two further sites were found in Khatt. The most noteworthy discovery of this era during the three-stage excavation campaign was a Sasanian fortress. It was built primarily to maintain full control over the fertile fields in the north of Ras Al Khaimah. This monument was abandoned when Islam was adopted in the UAE region.

The Abbasids Era (750 – 1250 AD)

There are two areas in Ras Al Khaimah which helped it to play a great part as a bustling trade route in the early Islamic era. One of these places was Al Khoush, a settlement abandoned by the Sasanians during the spread of Islam in the area. The second place is situated on the island of Hulaylah; it was a structure made of palm leaves. Both sites were known as part of Julfar, an old town well known to Muslim travellers and geographers such as Al Maqdisi in the tenth century and Al Idrisy in the twelfth century. Some Abbasid pottery and Chinese porcelain vessels imported from Iraq and elsewhere were found in these two areas. The artifacts show how deeply the people of Julfar were interested and involved in trade at that time.

The Later Islamic Era (14th – 19th century)

In the middle of the fourteenth century, Kush and the island of Hulaylah were abandoned. People began to settle on the sandy beaches close to the coast. This area was called Julfar. It was discovered by the renowned excavator Piatris in 1968. Numerous archeological expeditions were dedicated to the area. They all showed that Julfar was a large populated area from the fourteenth to the seventeenth century. Julfar was famous for its extensive and thriving trade with distant regions. The porcelain and pottery finds from here were imported from Arab and European countries.

Main 18 Herbs that Promote Long Life

Of the many edible herbs, there are a few that can be classed as "life prolonging." While some of the better-known herbs have a reputation for boosting the digestive or immune system, or for helping people become more mentally alert or have better vision, there are some powerful herbs that specifically work to prevent serious sicknesses and diseases, especially in the area of heart health and general cardiovascular fitness.

A general survey of the medical evidence supporting claims made for herbs was published on WebMD. It discussed many of the ways in which herbs help virtually every known component of the human body, from heart health to brain function. Remember that many herbs have been used for more than a thousand years as remedies for common ailments. Garlic and echinacea are only two examples of herbs that have a long, rich history as healing agents.

Any herbal plant that can improve heart function, or simply help a person achieve a better state of health, can arguably prolong that person's life. Heart attacks are one of the most common causes of death in industrialized countries. Likewise, herbs that work to support the immune system can keep the body healthier overall and go a long way toward increasing lifespan. Here is a brief overview of the 18 best-known herbs that may add years to a person's life, either through better heart health or by addressing serious ailments and the immune system. The name of each herb is followed by a short description of its medicinal effects:

1. Garlic

Garlic has been used for a very long time to help control high blood pressure and to prevent heart disease. In addition, some people use it for its bacteria-killing properties and its suspected role in preventing certain types of cancer.

2. Hawthorn

This herb works to lower blood pressure, reduce heart rate and open the arteries. It is reported that hawthorn takes a while before the benefits become evident.

3. Guggul

A cholesterol-specific herb that reduces "bad cholesterol" even before it can reach the blood.

4. Horse chestnut

This age-old remedy, long popular in Europe, strengthens veins and capillaries, thereby reducing swelling. Healthier veins are an all-round form of protection against a wide array of heart ailments.

5. Cinnamon

This delicious little herb is known to help the circulatory system in various ways, primarily by reducing high blood sugar levels and controlling cholesterol.

6. Dandelion

Long beloved in the West for its use in home-brewed wine, dandelion is an effective way to get blood pressure under control. The chemical mechanism by which this works is interesting: dandelion actually reduces the amount of fluid in the body, thereby lowering overall blood pressure.

7. Angelica root

People who have weak hearts often take angelica root as a remedy. Over many years, this unusually named root has built a reputation as a general strengthener of the circulatory system and especially of the heart itself.

8. Coriander

Working in much the same way as angelica root, coriander is another herb that works in various ways, and through several chemical processes, to invigorate and support the human cardiovascular system.

9. Cayenne

This well-known herb is often used to slow the rate of bleeding in accident victims. It also works by making blood vessels more elastic. As a result, cayenne is an effective agent for maintaining normal blood pressure.

10. Motherwort

This herb has been used for a very long time, yet only now are researchers figuring out how it works. It contains an uncommon alkaloid, leonurine, which directly relaxes the heart muscles when taken in therapeutic doses.

11. Gynostemma

A potent herb for healing wounds and making the heart stronger. It also affects the entire circulatory system in beneficial ways.

12. Echinacea

An internationally popular way to fight infections, kill harmful microorganisms, and reinforce the immune system. Echinacea is one of the best-selling herbal supplements in retail stores in the U.S. and Europe.

13. Astragalus

Well known among the ancient inhabitants of China, this herb is a powerful agent that strengthens the immune system.

14. Elderberry

Infectious influenza is a serious worldwide problem, especially in less developed countries. Elderberry counters the flu virus and is an effective way to limit its spread within large populations.

15. Andrographis

Granted, a cold is perhaps not a critical issue, but this herb is a powerful cold remedy.

16. Kelp

An old remedy for ailments of various sorts, kelp is one of the most commonly consumed herbs on earth, especially in Asia. The key to its power is its high iodine content.

17. Yucca Root

Arthritis sufferers have long known the relief provided by this root with the odd-sounding name.

18. Bramble

Arthritis and hay fever, as well as anemia, are the primary ailments treated with bramble.

Herbs have been in existence longer than humans, and these ancient plants provide much more than a source of food flavouring. It is thought that some of the earliest human communities consumed herbs, and drinks made from them, to treat various ailments. Now, medical science is investigating the exact chemical processes that make particular plants and herbs virtual healers among the world's flora.

Whatever one's particular reason for taking herbs, one of the most important is the prolonging of human life. Herbs play an important part in the life of our species and will probably continue to do so for as long as people wander the planet.

Stalling of aircraft

The stalling of an aircraft has always been a key feature in flight history. However, what makes an aircraft stall? What methods are being used to determine when a stall may occur? And what is being done to prevent a stall?

This report will discuss aspects of the stall. These include:

• Defining what a stall is

• Boundary layer

• Aerodynamic and control characteristics

• Methods of sensing a stall

o Control surface buffet

o Stall vane

o Stall strips

o Suction activated horn

o Angle of attack sensor and stall warning computer

o Washout

o Vortex generators

• Stall warning and protection

o Stick shaker

o Stick pusher

o Flight deck indicators

DEFINITION OF A STALL

A stall occurs when an aircraft's wing exceeds the angle of attack at which it can produce any more lift; beyond that point lift falls away sharply and the aircraft starts to lose height. This happens when an angle of about 15⁰ has been reached and the lift generated by the airflow across the aerofoil has reached its maximum; this angle is called the critical angle of attack. The normally smooth airflow on the upper surface of the wing becomes turbulent and chaotic, which reduces the lifting characteristics very quickly. As lift decreases, drag increases significantly. The speed below which the wing can no longer support the aircraft is the stalling speed, and it varies with the shape of the wing and the position of the flaps. Stalls are more likely to happen when performing a maneuver such as a steep bank, because the aircraft can exceed its critical angle of attack (Figure 2). The leading edge of the wing is crucial and needs to be clear of any contamination, such as ice. Ice on the leading edge can greatly reduce the lifting characteristics and lower the critical angle of attack, making it easier to stall. A number of stalls can occur during flight (a simple illustrative check based on the critical angle of attack is sketched after the list below); these are:

• Secondary stall – If the first stall has not been fully recovered, a secondary stall may occur, or the aircraft may enter a spin. A secondary stall results if the recovery is attempted too quickly, before there is enough flying speed to pull out of the stall. To exit the stall, the elevator back pressure should be released; this allows the aircraft to recover naturally and return to steady level flight.

• Cross-control stall – If a steeply banked turn from base to final overshoots the extended centerline, it tends to result in a stall. This usually happens when the pilot rushes the turn, typically by using rudder to turn the aircraft faster. The pilot then has to use aileron to hold the bank angle, which results in the nose dropping and requires back pressure on the control column.
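As a simple illustration of the critical angle of attack discussed above, the sketch below shows the basic comparison an angle-of-attack-based stall warning system performs: the measured angle is compared with the critical angle, and a warning fires a few degrees early. The 15⁰ figure comes from the text; the warning margin and readings are illustrative examples, not certified limits from any real system.

```python
# Illustrative stall-warning check based on angle of attack (AoA).

CRITICAL_AOA_DEG = 15.0   # critical angle of attack used in the text
WARNING_MARGIN_DEG = 3.0  # hypothetical margin at which the warning fires

def stall_status(aoa_deg):
    if aoa_deg >= CRITICAL_AOA_DEG:
        return "stalled"
    if aoa_deg >= CRITICAL_AOA_DEG - WARNING_MARGIN_DEG:
        return "stall warning"
    return "normal"

for aoa in (5.0, 12.5, 16.0):
    print(aoa, stall_status(aoa))   # normal, stall warning, stalled
```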

BOUNDARY LAYER

The boundary layer was first described by Ludwig Prandtl, a German engineer who studied aerodynamics. His theory was that there is a very thin layer of air flowing over an aerofoil. In the layer directly on the surface, the molecules are motionless. The airflow outside this area of stagnant air moves faster, and the airflow at the top of the boundary layer moves at the same velocity as the airflow outside the boundary layer, known as the free stream velocity. The speed within the boundary layer depends on the shape of the wing, the angle of attack and viscosity.

As the airflow meets the aerofoil, the first point of contact is the stagnation point. The air then flows around the cambered aerofoil as laminar flow. In this section the airflow is uniform and organized, and it creates a region of suction which generates lift, depending on the camber of the aerofoil and the angle of attack. As the flow continues it reaches a section called the transition point, where the laminar flow becomes turbulent and chaotic. The next stage is the separation point; this is where the turbulent flow leaves the surface of the aerofoil and becomes the wake (Figure 3).

As the aerofoil's angle of attack changes, the characteristics of the boundary layer change too. As the aircraft gets closer to the critical angle of attack, which is about 15⁰, it begins to experience a change in pressure along the upper surface of the wing. The pressure changes from ambient pressure at the leading edge of the aerofoil to a lower pressure over the surface of the aerofoil, returning to ambient pressure at the trailing edge. Flow separation tends to occur where the flow runs from low to high pressure (an adverse pressure gradient). If the pressure gradient becomes too large, the pressure forces overcome the momentum of the airflow and the flow leaves the aerofoil. If the aircraft keeps increasing the angle of attack, the adverse pressure gradient grows and the aircraft loses its lift characteristics altogether (Figure 4).

AERODYNAMIC AND CONTROL CHARACTERISTICS

As an aircraft stalls, the whole wing does not necessarily stall at once; certain sections stall first. Wing tip stall is a feature where the tips of the wing stall first, which causes a significant problem: loss of aileron control. With no aileron control there is no way to roll the aircraft level and maneuver it out of the stall. This type of stall generally happens on swept wing aircraft. Along with the reduction in lateral control, the centre of pressure moves forward (because the stalled tips lie aft of the centre of gravity), causing a nose up pitching moment, which is not desired if a stall does occur. If the root stalls first, the pilot can sense the turbulent airflow striking the horizontal stabilizer, but a tip stall gives no such warning.

As a tip stall occurs, the centre of pressure, and with it the aerodynamic centre of the aircraft, moves forward. If the aerodynamic centre moves ahead of the centre of gravity, a nose up pitching moment develops without any effective horizontal stabilizer control (Figure 5).

CONTROL SURFACE BUFFET

One natural form of stall warning is control surface buffet. As the aircraft approaches the critical angle of attack, the flow over the upper surface of the aerofoil becomes turbulent and loses its smooth attachment, leaving the surface at the separation point described for the boundary layer. If this turbulent flow then strikes the horizontal stabilizer, the airframe and controls shake; this buffet acts as a stall warning.

STALL VANE

A stall vane is a small tab on the leading edge of the aerofoil, sitting within the boundary layer at the stagnation point, the region where the oncoming flow is brought to rest and the local pressure is highest. As the angle of attack changes, the stagnation point moves: an increasing angle of attack moves it rearward along the lower surface, while a decreasing angle of attack moves it forward.

Therefore, this device informs the pilot of the angle of attack, inferred from the position of the stagnation point, and warns when the aircraft is near the stall (Figure 6).

The spring-loaded vane points into the airflow on the lower surface of the aerofoil, where the stagnation point is located. The airflow at the stagnation point pushes the vane against its spring, holding it in the direction of the stagnation point.

As a stall approaches, the airspeed decreases and the stagnation point moves rearward, allowing the vane to flip forward. This forward movement closes a switch that sends a warning signal to the cockpit so that the pilot can take the appropriate recovery action, as sketched below.
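A minimal sketch of the vane logic follows; the chordwise travel of the stagnation point per degree of angle of attack and the vane position are assumed, illustrative numbers, not values from any particular aircraft.

# Illustrative stall-vane switch: the stagnation point is modelled as moving
# aft along the lower surface by a fixed fraction of chord per degree of angle
# of attack, and the warning switch closes once it passes the vane's position.
VANE_POSITION_FRACTION = 0.035   # vane location aft of the leading edge (assumed)
SHIFT_PER_DEGREE = 0.003         # stagnation-point travel per degree AoA (assumed)

def stagnation_point(aoa_deg: float) -> float:
    """Chordwise position of the stagnation point on the lower surface (fraction of chord)."""
    return SHIFT_PER_DEGREE * max(aoa_deg, 0.0)

def vane_warning(aoa_deg: float) -> bool:
    """True when the stagnation point has moved aft of the vane, flipping it forward."""
    return stagnation_point(aoa_deg) >= VANE_POSITION_FRACTION

if __name__ == "__main__":
    for aoa in (5.0, 10.0, 12.0, 14.0):
        print(f"AoA {aoa:4.1f} deg -> warning: {vane_warning(aoa)}")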

STALL STRIP

A stall strip uses the behaviour of the boundary layer to control where a stall begins. As the aerofoil reaches a high angle of attack, the stagnation point lies under the aerofoil, so the air has to flow up and around the leading edge to reach the upper surface. Without a strip, this oncoming airflow stays attached to the aerofoil. Because the stall strip has a sharp edge, the airflow can no longer remain attached as easily and begins to separate from the aerofoil before the critical angle of attack is reached. As a result, an early stall develops directly behind the stall strip before the full surface of the wing stalls.

These stall strips are small devices placed close to the root of the wing. The reasoning behind placing them at the root is to force a root stall: it is more desirable for the root to be the first area to stall, because the ailerons at the tips remain effective and can be used to roll the wings level and recover from the stall (Figure 7).

Stall strips add a further benefit: because the root stalls first, the turbulent wake reaches the tail sooner, so the buffet warning also occurs sooner and the pilot receives it much more quickly.

SUCTION ACTIVATED HORN

The suction-activated horn is another method of detecting an approaching stall. As with most stall warning devices, the horn is located on the leading edge of the wing, and the pressure difference over the wing triggers it. When the aircraft approaches a stall, the stagnation point moves under the leading edge, which causes a pressure reduction on the upper surface. This low pressure passes over the horn port, and the resulting suction activates the horn. As the angle of attack increases towards the critical angle of attack, the horn becomes louder, providing an audible warning in the cockpit.

ANGLE OF ATTACK SENSORS AND STALL WARNING COMPUTER

The angle of attack sensor, or probe, works by sensing the direction of the local airflow. It is a small device mounted on the outside of the fuselage. The probe continually senses the pressure difference between its upper and lower surfaces and effectively acts as a miniature aerofoil, behaving like a wing on a much smaller scale. It aligns with the direction of the airflow, and the angular position of the probe is converted into an electrical output that feeds the stall warning computer (Figure 8).
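The sketch below is only an illustration of the kind of comparison a stall warning computer might perform, not an actual avionics implementation: the measured angle of attack from the probe is compared against a warning threshold adjusted for the current flap setting. All threshold values are assumed example numbers.

CRITICAL_AOA_DEG = 15.0
WARNING_MARGIN_DEG = 3.0                             # warn this far before the stall (assumed)
FLAP_AOA_CREDIT_DEG = {0: 0.0, 15: 1.0, 30: 2.0}     # flaps raise the usable AoA (example values)

def stall_warning(measured_aoa_deg: float, flap_setting_deg: int) -> bool:
    """Return True when the stall warning should be raised."""
    threshold = (CRITICAL_AOA_DEG
                 + FLAP_AOA_CREDIT_DEG.get(flap_setting_deg, 0.0)
                 - WARNING_MARGIN_DEG)
    return measured_aoa_deg >= threshold

if __name__ == "__main__":
    print(stall_warning(10.0, 0))   # False: well below the warning threshold
    print(stall_warning(12.5, 0))   # True: within 3 degrees of the critical angle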

WASHOUT

Washout is a decrease in the angle of incidence along the wing from root to tip. The root and the tip are therefore not set at the same angle, and the wing looks slightly twisted. The purpose of this is to make sure the root of the wing stalls before the tip. Having the root stall first is desirable because the ailerons remain effective, allowing the pilot to control the aircraft and recover from the stall before it worsens; it also gives extra protection against spinning. In addition to improving stall behaviour, washout reduces wingtip vortices and therefore drag.

Washout is therefore a method of controlling where the stall begins. Stalling at the wing tips is dangerous because it can send the aircraft into a spin, which can result in a crash and is especially dangerous at low altitude. For this reason the wing is designed so that the inboard section stalls first, leaving the ailerons at the tips usable to maneuver the aircraft out of the stall. This is referred to as a flat stall: the stall occurs at the root and is easier to control. Generally, this type of stall occurs on light aircraft.

There are two methods of preventing wingtip stall: geometric twist (Figure 9a) and aerodynamic twist (Figure 9b).

Geometric twist: the outboard section of the wing has a lower angle of incidence than the inboard section, as if the wing were twisted slightly. The incidence is typically reduced by about 2–3⁰ towards the tip; the outboard section is rotated downward, giving a gradual decrease in the local angle of attack, as sketched below.
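The following sketch (with assumed numbers) illustrates geometric washout: the angle of incidence falls linearly by about 2.5 degrees from root to tip, so for a given aircraft angle of attack the root section reaches the 15-degree critical angle before the tip does.

CRITICAL_AOA_DEG = 15.0
WASHOUT_DEG = 2.5          # assumed total root-to-tip twist

def local_aoa(aircraft_aoa_deg: float, spanwise_fraction: float) -> float:
    """Local angle of attack at a spanwise station (0 = root, 1 = tip)."""
    return aircraft_aoa_deg - WASHOUT_DEG * spanwise_fraction

if __name__ == "__main__":
    aircraft_aoa = 15.0    # the root is just reaching the critical angle
    for station in (0.0, 0.5, 1.0):
        aoa = local_aoa(aircraft_aoa, station)
        state = "stalled" if aoa >= CRITICAL_AOA_DEG else "flying"
        print(f"span {station:.1f}: local AoA {aoa:4.1f} deg -> {state}")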

Aerodynamic twist differs from geometric twist in that the angle of incidence is the same along the span of the wing, but the aerofoil section changes: the outboard section uses a profile that can reach a higher critical angle of attack than the inboard section. Because the tip can tolerate a higher angle of attack before the flow separates, the root of the wing always stalls first. One clear benefit is that when the flow over the inboard section separates, the turbulent flow strikes the fuselage, giving an early warning to the pilot.

VORTEX GENERATOR

Vortex generators are small components placed on the wings that work with the boundary layer. They bring more kinetic energy into the boundary layer to improve the performance of the aircraft at high angles of attack or at low speeds (Figure 10).

Within the boundary layer, as discussed earlier, the laminar flow changes to turbulent flow at the transition point. Even so, there remains a very thin sub-layer close to the surface that contains little turbulence because of viscous damping. If this thin layer slows down too much, it causes early separation and therefore a stall. To avoid this, the boundary layer needs to be re-energised. Vortex generators do exactly that: they create small vortices in their wake that mix higher-energy air down into the boundary layer (Figure 11).

Fitting vortex generators gives the aircraft a higher usable critical angle of attack and a lower stalling speed. Their location is crucial: they need to sit near the transition point. Finding this location is challenging because it changes with the flow conditions and the angle of attack. If the vortex generators are too close to the leading edge they create a large amount of drag; if they are too far back they have little effect on the critical angle of attack. The best location can be determined by computer simulation or wind tunnel testing.

STICK SHAKER

This is a device designed to shake the control stick to alert the pilot of an oncoming stall. It forms part of a stall protection system that uses a sensor mounted on the outside of the aircraft, on the wing. The sensor detects the angle of attack and relays the information to an avionics computer, where the data is processed; if a stall is imminent, the stick shakes vigorously and an aural warning sounds (Figure 12).

The main components of this device are an electric motor connected to an unbalanced flywheel. When the shaker is activated, it produces a forceful, noisy shaking of the control column. The shaking is intended to have a similar frequency and amplitude to the buffet caused by airflow separation as the aircraft approaches a stall. The device acts as a back-up to the main alert tone in the cockpit.

STICK PUSHER

A stick pusher is fitted to aircraft with poor stall handling characteristics so that an aerodynamic stall can be prevented altogether. A mechanical or hydraulic actuator pushes the control column forward when the aircraft reaches a pre-determined angle of attack; once the angle of attack has reduced, the pusher releases its pressure (Figure 13).

Stall handling is subject to stringent safety requirements; both the military and the civilian industry impose very demanding standards.

If an aircraft cannot meet these requirements aerodynamically, another means of meeting them must be found. Designers therefore introduced a device that acts automatically, reducing the angle of attack as the aircraft approaches the critical angle of attack.

The parameters the system takes into account include:

• Angle of attack

• Wing flap setting

• Load factor

Stick pushers have one crucial flaw: there is a chance the pusher can activate spuriously, when there is no need for it. If such a system is fitted, the whole crew must be aware that it may act on its own, and the pilot can also decide whether to keep the device armed or switched off.

An important distinction to note is that a stick pusher is a stall avoidance device, whereas a stick shaker is a warning device. A combined sketch of how the two might be sequenced is shown below.
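The sketch below uses assumed thresholds (it does not represent any certified system) to show how a stick shaker (warning) and stick pusher (avoidance) might be sequenced from the same inputs listed above: angle of attack, flap setting and load factor.

import math

CRITICAL_AOA_DEG = 15.0
SHAKER_MARGIN_DEG = 3.0    # shaker fires this far before the stall (example)
PUSHER_MARGIN_DEG = 1.0    # pusher fires closer to the stall (example)
FLAP_AOA_CREDIT_DEG = {0: 0.0, 15: 1.0, 30: 2.0}   # example values

def protection_state(aoa_deg: float, flap_deg: int, load_factor: float) -> str:
    # Higher load factor (e.g. in a turn) uses up angle-of-attack margin, so
    # shrink the thresholds by a small amount per extra g (assumed value).
    g_penalty = 1.5 * max(load_factor - 1.0, 0.0)
    stall_aoa = CRITICAL_AOA_DEG + FLAP_AOA_CREDIT_DEG.get(flap_deg, 0.0) - g_penalty
    if aoa_deg >= stall_aoa - PUSHER_MARGIN_DEG:
        return "PUSHER: push nose down"
    if aoa_deg >= stall_aoa - SHAKER_MARGIN_DEG:
        return "SHAKER: stall warning"
    return "normal"

if __name__ == "__main__":
    bank_deg = 45.0
    n = 1.0 / math.cos(math.radians(bank_deg))      # load factor in a level turn
    for aoa in (9.0, 12.0, 14.0):
        print(f"AoA {aoa:4.1f} deg, n={n:.2f}: {protection_state(aoa, 0, n)}")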

AURAL AND VISUAL INDICATIONS IN THE FLIGHT DECK

Usually found on light aircraft, a small component on the leading edge of the wing provides an aural warning. The device is very similar to the suction-activated horn discussed earlier: as the angle of attack increases towards the critical angle of attack, this component, called the reed, sounds an alert in the cockpit to notify the pilot of an impending stall (Figure 14).

The suction-activated horn works in a very similar way and is also located on the leading edge of the wing. As low pressure passes over the horn port, the suction triggers the horn, which gets louder as the angle of attack approaches the critical angle of attack. The audible warning in the flight deck prompts the pilot to take action.

Conclusion

To conclude this report, the main aspects of stalling have been covered. Firstly, the cause of a stall has been explained with reference to the boundary layer, together with the types of stall that can occur. Secondly, methods of warning of or preventing a stall were described, including:

• Control surface buffet

• Stall vane

• Stall strips

• Suction activated horn

• Angle of attack sensor and stall warning computer

• Washout

• Vortex generators

Lastly, the systems used to protect the aircraft and warn the pilot of an impending stall were described; these include the stick pusher, the stick shaker, and visual and aural warnings.

Overall, these sections give the reader a clearer understanding of how a stall develops, how it can be prevented, and the indications that one may be about to occur.

Vitamin D deficiency and stroke

Stroke is the second most common cause of disability worldwide (about 50% of survivors are left with a significant long-term disability) and the second most common cause of death (26). The combined direct and indirect cost of stroke in 2007 was estimated at $62.7 billion (27).

Vitamin D deficiency was highly prevalent in this study: 56.7% of CVA patients were deficient, while 42.3% had an optimal vitamin D level. These findings are consistent with previous studies, which showed a 55–78% prevalence of vitamin D deficiency in Chinese acute ischemic stroke populations [28, 29]; other studies in comparable settings in central Ethiopia and northern Nigeria have also reported a high prevalence of suboptimal 25(OH)D levels [30, 31].

This study revealed that age was significantly associated with disability in acute stroke (P value <0.05). This finding is similar to the results of other studies showing that advancing age, a non-modifiable risk factor, has a major negative influence on stroke morbidity, mortality, and long-term outcome [32-36]. The influence of age on stroke outcome is seen in both minor and major strokes.

The chronic complications of DM affect many organ systems and are responsible for the majority of the morbidity and mortality associated with the disease. Chronic complications can be divided into vascular and non-vascular complications; the vascular complications of DM are further subdivided into microvascular (neuropathy, nephropathy, retinopathy) and macrovascular complications (coronary heart disease (CHD), peripheral arterial disease (PAD), and cerebrovascular disease) (37).

This study revealed that DM was significantly associated with disability in acute stroke patients (P value <0.05), with greater disability in patients with uncontrolled DM (HbA1c >7). This finding is similar to the results of other studies showing that a history of diabetes mellitus is associated with poor clinical outcome, and that hyperglycemia in diabetic acute stroke patients predicts a poor prognosis (38). The UKPDS demonstrated that each percentage-point reduction in A1C was associated with a reduction in both microvascular and macrovascular complications, with a continuous relationship between glycemic control and the development of complications.

This study revealed that hypertension was not significantly associated with disability in acute stroke (P value >0.05). This finding is similar to the results of a study of stroke in Inner Mongolia, China (39), and of other studies (40, 41).

Although smoking is a risk factor for IHD, lung cancer and CVA, our study revealed that smoking was not significantly associated with disability in acute stroke (P value >0.05). This finding is similar to the results of other studies (42, 43).

Several possible biological mechanisms might explain the association of vitamin D deficiency with CVA. Activated vitamin D is an inhibitor of the renin-angiotensin system (44); lower 25(OH)D levels are associated with an increased risk of incident hypertension [45] and diabetes [46]; and activated vitamin D may also delay atherosclerosis by inhibiting macrophage cholesterol uptake and foam cell development [29].

There are several mechanisms by which vitamin D deficiency could exacerbate stroke injury. Vitamin D deficiency affects post-stroke inflammatory responses, which play a serious role in the pathophysiology of CVA (47, 48). Cerebral ischemia produces a characteristic post-stroke inflammatory response involving activation of astrocytes and immune cells, increased vascular and blood-brain barrier permeability, invasion of leukocytes, and cytokine production (49, 50, 51). Local immune cells (microglia) are triggered first, and peripheral immune cells subsequently gain access to the CNS as a consequence of a compromised blood-brain barrier and increased adhesion-molecule expression on the cerebral vasculature and activated immune cells (50, 52, 53). Once activated, inflammatory cells can secrete cytotoxic substances, such as further cytokines, that induce secondary damage and spread immune-cell activation and recruitment to the ischemic site (51, 53, 54).

Our findings confirm the results of previous studies suggesting that 25(OH)D is a prognostic marker of functional outcome and death in patients with acute ischemic and hemorrhagic stroke [55, 56]. Serum 25(OH)D level was a predictor of both severity at admission and functional outcome at discharge in Chinese patients with acute ischemic stroke [57]. Several studies have examined the association between vitamin D status and the functional outcome of acute stroke, and a recent study confirmed that a low 25(OH)D level predicted functional outcome at discharge and one-year mortality in a Caucasian stroke population [58].

Among patients with stroke, vitamin D deficiency is reported to predict greater severity and adverse outcomes, including recurrent strokes and death [58, 59]. Vitamin D deficiency is also associated with a greater likelihood of falls, poor muscle and bone strength, and possibly an increased fracture risk among stroke survivors (60). These findings are consistent with previous studies showing that low 25(OH)D levels are predictive of future stroke (61).

This study reveals a significant association between 25(OH)D level and disability, as measured by the NIHSS score, in male patients with acute stroke.

Controlling Local-Area Networks Using Distributed Technology

Abstract

XML [1] and compilers, while practical in theory, have not until recently been considered extensive. After years of structured research into SMPs, we disprove the technical unification of systems and massive multiplayer online role-playing games [1,2,3]. In order to accomplish this mission, we show not only that SMPs and the partition table are entirely incompatible, but that the same is true for the World Wide Web. We skip a more thorough discussion until future work.

1  Introduction

In recent years, much research has been devoted to the emulation of cache coherence; however, few have investigated the synthesis of the UNIVAC computer. The notion that cyberinformaticians cooperate with the location-identity split is largely adamantly opposed. Continuing with this rationale, to put this in perspective, consider the fact that well-known end-users usually use Byzantine fault tolerance to realize this purpose. However, replication alone can fulfill the need for secure modalities [4].

Replicated systems are particularly private when it comes to low-energy models. Existing trainable and perfect frameworks use DHCP to manage lambda calculus. Though previous solutions to this problem are bad, none have taken the large-scale method we propose here. The basic tenet of this solution is the simulation of simulated annealing. Thusly, our method is in Co-NP.

An essential method to address this problem is the refinement of architecture. Nevertheless, this method is entirely outdated [5]. Nevertheless, ambimorphic technology might not be the panacea that information theorists expected. Such a claim at first glance seems counterintuitive but regularly conflicts with the need to provide the partition table to information theorists. The basic tenet of this approach is the emulation of write-back caches. We view machine learning as following a cycle of four phases: evaluation, deployment, management, and construction. Continuing with this rationale, the impact on operating systems of this outcome has been adamantly opposed.

Konze, our new solution for authenticated information, is the solution to all of these challenges. But, although conventional wisdom states that this quandary is largely answered by the improvement of randomized algorithms, we believe that a different solution is necessary. Further, indeed, expert systems and Markov models [6] have a long history of synchronizing in this manner. Combined with Internet QoS, it visualizes an analysis of checksums.

We proceed as follows. Primarily, we motivate the need for telephony. Next, we place our work in context with the existing work in this area. Continuing with this rationale, we place our work in context with the existing work in this area. In the end, we conclude.

2  Related Work

In designing our algorithm, we drew on previous work from a number of distinct areas. Similarly, Maruyama and White described several optimal methods [7], and reported that they have tremendous lack of influence on heterogeneous configurations [8]. We had our approach in mind before P. Nehru et al. published the recent little-known work on the evaluation of the UNIVAC computer [9]. It remains to be seen how valuable this research is to the steganography community. The choice of erasure coding in [10] differs from ours in that we simulate only important theory in our system [11,12]. As a result, the application of Sun [13,8,14,11,15] is an unproven choice for congestion control [5].

While we know of no other studies on lossless technology, several efforts have been made to emulate checksums [12]. Thusly, if throughput is a concern, our algorithm has a clear advantage. Recent work [16] suggests a heuristic for developing flexible modalities, but does not offer an implementation [17,18]. The original approach to this problem by Jones et al. [17] was adamantly opposed; contrarily, such a hypothesis did not completely realize this mission [1]. Thus, the class of applications enabled by Konze is fundamentally different from related methods [19].

3  Decentralized Epistemologies

Next, we propose our design for showing that our methodology runs in O(logn) time. Our system does not require such a private development to run correctly, but it doesn’t hurt. Figure 1 diagrams an architecture detailing the relationship between our framework and the UNIVAC computer [20]. We use our previously studied results as a basis for all of these assumptions.

Figure 1: Our system’s interposable development.

Reality aside, we would like to simulate a framework for how Konze might behave in theory. This seems to hold in most cases. On a similar note, we instrumented a trace, over the course of several years, demonstrating that our architecture is not feasible. Despite the results by Juris Hartmanis, we can prove that IPv4 can be made homogeneous, electronic, and game-theoretic. Even though information theorists mostly assume the exact opposite, our algorithm depends on this property for correct behavior. Thus, the framework that our method uses is solidly grounded in reality.

Reality aside, we would like to deploy a framework for how our methodology might behave in theory. We postulate that each component of our application learns reliable models, independent of all other components. We show a flowchart showing the relationship between Konze and the technical unification of congestion control and the Internet in Figure 1. This is a structured property of Konze. The question is, will Konze satisfy all of these assumptions? Exactly so.

4  Implementation

After several minutes of difficult hacking, we finally have a working implementation of our algorithm. Since Konze is maximally efficient, programming the codebase of 56 Ruby files was relatively straightforward. Furthermore, we have not yet implemented the hand-optimized compiler, as this is the least natural component of our framework. We skip a more thorough discussion until future work. Furthermore, statisticians have complete control over the codebase of 57 Java files, which of course is necessary so that the lookaside buffer and red-black trees are often incompatible. Since Konze is copied from the analysis of SMPs, programming the hand-optimized compiler was relatively straightforward. Such a hypothesis is entirely an important purpose but fell in line with our expectations. Overall, our heuristic adds only modest overhead and complexity to related linear-time systems.

5  Evaluation

Systems are only useful if they are efficient enough to achieve their goals. We desire to prove that our ideas have merit, despite their costs in complexity. Our overall evaluation approach seeks to prove three hypotheses: (1) that massive multiplayer online role-playing games no longer affect expected popularity of Internet QoS; (2) that the UNIVAC of yesteryear actually exhibits better mean popularity of superblocks than today’s hardware; and finally (3) that write-ahead logging no longer affects performance. We hope to make clear that our doubling the RAM space of topologically permutable theory is the key to our performance analysis.

5.1  Hardware and Software Configuration

Figure 2: The mean latency of our methodology, compared with the other heuristics.

A well-tuned network setup holds the key to a useful performance analysis. We ran a deployment on CERN’s desktop machines to measure J. Quinlan’s analysis of erasure coding in 1970. First, we tripled the mean complexity of our network. Second, we removed 10Gb/s of Wi-Fi throughput from our system. We removed more RAM from our 1000-node overlay network to disprove the simplicity of operating systems. Had we deployed our desktop machines, as opposed to deploying them in a chaotic spatio-temporal environment, we would have seen improved results. Finally, we reduced the USB key throughput of our system to understand epistemologies.

Figure 3: The median complexity of Konze, compared with the other methodologies.

When P. Harris patched ErOS Version 1.9’s traditional API in 2001, he could not have anticipated the impact; our work here attempts to follow on. We implemented our memory bus server in embedded Lisp, augmented with independently partitioned extensions. All software was linked using a standard toolchain built on David Clark’s toolkit for independently exploring provably randomized, disjoint tulip cards [21]. On a similar note, all software components were hand hex-edited using a standard toolchain built on the American toolkit for opportunistically architecting extremely disjoint Motorola bag telephones. All of these techniques are of interesting historical significance; R. Jackson and D. Moore investigated an orthogonal heuristic in 1935.

5.2  Experimental Results

Figure 4: These results were obtained by Sun [22]; we reproduce them here for clarity.

Given these trivial configurations, we achieved non-trivial results. Seizing upon this contrived configuration, we ran four novel experiments: (1) we dogfooded Konze on our own desktop machines, paying particular attention to ROM space; (2) we measured Web server and instant messenger throughput on our mobile telephones; (3) we deployed 87 Macintosh SEs across the 100-node network, and tested our multi-processors accordingly; and (4) we measured DNS and instant messenger throughput on our permutable overlay network. All of these experiments completed without access-link congestion. Even though it at first glance seems perverse, it has ample historical precedent.

Now for the climactic analysis of the second half of our experiments. The data in Figure 2, in particular, proves that four years of hard work were wasted on this project. Continuing with this rationale, operator error alone cannot account for these results. Further, the curve in Figure 3 should look familiar; it is better known as hij(n) = logloglogn.

We next turn to the second half of our experiments, shown in Figure 3. Note how simulating symmetric encryption rather than deploying them in the wild produce less jagged, more reproducible results. Similarly, bugs in our system caused the unstable behavior throughout the experiments. On a similar note, note the heavy tail on the CDF in Figure 2, exhibiting weakened popularity of context-free grammar.

Lastly, we discuss the second half of our experiments. The curve in Figure 2 should look familiar; it is better known as f(n) = n. Second, we scarcely anticipated how accurate our results were in this phase of the evaluation. Note that gigabit switches have less discretized sampling rate curves than do autogenerated access points.

6  Conclusion

Our framework will surmount many of the problems faced by today’s physicists. Konze cannot successfully enable many SCSI disks at once. On a similar note, we verified that scalability in Konze is not an obstacle [23]. We also constructed an algorithm for amphibious technology [14]. We expect to see many statisticians move to developing Konze in the very near future.

References

[1]

R. Thompson, “Construction of SMPs,” in Proceedings of the USENIX Technical Conference, Sept. 2004.

[2]

M. Takahashi, “Deployment of virtual machines,” in Proceedings of the Workshop on Perfect, Real-Time Epistemologies, Apr. 1998.

[3]

A. Pnueli, D. Patterson, vahid moein, W. M. Sivasubramaniam, and N. White, “Contrasting 802.11b and thin clients,” in Proceedings of the USENIX Security Conference, Mar. 2005.

[4]

B. T. Brown, “On the visualization of consistent hashing,” in Proceedings of FPCA, May 2002.

[5]

H. Williams, D. Clark, M. V. Wilkes, R. Floyd, S. Venkatakrishnan, J. Fredrick P. Brooks, M. Watanabe, D. Patterson, and O. R. Zheng, “The relationship between IPv7 and IPv6,” in Proceedings of SOSP, Jan. 2005.

[6]

vahid moein, R. Tarjan, T. Garcia, H. Nehru, J. Ullman, J. Fredrick P. Brooks, T. White, J. Dongarra, D. Culler, and Q. Zhou, “Refinement of I/O automata,” Journal of Metamorphic Symmetries, vol. 93, pp. 72-81, June 2005.

[7]

vahid moein, J. Anil, S. Bhabha, and vahid moein, “Deconstructing IPv6,” in Proceedings of IPTPS, Mar. 2004.

[8]

D. Brown, “On the analysis of kernels,” Journal of Perfect Symmetries, vol. 40, pp. 20-24, June 1992.

[9]

L. Subramanian, M. O. Rabin, R. Karp, B. Lampson, N. Wirth, C. Darwin, N. Raman, M. Welsh, and E. Li, “Towards the study of hash tables,” in Proceedings of the WWW Conference, Sept. 1997.

[10]

L. Lamport, “An evaluation of thin clients with Jet,” Journal of Multimodal Methodologies, vol. 51, pp. 44-51, June 2000.

[11]

S. Floyd and L. Subramanian, “Towards the analysis of hash tables,” in Proceedings of the Symposium on Empathic, Relational Models, Dec. 2005.

[12]

X. Shastri, J. Moore, K. Thompson, and K. Nygaard, “An exploration of Moore’s Law using Brasse,” in Proceedings of the Workshop on Secure Information, May 1991.

[13]

F. Bose, “SlyIle: A methodology for the development of spreadsheets,” in Proceedings of PLDI, Nov. 2005.

[14]

G. Sun, “The influence of distributed algorithms on software engineering,” in Proceedings of the Conference on Secure, Robust Information, June 2005.

[15]

S. Cook and J. Hartmanis, “A development of the partition table,” CMU, Tech. Rep. 846, Sept. 2003.

[16]

vahid moein and C. Hoare, “Virtual machines considered harmful,” in Proceedings of OOPSLA, July 1994.

[17]

E. Miller, “The influence of psychoacoustic information on complexity theory,” Devry Technical Institute, Tech. Rep. 290, July 1996.

[18]

R. Rivest, “Pervasive, permutable methodologies for interrupts,” Journal of Symbiotic, Psychoacoustic Archetypes, vol. 68, pp. 41-57, Apr. 2005.

[19]

K. E. Wu, Z. C. Nehru, L. Johnson, H. Simon, H. Levy, M. Blum, and B. Brown, “On the refinement of Internet QoS,” in Proceedings of NOSSDAV, Dec. 2002.

[20]

L. Sasaki, “A synthesis of information retrieval systems,” Journal of Virtual, Ubiquitous Technology, vol. 5, pp. 44-59, Aug. 1977.

[21]

vahid moein and T. Lee, “On the understanding of XML,” in Proceedings of JAIR, Aug. 1997.

[22]

O. Dahl, S. Hawking, K. Sun, T. Wilson, and A. Einstein, “The influence of empathic technology on Bayesian networking,” in Proceedings of the Workshop on Data Mining and Knowledge Discovery, Sept. 1991.

[23]

I. Zheng and E. F. Wilson, “Boolean logic considered harmful,” in Proceedings of the Conference on “Smart” Technology, June 2005.

Sylvia Plath “Daddy” (poem)

Everybody has had, or will someday have, someone close to them die, and it is one of the worst things in the world. I am close to my grandmother, and I wouldn’t know what to do if I didn’t have her in my life. Would I survive? That is the question I always ask myself. The speaker and I have similarities; the only difference between us is that her father died. You never know who it’s going to be: it could be your sister, your mother, your grandmother, and so on. That is why I think Sylvia Plath’s “Daddy” is about how she lost the one person she was close to and just couldn’t handle it. As you read on, I will be telling you what I think about the poem and how Sylvia Plath uses “Daddy” to show her emotions about her father’s death.

Sylvia Plath was a novelist and a poet who expressed her deep feelings about death, nature and her view of the universe. She was born on October 27, 1932 in Boston. Her father, Otto Plath, was a professor at Boston University and an expert on bees; in 1934 he published “Bumblebees and Their Ways.” Sylvia was impressed by the way her father handled the bees. She was only eight years old when her father died from complications of diabetes; before his death he was known as an authoritarian. His death left her so full of guilt and despair that she promised herself she would never speak to God again. Her mother, Aurelia, worked two jobs to support Plath and her brother, Warren. After Plath’s death, her diary revealed her hatred towards her mother. She studied at Gamaliel Bradford Senior High School, now called Wellesley High School, where she was an intelligent, well-adjusted student, and many students admired her beauty. Sylvia Plath created this poem to mirror her own personal life. This biographical poem reveals the dramatic events Plath faced in regard to her father; it also represents the importance of freedom.

At the beginning Sylvia writes, “you do not do, you do not do/any more, black shoe” (1-2). Plath is trapped in a shoe that belongs to her father, one she cannot live in anymore. The line is reminiscent of the English nursery rhyme about the old lady who lived in a shoe. The following lines give proof of her entrapment and the suppression caused by her father: “For which I have lived like a foot/ for thirty years, poor and white/ barely daring to breathe or achoo” (3-5). “Barely daring to breathe or achoo” lets the reader know that Plath has not been free; for thirty years she has been trapped, haunted, and imprisoned by her father, and scared even to speak about it. Sylvia Plath’s father played a larger-than-life role in her life. Although he died when Plath was just a child, only eight years old, this domineering, powerful Republican German man was inescapable for Sylvia Plath.

This poem can also be viewed as being about the individual trapped between herself and society. Plath weaves together a series of figures – a father, Nazis, a vampire, a husband – and then holds them all accountable for history’s horrors. In it the speaker comes to understand that she must kill the father figure in order to break free of the constraints it places upon her. In particular, these constraints can be understood as patriarchal forces that enforce a strict gender structure. She realizes what she has to do, but it requires a sort of hysteria: in order to succeed she must have complete control, since she fears she will be destroyed unless she totally annihilates her antagonist. “Daddy” is also perhaps Sylvia Plath’s best-known poem. It has elicited a variety of distinct reactions, including feminist praise of its unadulterated rage towards male dominance. It has been reviewed and criticized by hundreds of scholars, and is upheld as one of the best examples of confessional poetry.

When her husband came into her life, he was just like her father; he also abused her. She wanted to kill her father but never got the chance to. “But they pulled me out of the sack, and they stuck me together with glue. And then I made a model of you, a man in black with a Meinkampf look” (63). She made a model of her father by finding a husband who was just like him. Her husband was just as abusive as her father, so since she couldn’t kill her father she decided to kill someone just like him, because she wanted revenge.

Metaphor plays a major role in this poem, and strong metaphors run throughout it. Shoes and feet are a recurrent image, and they take on different nuances of meaning as the poem proceeds. Commonly a shoe protects the foot and keeps it warm; in this poem, however, the shoe is a trap, smothering the foot. The adjective “black” suggests the idea of death, and since the shoe fits tightly around the foot, one might think of a corpse in a coffin. Plath thus feels at once protected and smothered by her father. Later, the black shoe emerges as a military “boot” (line 49) when the father is called a Nazi.

The poem has an irregular rhyme scheme. “Daddy” is not a free-flowing poem, because it can be split into three separate parts. The tone is that of an adult engulfed in outrage, and the outrage at times slips into the sobs of a child. This is evident in Plath’s repeated use of the word “daddy” and in childlike repetition such as “You do not do, you do not do” (1) and “Daddy, daddy, you bastard” (80). Fear from her childhood moves her in directions that take her far from herself.

“Daddy” is a negative, dark poem. However, as the conclusion of the poem makes clear, Plath was able to resolve her conflicts, and she brings a great amount of power within the poem to the reader. One can see this in her use of vivid metaphor, imagery, rhyme, tone, and simile as major poetic devices. She finishes the poem with a powerful “Daddy, daddy, you bastard, I’m through” (80), showing that she has finally reached freedom. Overall I felt this was a distressing poem, which left me with no other feeling than shock, as I cannot understand how anyone could express their feelings about their father in this way. When I first read the poem I found the subject matter very disturbing and did not want to continue reading. However, the clever use of language techniques such as imagery and rhyme held my attention and ensured that I finished reading it and actually appreciated it, in spite of the fact that I was shocked by its content.

Haass’s prediction of a nonpolar international system

With the end of the Cold War, many suggestions have been offered to define the structure of the international system: the bipolar system, the unipolar system, the multipolar system, and now a nonpolar system. The latter has been postulated by Ambassador Richard N. Haass, former director of policy planning at the Department of State, who predicts that the defining characteristic of the 21st century is turning out to be nonpolarity, in which the United States is joined by increasingly powerful states as well as other centers with meaningful power. He elaborates that in this century a number of non-state actors will also influence the behavior of major governments ‘from above by regional and global organizations, from below by militias and from the side by a variety of nongovernmental organizations and corporations’ (Haass, 2008). Haass’s article examines what nonpolarity is, how it materialized, how it differs from other forms of international order, its consequences, and how America should respond to its development. In this paper I will outline the key points of Haass’s article and present his arguments. I will argue that the world is not becoming nonpolar and that the United States is not in decline, and conclude by evaluating Haass’s article and bringing other theoretical insights into the debate.

Haass’s article examines the nature of nonpolarity and attempts to explore the implications of a nonpolar system. He proclaims that today’s world may appear to be multipolar but is not, arguing that nation states have lost power and preeminence with the advent of non-state actors possessing meaningful power; the world is becoming less state-centric, which he sees as the reason for the decline of the United States’ position and economic dominance in the world. His article contributes to the wider debate by giving nonpolarity a place within International Relations discourse. Haass breaks his article into four main parts: the first provides the reader with a newer world order, defining nonpolarity and distinguishing it from other forms of international system; the second presents the decline of America’s unipolar moment; the third examines the dangers of nonpolar disorder and its implications; and the fourth presents Haass’s recommendations as to how America should respond to its development.

Haass defines nonpolarity as a world dominated not by one, two, or even several states but rather by dozens of actors possessing and exercising various kinds of power (Haass, 2008). He argues that the world today is becoming increasingly nonpolar, with no single dominant power, and less state-centric, with the rise of non-state actors holding meaningful power: terrorist organisations, global media companies such as CNN and the BBC, NGOs such as Doctors Without Borders, and so on, although Haass does not give an elaborate description of how these constitute a threat to global security. Robert Manning, by contrast, asserts that at any moment in history a range of state and non-state actors affect particular issues differently, which does not mean there are no concentrations of power capable of shaping events and outcomes. He gives the examples of the G20 on financial issues, the six-party talks on North Korea, and public-private partnerships in which governments, business and NGOs take initiatives such as arranging affordable HIV/AIDS treatments and malaria pills for less developed countries. He further states that although nation states are eroded from above by multilateral arrangements and from below by non-state actors such as those mentioned by Haass, the state still remains the central actor in world affairs. The idea of a nonpolar world may be useful for bringing focus to the complexities of the international system, but it is off the mark if intended to characterize the historical period we are in (Manning, 2009).

Haass argues that America is in decline with the rise of many other power centers and nation states, such as the BRICS, the Middle East, East Asia and so on. Though he recognizes America’s military strength, he takes its loss of primacy for granted; these rising powers, though they may pose a challenge, do not threaten the United States’ economic dominance. To support my argument, authors like Ely Ratner and Thomas Wright, in their article ‘America’s not in decline, it’s on the rise’, assert that America is recovering from the financial crisis whereas emerging powers are still experiencing troubles. Brazil’s growth rate has fallen from more than 7 percent in 2010; the currencies of Brazil, India, Indonesia, South Africa, Iran and Turkey are fragile and very vulnerable to high inflation, large deficits, low growth and a downturn in China, and these countries may soon face international financing problems; there have been protests in Brazil over wasteful government spending; Russia looks more authoritarian by the day; and the Chinese Communist Party has been trying to crack down on journalists, academics and bloggers in an attempt to control the discontent that accompanies slower growth and painful economic reforms. The BRICS, the Shanghai Cooperation Organisation and IBSA continue to disappoint, the Middle East will continue its painful and bloody revolutions, and the EU appears increasingly unable to move beyond protracted stagnation, eroding its ability to play a constructive role in world affairs while still depending on the US for military support. At the same time the United States is experiencing a turnaround of fortunes: its unemployment rate has fallen, its energy revolution is poised to overtake Russia as the world’s largest producer of oil and natural gas, and the US military is the most technologically advanced and remains the linchpin of the international community through its diplomacy, economic pressure and the spectre of military action (Ratner and Wright, 2013). Dorothy Grace Guerrero also asserts that despite impressive economic figures, it is misleading to think the rise of new powers like the BRICS means they will soon rule the world the way the United States has been doing. The US is still the most powerful state, and the rise of new powers does not necessarily mean they seek to assume the hegemonic role of the US; it is more likely that a multipolar world means a new mix of leading countries will define the global political economy together with the United States in the lead (Guerrero, n.d.).

Haass’s inability to elaborate on his initial proclamation, and his failure to include scenarios or case studies to support his argument, weaken the article. For instance, he mentions Al Jazeera, the BBC and CNN as powers but does not elaborate on how or why they constitute a threat to the United States in a nonpolar world. The author does not make reference to his sources or use any relevant source to back up his argument, which makes evaluation very difficult. I made numerous searches, but writings on nonpolarity are very uncommon, which shows that the debate on this topic is extremely limited and leaves most of the article resting on his theoretical ideology and perspective. As a result, the article at several points falls short of placing Haass’s arguments into perspective; instead it reads as an attempt to convince the reader that we are in a nonpolar world even though there has been little or no debate on the issue. If one were to compare Haass’s points to an example of one’s own choosing, this could lead to a misinterpretation of his argument.

Although it is difficult to place Haass in any one IR school of thought, his prediction of a nonpolar international system contributes to the wider debate in international relations. His suggestion that a nonpolar world will be dangerous is somewhat true, because a world with numerous power centers and many non-state actors possessing meaningful power and trying to assert their influence will be extremely violent: it will be especially difficult to erect collective responses to regional and global challenges, to make institutions work, and for states to reach agreement. The promotion of global integration, and creating means to ensure the close and continuing collaboration of a core group of democracy-loving nations, should reduce the danger. Even readers who do not agree with Haass’s suggestions will surely learn from his illustration of the developments likely to shape the international order that takes over from the present multipolar world, whether what emerges is nonpolarity or disorder. As Joseph Nye has noted, however, it is not possible for this article or any other to foresee the future, because there are many possible futures dependent on unpredictable scenarios, and they play a larger role the further out one tries to look.

Israeli-Palestinian conflict with an alternative narrative of the shadow of ICC

This section will mainly outline the Israeli actions in the occupied Palestinian territory insofar as they may constitute a crime of aggression; the theme analysed in this chapter is the Israeli-Palestinian conflict seen through the alternative narrative of the shadow of the ICC. It is argued that the absence of recognition of the human rights abuses committed by Israel during and after the 1948 War is a crucial obstacle to the resolution of the Israeli-Palestinian conflict. A short historical background to the conflict is intended to emphasise the absence of political solutions, while historians have effectively shown the injustices that the 1948 and more recent wars produced.

1. History of the Palestinian-Israeli Conflict

The Palestinian-Israeli dispute is one of the most controversial conflicts of modern times, dating back many decades. The current conflict is becoming more tense each day, and the two sides must therefore reach an agreement before the violence escalates further.

The Israeli-Palestinian issue has been the subject of some of the most extensive peacebuilding efforts in the world. Recent events in the Middle East between Israeli extremists and the Palestinian population are rooted mainly in this historical past. United Nations Resolution 194 unequivocally states that there is an obligation to allow the Palestinian refugees to return to the homes from which they were expelled in 1948. Nonetheless, the problem of the Palestinian refugees has been consistently ignored throughout the ‘peace process’. Even when the Oslo process seemed to produce some change on the ground, it was in essence a settlement that overlooked the fate of the Palestinian refugees and of the Palestinian minority in Israel. Furthermore, the Geneva Accord is entirely silent on the issue, and the various informal peace agreements recognize historical responsibility but vary in their acknowledgment of blame. Although politically the issue has not been properly addressed, historians have documented the problematic outcome of the 1948 War. Numerous peacebuilding projects have been conducted over the past 20 years to resolve the conflict between Israelis and Palestinians, including programmes, interviews, films and initiatives intended to assist the official peace process. Meaningful changes may have taken place thanks to these projects, yet the peacebuilding process as a whole has not had the impact on Israel and Palestine that was expected. Several wars have arisen from the conflict between the two peoples, beginning in 1948. The Palestinian people are today not only the victims of, but also the fighters against, Israeli forces that threaten human rights on an almost daily basis. While these activities were being conducted, the Oslo process fell through, and the Israeli army increased the violence against Palestinian civilians and built the separation barrier, which led to yet more tension.

Following the advent of the Israeli New Historians, the consensus in the literature is that the 1948 War led to approximately 750,000 people being expelled or fleeing out of fear, the destruction of more than five hundred villages, and the demolition of a dozen towns. Up until the 1980s the traditional Zionist narrative was practically unchallenged, and there had been no critical historical analysis in Israel of the 1948 War; in the words of Benny Morris, ‘Everything that had preceded it was just memoirs’. The traditional historical account suggested that the war was a “David versus Goliath” conflict and that the Palestinians left on the orders of Arab leaders, having been promised that they would shortly return. Each of these elements of Israel’s creation myth was contested by the New Historians. In particular, the Israeli historian Ilan Pappé, a member of the New Historians, famously used the term ‘ethnic cleansing’ to describe Israel’s policies towards the Palestinians during that period. Yet even with the emergence of the New Historians, Israeli society’s understanding of the 1948 War did not fundamentally change.

Several bitter incidents have occurred in the Palestinian territories as a result of the conflict, and they bear directly on the current situation between the two parties. Palestinian civilians are killed almost daily by Israeli bombing and by the Israeli army in the Palestinian territories, and Israeli extremist groups cause much bitterness on the Palestinian side of the conflict. Recently, the Israeli army has expanded its restrictions on the Palestinian population, making it ever harder to find a solution. Several developments have paused the peace process and delayed negotiations because of the current status of the West Bank, the Gaza Strip and the land of Palestine. As an outcome, Palestinians and Israelis hold more mistrust and hatred of each other and are more isolated from one another than at any time before the 1990s. An in-depth knowledge of the history of both sides is required to grasp the Israeli-Palestinian conflict.

The Background on the Conflict

In order to form an opinion on this issue, one must examine the factual historical background of the Israeli-Palestinian conflict. Though its origins go back to the end of the 19th century, under Ottoman and later British rule, the establishment and expansion of Israel from 1947 onwards is generally seen as the beginning of the conflict.

For Israelis, 1948 is a year in which two contradictory things happened. On one hand, Zionists claimed they had fulfilled an ancient dream of returning to a homeland after two thousand years of exile. On the other hand, 1948 arguably also marked the year in which Israel began committing severe human rights violations against Palestinians. As a result, recognizing the Palestinians as victims of Israeli action is deeply traumatic: acknowledging that Israel is responsible for the displacement of 750,000 Palestinians raises numerous ethical issues with crucial implications for the future. In these terms, losing the status of victimhood has political consequences internationally, but more significantly it has existential repercussions for the Israeli psyche. The conflict pitted Zionism against Palestinian nationalism. Border issues, security, the governance of Jerusalem, mutual recognition, and Palestinian freedom of movement were the main points of contention between the two peoples, and these issues intensified until conflict became a daily occurrence. The Zionists believed that Palestine was their land, part of their historic homeland of Israel; the Palestinians, however, had already inhabited the place for centuries, and so the two nations’ claims collided.

As stated above, the modern phase of the Israeli-Palestinian conflict began when the United Nations proposed the partition plan for a Jewish state in 1947. The Israelis accepted the plan, but the Palestinians would not agree to it, believing it unfair to give up their own land. Since then, numerous wars have occurred between Israel, the Palestinians and neighbouring Arab states: in 1948, 1956, 1967, 1973, 1982 and so on.

This process of material forgetting was prominent from 1948 onwards and continues today. It has been documented that new Jewish villages were at times established on the ruins of Palestinian villages, and recreational parks and nature reserves were built on the remains of Palestinian lands and houses. Still today, Palestinian houses are frequently demolished and new Jewish settlements built in their place. Recently, Jerusalem’s municipality has been trying to demolish the entire Palestinian neighbourhood of Issawiya in East Jerusalem, claiming the need to establish a national park. Lastly, symbolic forgetting entails the production of a new symbolic map by changing the Arab names of cities, villages and neighbourhoods to Hebrew names, thus domesticating them into the dominant narrative. This essay also considers the conflict in recent history, with brief attention to the involvement of the United States. It is no secret that the US has a close relationship with Israel. Given that the US is currently one of the biggest powers in the world, it has fueled the controversies surrounding the United Nations Fact-Finding Mission. The United States uses these issues to push its own agenda and benefit from them, while the Palestinians have been fighting the Israelis for decades over Palestinian land, which they hold was theirs before the Jews arrived.

The USA's position would strike anyone as highly controversial, to say the least. With escalating violence producing some 20 Israeli casualties against over 600 on the Palestinian side, the US seems blindly supportive of Israel's actions instead of urgently pushing for a cease-fire. Israel thus continues to receive great support from the United States. Much has happened since the creation of Israel, but even a glance at these sixty-nine years of violent history shows that the events in Palestine are rooted in the disagreements between Palestinians and Israelis, and that the White House's lack of leverage has contributed to the failure of the many attempts made in this direction. There are, unfortunately, few prospects of ending this bloodshed. For the time being, therefore, the conflict between Palestinians and Israelis appears to be an ongoing problem with no solution in the near future.

According to Israel, the territory of Palestine was promised to their fathers by God, so it actually belongs to them and not to the Palestinians. The Palestinians, however, believe that it belongs to them because they have lived in Palestine for centuries.

There is a constant fight between Israel and Palestine over the religious history of the land. The Palestinians consider the West Bank their own home and will continue to fight the Israelis, who have occupied their territory for more than 30 years. It is unjust that the West Bank is ruled by Israelis over Palestinians who inhabited it long before Israeli control.

The battle between Palestinians and Israelis over the West Bank has continued almost non-stop in recent years. The territory is bordered by Israel on three sides and by the Jordan River on the fourth. Israel has occupied the West Bank since 1967, even though it is Palestinian territory. The majority of Palestinians still live in the West Bank, yet it is controlled by the Israeli government, and its Palestinian inhabitants appear trapped in a merciless circle that at this point seems to have no end. They rightfully still believe that the West Bank belongs to them, as it always has, and most of the Palestinian victims of the conflict have lived there since 1967. Fighting has broken out between the two sides many times over the use of the land, and since the 1990s parts of the West Bank have been administered by Palestinians without real control over it. Palestinians feel betrayed by Israelis and naturally want their territory back; they believe Israel should return control of the West Bank to them. If the Israelis continue to control Palestinian territory, only bad things will come of it. Palestinians are also furious that the occupation of their land by Israeli forces has turned them into refugees in their own land.

In 1993, Norway secretly brokered a peace agreement between the Palestinians and the Israelis in the hope of bringing the violence to an end. The chairman of the PLO and the prime minister of Israel shook hands at the White House in front of President Bill Clinton to seal the agreement. This was considered the greatest step towards peace between the two nations in a long time, and almost everyone believed that the fighting was going to come to an end. The agreements were significant because it was the first time Israel had ever accepted the Palestine Liberation Organization as the representative of the Palestinians.

The Oslo Accords aimed to pull the Israeli army out of the West Bank and Gaza and to create an opportunity for Palestinians to govern themselves in their own territory. However, many Palestinians remained unhappy, and the conflict continued even after the agreement was signed by the two sides. When Sharon took over after the election, he immediately called off the Oslo agreement, and the conduct of the Israeli army worsened, with over 2,000 Palestinians reportedly tortured. Although the conflict has gone on for over three decades and some agreements have been reached, there are unfortunately still issues that need to be resolved before tensions escalate further between the two sides. Even though several peace negotiations have achieved a certain degree of success, the Israelis have repeatedly refused to acknowledge the Palestinian victims within their own territory.

Recent History of Israel-Palestine

Since the Gaza war of 2014, with over 600 Palestinian civilian casualties and more than 80,000 Palestinian refugees displaced across various shelter camps, this recent round of the Israeli-Palestinian conflict has shaken the world. In addition, we have all witnessed the astonishing phenomenon of further bloodshed only a few years later, and no one is able to predict what will happen next in the Palestinian territories. This initiative is the first of its kind in the Israeli-Palestinian conflict, though similar unofficial truth projects have been undertaken in other settings in the past. The commission is active while the conflict is ongoing, as the latest war in Gaza caused the deaths of more than 2,100 Palestinians and 73 Israelis, and a peace settlement seems more distant than ever. Moreover, the commission self-consciously avoids the element of reconciliation because the conflict has not yet been resolved. Since June 2014, media all over the world have provided more detailed news of the incidents in Gaza, and social media have been flooded with images and conflicting views. The Palestinian population, as usual, is paying the highest price. After the elections in Gaza in 2006, more brutal action took place: Hamas fired rockets at Israel, Israel fired back at Gaza, and these exchanges caused the deaths of hundreds of innocent Palestinian civilians in a brutal cycle of escalating violence.

Most importantly, the White House seems to put on the traditional blindfold every time Israel launches a serious attack against Palestinian civilians, and it acts as the international superpower behind the Israelis. Israel has always claimed the right to exist while denying the Palestinians their right to exist, controlling Palestinian life in Gaza: the airspace, the water, the movement of people in and out of the territory, and even the electricity supply. Israel claims to be defending itself, yet it kills innocent Palestinians by mercilessly firing hundreds of rockets into Gaza. Israel claims to be ensuring its own survival, yet it viciously ends hundreds of Palestinian lives. Washington should no longer tolerate such actions. It is time to address this issue and find a definitive solution that could finally put an end to these tragic events.

Palestinians were forced to leave their homes and their land, sent to refugee camps, and then ruled by Israelis in their own territory. Palestinians maintain that this forceful eviction was completely illegal and that Israel should answer for it.

Ray Bradbury – Fahrenheit 451

Books offer a wealth of knowledge to those who are curious and are willing to dig for the information. But what happens when that information is no longer available? Ray Bradbury provides a glimpse into this world with his book, Fahrenheit 451, by following Guy Montag on his journey from indoctrinated fireman to becoming a keeper of the knowledge he once swore to destroy. In its essence, Fahrenheit 451 is a story that shows how curiosity and the thirst for knowledge can have a very powerful impact on an individual as well as have lasting effects on the world. By studying the many symbols in this story and how they relate to each other, the reader can join Montag on his journey and, perhaps, leave with his or her own great thirst for knowledge.

Books are a symbol of knowledge, power, and freedom in the novel. Through fiction and non-fiction alike, books can help teach about the way the world works. If the powers that be in the novel want to keep the masses from learning, what better way than to take the books away? According to Faber, the only way a regular person can learn is through books (Bradbury 82). An ignorant population will have no power to change things when they become problematic. The authorities allow the media to flood the people’s minds with trivial information, making them feel like they do not need to learn anything. This allows the media to tell the people what to think and how to feel about things. When Montag meets Granger and the other scholars in the woods, they are watching footage of the police searching for him on a small portable television. Granger tells Montag that the police will find somebody else to identify as Montag to “save face” (Bradbury 142). They kill an innocent man just to say that they caught him. There is no question in anyone’s mind that it is Montag. If people were able to read and learn new information, they would be free to think for themselves. People would be able to investigate and ask questions. They would be free to make difficult decisions that affect themselves and others on a greater scale. The authorities would not have control over citizens’ minds. An informed populace is one where people can think for themselves and make their own decisions.

The authorities prevent society from being informed by burning books. The act of burning books represents the death of knowledge and free thought. The words in these books are the thoughts and ideas of people throughout history. Many of these authors risked everything to share these ideas with the world. These ideas instill emotional responses in people from joy to anger, and have inspired many to act. When the firemen come to burn the old woman’s books, the woman says, “’Play the man, Master Ridley; we shall this day light such a candle, by God’s grace, in England, as I trust shall never be put out’” (Bradbury 33). When Montag questions Beatty about her words, he states that it was a quote by a man named Latimer as he and a man named Ridley were burned alive “for heresy” (Bradbury 37). These men were burned for their beliefs because they did not go along with the norm of the time. The woman chooses to burn herself alive with her books rather than live a thoughtless life of conformity. Later, Montag burns Captain Beatty alive out of his own desire for freedom. By burning the books, the firemen are effectively taking away the ability for a person to think for himself or herself. Captain Beatty believes that, by getting rid of books and allowing the media to fill people’s heads with trivial information, the world is a more peaceful place (Bradbury 58). Of course, the reader knows that this is not the case. With the deaths of the old woman and Captain Beatty, as well as the war that is brewing in the background, this world is far from peaceful.

The bomber jets represent fire as a symbol for the destruction of society. The people of the town know that there is an impending war on the horizon. This is alluded to by the mention of the black bombers multiple times throughout the novel. Mrs. Phelps mentions that her husband has been called to war and will return within two days (Bradbury 90). This situation does not seem to bother Mrs. Phelps at all. There is also a broadcast that states explicitly, “War has been declared” (Bradbury 119). All of these allusions to war are being made, yet not one character in the novel is concerned about this looming threat. When the war comes, the bombers destroy the city. The bombs kill the people of the town, destroying the society that Montag was running from and leaving him and a small group of academics and scholars to rebuild it.

Fire is also a symbol of survival in the novel when Montag burns Captain Beatty and the hound. These two characters have caused Montag a lot of problems throughout the novel. By burning them and escaping the frenzy, Montag has assured his survival (Bradbury 113). It represents hope for the survival of knowledge when Montag meets Granger and the other academics gathered around a campfire in the woods. George Slusser states that Montag is “saved” by this fire because it does not burn anything (Slusser 1977). Up until this point, Montag only knew of the destructive nature of fire, but this fire was different in that it made him feel safe. David Mogen states in his essay, “Chapter 8: Fahrenheit 451”, that Montag’s meeting these men by the fire helps him to experience “the warmth of genuine community” (Mogen 1986). These men are following the same path that Montag himself is, memorizing as much information as they can to share with the world as it is rebuilt after the war. They want to build a stronger, more knowledgeable society. Granger confirms this when he says, “We’re remembering. That’s where we’ll win out in the long run” (Bradbury 157). He implies that the only way to get better is to remember what has already happened.

Technology in the novel symbolizes the fear prevalent in this society. The mechanical hound is the exact opposite of the idea of what a firehouse dog should be. The image of a firehouse dog invites thoughts of a helpful and loyal animal. This is an image that many do not fear. The mechanical hound is depicted as a spider-like robot with a proboscis that comes out of its foot that injects poison into its victims. This mechanical hound is a cold, robotic killing machine. Wayne Johnson states that the mechanical hound represents the “relentless, heartless pursuit of the state” due to it getting closer to Montag as he becomes more curious (Johnson 1980). It is a strong reminder of what can happen if the firemen discover that a person is hiding books. This creature instills fear in Montag from the beginning of the novel when it shows aggression toward him in the firehouse (Bradbury 23). This makes him wary of the hound. One would think that the mechanical hound would be on Montag’s side, seeing as he is a fireman. Instead, this mechanical hound becomes a source of paranoia for Montag, forcing him to always be on guard.

Technology also acts as a symbol for the lack of emotional connection in the novel’s society. The constant barrage of noise and images from the televisions and Seashell ear pieces keeps people from communicating with each other. Citizens are, in a sense, brainwashed by commercials. This is apparent when Montag is on the train and the advertisement for Denham’s Dentifrice begins to play. Everyone on the train, including Montag, begins to react to the tune. The narrator says that people are “pounded into submission” (Bradbury 75). By subjecting the population to mindless jingles and products, the powers that be can steer them away from more important things. They will have nothing to talk about except what they are fed.

Mildred represents the complacency and disconnection in the society that the characters of the novel live in. She has a ceaseless need for entertainment. She is constantly watching television or listening to her Seashells. She is one of the happy people who Beatty mentions when discussing the problem with books (Bradbury 58). The people in this society are very distracted. According to John Huntington, the taming of society in the novel is due to people not having access to the “traditional culture” that books contain (Huntington 1982). The people aren’t able to read books, so they are glued to the television. They are always entertained, but at what cost? The television walls almost turn the room into a colorful and noisy prison. The people’s hunger for entertainment and spectacle allows the powers that be to keep them pacified.

Mildred is unable to engage in thoughtful conversation. When Montag tries to talk with her, Mildred shows little interest in what he has to say. Mildred has more “meaningful” conversations with her interactive televisions than she does with her own husband. This is shown when Montag tries to speak to Mildred about her overdose of sleeping pills. When asked why she did this, she refuses to believe that she would have done it (Bradbury 17). She does not even remember that this event took place. The paramedics who came to revive her reveal that this type of thing is becoming a common occurrence.  Montag’s reaction to this situation shows that he is beginning to feel alienated by this society. Everyone is isolated from one another. Nobody is communicating how they feel to each other. Mildred even mentions that the people on the television screen are her “family” (Bradbury 69). The people in the novel just seem to exist. They are living lives full of entertainment and excitement, yet none of it means anything. For how “full” these people’s lives are, they are not nearly as happy as they seem.

Captain Beatty is a symbol of authority in the novel. He has an advantage over a majority of the population in that he knows that, despite his resentment for them, books are useful tools for learning. He is well versed in the subject matter that he helps to destroy. He uses this knowledge when he visits Montag at his home in an attempt to deter him from thinking about why books must be burned so that he can come back to work. Beatty explains to Montag that books make people feel things.  He believes that, since the characters in the books are not real and the writers are dead, books are a large waste of time. He sees them as a source of conflict (Bradbury 59). Rafeeq O. McGiveron says that Beatty “shows how intolerance for opposing ideas helps to lead to the stifling of individual expression, and hence of thought” (McGiveron). Beatty feels that people are better off when they do not have to think for themselves. If people are able to feel things from reading, they will begin to ask questions and think for themselves. If they can think for themselves, the powers that be will no longer be able to control them.

Beatty also represents the media in the novel. When he speaks to Montag about the books, he says that people are happy spending their leisure time watching television. They just want entertainment. They crave happiness, so the media keeps them from worrying (Bradbury 56). He explains to Montag that it is not the government that wanted to do away with books, but the people. In a way, he uses tactics of the media by giving Montag information in an attempt to misdirect him. This only seems to make Montag even more curious about the content that books provide.  Beatty becomes a part of the entertainment when he is killed by Montag.

The firemen represent censorship in the media. When they find out that someone has books, they come and destroy the books. It is a job for them, and they do it without question. The firemen are almost like soldiers. Jack Zipes equates the firemen to Nazis in relation to their uniforms and their burning of books (Zipes 1983). They take pleasure in this job. At the beginning of the novel, Montag is depicted wearing a smile as he torches the books (Bradbury 2). The firemen’s role in society is reversed in the novel. The men who once protected people and put out fires are now the men who start the fires. They burn the information that the people in power do not want the masses to have access to. This keeps the population from thinking for themselves, thus allowing them to be controlled. Without any other information available, there can be no resistance. People only receive the information that the government wants them to receive.

Clarisse is a symbol of curiosity and friendship. From the beginning of the story, she stays in Montag’s mind. She asks Montag a lot of questions, such as whether he is happy. These questions bother Montag, because he has never really thought about the answers before. Clarisse states that Montag always appears “shocked” by her questions (Bradbury 26). She makes him think about trivial things. She mentions why advertising billboards along the side of the road are as long as they are. These sessions of questions and answers leave Montag dumbfounded, but he begins to grow attached to Clarisse. According to Peter Sisario, Montag’s attachment to Clarisse was “sincere and true in a world hostile to honesty” (Sisario 1970). Montag sees Clarisse as a friend and enjoys his meetings with her. When she disappears, Montag begins to ask questions. Mildred mentions that Clarisse may have been killed, and Captain Beatty confirms this later on. Her death inspires him to think and act despite the consequences.

Clarisse also represents an ideal society in which communication and connection are welcome and essential. She and her family sit and talk instead of watching the television screens. Clarisse’s questions may bother him at first, but Montag becomes genuinely curious about Clarisse and her family. Montag shows this by looking toward their home and wondering what they could have to talk about. Clarisse’s family has something that Montag is lacking in his own. With the thoughts of Mildred’s overdose and the old woman burning herself alive, Montag needs to speak his mind. Clarisse unconsciously forces Montag to acknowledge this fact when she states, “People don’t talk about anything” (Bradbury 28).

Montag is a symbol of change in the novel. He begins as a loyal fireman who takes pride in his work. He has no problem performing the firemen’s duty of burning books. He is afraid to think for himself. When he thinks about whether or not he is happy, he is very unsure if he really is (Bradbury 8). The narrator reveals that Montag has a secret stash of books hidden away in the ventilation system of his home that, if found, will get him into trouble. The deaths of Clarisse and the old woman cause Montag to become curious about what is in the books. This is shown at the woman’s house, where Montag steals a book from the scene before the woman sets it alight (Bradbury 35). He may have had books hidden away at home, but this is the first time Montag shows interest in what he is hiding. He decides to share his secret with Mildred and shows her the books. This frightens her, as she worries about the consequences of their being caught. He shows courage by reading the book aloud while the mechanical hound searches around his home. Clarisse has given Montag the motivation to ask questions and, eventually, to turn fire against the firemen themselves. He changes from a proud but cowardly fireman with a secret to a man of action who desires to think and ask questions. With the war over and the media out of the way, Montag’s journey of self-discovery ultimately becomes a quest to share information with the world. He and the other professors will work to share their knowledge in the hope of building a better world. Each person in the group has memorized at least one story or part of a story. Their efforts may help prevent history from repeating itself.

Curiosity and the search for knowledge can have a very large impact on the world. The sharing of knowledge helps society to grow and become better and more open to change.

Asthma, Lung Cancer and Sinusitis

Asthma

Asthma is a common illness that causes the sufferer to cough and wheeze; the chest can become tight and they may experience difficulty breathing.  It can develop at any age, and people who develop it in childhood may grow out of it in their teens.  If managed properly, it need not affect quality of life.

Causes

It is not fully known why someone develops asthma, but there are a number of factors, and for some people it may be a combination of them, such as:

• There’s a family history of asthma, eczema or any allergies – for example, evidence shows that if one or both parents have asthma, you are more likely to have it.

• You have eczema or an allergy, such as hay fever (an allergy to pollen).

• You had bronchiolitis (a common childhood lung infection) as a child.

• You were born prematurely, especially if you needed a ventilator to help you breathe after birth.

• Your birth weight was low because you didn’t grow at a normal rate in the womb (this can be caused by various factors).

• Your mother smoked while she was pregnant.

• One or both parents smoked whilst you were a child.

• You spent prolonged periods around people who smoke.

• You were exposed to certain substances at work (known as occupational asthma).

• Hormones can affect asthma symptoms, some women first develop asthma before and after the menopause. (asthmauk.net)

Asthma is a result of the bronchi becoming inflamed and sensitive. When the sufferer comes into contact with a substance or particles that irritate the lungs, the airways narrow, the muscles around them tighten and phlegm is produced.

There are a number of irritants (triggers) that can cause this:

• House dust mites

• Animal fur

• Pollen

• Cigarette smoke

• Exercise

• Weather conditions

• Medication

• Viral infections (NHS.Choice.net)

Occupational asthma – This type of asthma is a consequence of substances you may be exposed to at work; the risk increases if you are exposed on a regular basis.  The most common substances are:

• Isocyanates (chemicals often found in spray paint)

• Flour and grain dust

• Colophony (a substance often found in solder fumes)

• Latex

• Animals

• Wood dust (NHS.Choice.net)

Signs & Symptoms

Symptoms range from very mild to severe.  In most cases they are only mild and you can lead a ‘normal’ life.  In some cases the symptoms are severe and will need medical treatment.  Symptoms can be worse during the night or first thing in the morning.  The common symptoms are:

• Wheezing

• Coughing

• Shortness of breath

• Tightness in the chest (asthma.uk.net)

In some cases the symptoms can become severely worse; this is referred to as an ‘asthma attack’. Some people are prone to these attacks, which can come on unexpectedly. They can also develop slowly, taking a few days before the full-blown attack happens.

The symptoms are the same as above but are more intense and the usual form of relief may not work, and in these cases medical treatment will need to be sought.

If asthma is managed properly, meaning you take the prescribed medication regularly and manage your exposure to the triggers, then the symptoms will be very mild, and in some cases you may be symptom free.

Lung Cancer

Lung cancer is one of the more common types of cancer and usually affects older people. The cause is usually known, and the most common cause is smoking.

Causes

There are two main types of lung cancer:

• Small-cell lung cancer – named this because the cells are small.  The usual cause is smoking, and it is rare for non-smokers to develop this form.  This type spreads fast.

• Non-small-cell lung cancer – the most common type, which has three sub-types: squamous cell carcinoma (affects the airways), adenocarcinoma (develops in mucus-producing cells) and large cell carcinoma (found in large, rounded cells).

Lung cancer normally starts in the lining of the airways into the lungs, the bronchi.  It is a result of some cells becoming abnormal; as they reproduce they produce more abnormal cells, which then form a lump or tumour.  In some cases the lump isn’t cancerous.  There are a number of causes, the main one being smoking.

Smoking – This is the main cause.  Smoke produced from tobacco contains over 60 toxic, cancer-causing substances.  Cigarette smoke isn’t the only culprit; the following products also increase the risk of developing lung cancer:

• Cigars

• Pipe tobacco

• Snuff (a powdered form of tobacco)

• Chewing tobacco

• Smoking cannabis – Cannabis contains cancer-causing substances.  Most cannabis smokers mix the cannabis with tobacco, inhale more deeply and hold the smoke in their lungs for longer.   (NHS.Choices.net)

Passive smoking – Regular exposure to someone else’s cigarette smoke increases the risk of lung cancer.  Sharing a home with a smoker increases the risk by 25%.

Radon – This is a natural radioactive gas produced by the decay of uranium in soils and rocks. It is usually safe outdoors, but it can build up inside some buildings.  If breathed in regularly it can cause lung cancer, especially if the person smokes.

Pollution and occupational exposure – Chemicals and substances used in certain workplaces can slightly increase the risk of developing lung cancer, these being:

• Arsenic

• Asbestos

• Beryllium

• Cadmium

• Coal and coke fumes

• Silica

• Nickel (NHS Choices.net)

Living in built-up areas increases exposure to car fumes, especially diesel fumes, which can increase the risk of developing lung cancer.

Symptoms

At the start most people will have no symptoms. As the cancer develops, the main signs are:

• Cough up blood

• Feel short of breath

• Have pain in your chest or shoulder

• Lose weight unexpectedly

• Feel tired

Less common symptoms include:

• Swelling of your face or neck

• A hoarse voice

• Broadening or thickening of the tips of your fingers (called clubbing)

(Bupa.net)

Sinusitis

An inflammation of the linings of the sinuses (air-filled gaps within the bones of the face, surrounding the nasal area).

Causes

The sinuses produce mucus which normally drains into the nasal cavity via small drainage channels.  When the lining of the sinuses becomes inflamed and swollen this can cause the sinuses and the nasal opening to become blocked.  Air is then trapped within the nasal/sinus cavity which causes the pressure to build up resulting in pain.  This is normally a result of a viral infection (cold).

There are a number of types of sinusitis:

Acute sinusitis – a sudden onset of cold-like symptoms such as a runny nose, stuffy nose and facial pain, normally lasts about a week.

Chronic sinusitis – a condition characterised by sinus inflammation symptoms lasting more than 12 weeks.

Recurrent sinusitis – several attacks within a year. (Boots webmd.net)

Signs & Symptoms

The pain can occur in different areas around the nasal cavity, depending on which sinuses are blocked.  The symptoms include:

Acute Sinusitis:

• Facial pain/pressure

• Nasal stuffiness

• Nasal discharge

• Loss of smell

• Cough/congestion

May also include:

• Fever

• Bad breath

• Fatigue

• Dental pain

Chronic Sinusitis

• Facial congestion/fullness

• A nasal obstruction/blockage

• Pus in the nasal cavity

• Nasal discharge/discoloured postnasal drainage

May also include:

• Headaches

• Bad breath

• Fever

• Fatigue

• Dental pain

Other symptoms include

• A blocked or stuffy nose

• Loss of the sense of smell or a reduced sense of smell

• Green or yellow mucus, which can drain down the back of the nose into the throat

• A fever, particularly in acute sinusitis

(Boots webmd.net)

Fashion photographer Nick Knight

Nick Knight, one of today’s most influential fashion photographers, works with the best-known stars in the world. Knight was born in 1958 in London and studied photography at Bournemouth & Poole College of Art and Design. He can fairly be called a fashion pioneer: he has photographed for Vogue and i-D, and he also shoots music videos, since he is a filmmaker as well. “Multitalented creative artist” is a phrase that describes him well.

When did he figure out that photography was the profession he wanted to pursue?

Knight actually started out on a medicine course, but when he realized he was not good at it, he quit. He took a gap year and discovered his passion for photography; he became hooked on it, and later he studied photography at college. His parents were open-minded, liberal and supportive. Both are now retired; his father was a psychologist and his mother a physiotherapist. Knight had planned to follow them into science by studying medicine.

How did his career start, and what does he think about his fame?

As an art student he brought out his first book, ‘Skinheads’, published in 1982. It immediately caught the interest of the editor of i-D (a well-known magazine), and he was then asked to shoot a series of 100 portraits for the magazine’s 5th anniversary, a request he could not refuse. The black and white portraits were a resounding success. After this success he worked with a number of art directors, and soon the most famous fashion brands were queuing up to work with him, including Calvin Klein, Dior and others. He thinks that fame is a weird, abstract concept that no one should hanker after. The famous people he knows and works with are still just people, he says. They are famous whether they want to be or not, but it is not necessarily beneficial to them.

How did he start with fashion photography?

In 1993 he shot fashion photography for Vogue by adapting ring flash photography to capture a model, creating an overexposed aesthetic; photographs like these had never graced the cover before. He was also recognized for his use of colour and pigmentation. ‘The most brilliant thing about photography is that it’s a passport into any social situation whatsoever,’ Nick Knight says. He has his own ideas about the concept of beauty and continually challenges conventional notions of it. Knight likes to push its limits, creatively and technically, and has no fear of taboos; film and photography are his passion. That is why he created SHOWstudio.com in 2000, a platform for fashion lovers featuring his projects and films. He also wanted to make his work accessible, so he streams live as he works. He thinks the internet has made fashion magazines obsolete because they no longer have a purpose. He doubts that people really want a physical object; he asks whether they truly want a shiny, smelly, floppy block in their hand. His answer: ‘I’m not sure they do.’

What does he think about new technology in his photographs?

He is most excited about mobile phones. Phones are becoming our screens for understanding fashion, as much as our computers were, he says. He believes that mobile can give you access: it is not the size of the screen or the quality but the emotional connection that a communication creates. He is also experimenting with 3D scanning and challenges the fashion world to work with these new technologies. ‘We’re not waiting for technology, technology is waiting for us,’ he says. But he still believes that the two main things an artist needs are your heart and your mind.

How does he work with famous stars?

It depends, he says. Some people have an idea and some do not. There are also stars with a team, who have a very fixed idea of what they want. It is different for different people. He does not call his work with stars collaboration; ‘it’s more me trying to see life through their eyes,’ he explains. He likes to get into their psyche.

Does he have artists who inspire him?

No. He likes to read books and that’s it.

He has also photographed flowers; what can he tell us about that?

He took a break from fashion to work with Chipperfield on Plant Power, a new gallery at the Natural History Museum in London where people can explore the relationship between people and plants. This break inspired him to work more with flowers and nature – the elegance of nature and the beautiful colours of the flowers. He also made another turn in his work, shooting larger women, elderly women and people with disabilities, challenging the fashion industry.

How does he work at his studio?

Knight has a team that works for him, but he is still the creative brain behind the photographs. He always shoots wearing the same outfit: jeans and a white shirt. The industry also criticizes him for his heavy use of post-production techniques. He says digital manipulation is just another tool, less profound than the lens you use or the angle. He never thought that photography shows reality; it is always a manipulation of reality.

What can we conclude…

Knight has had a major influence on the fashion world. He is a good example of a man who has worked hard for what he has achieved. He has his own vision and his own style, and that distinguishes him from the rest. That makes him unique, and he continues to inspire people.

The discourse of Steinbeck

Steinbeck celebrates the fragmentary nature of his society; he believes that these differences have fused together to build the main strengths of his nation, and thus we need to preserve people’s right to be different. We should pause for a while to consider that any totalitarian system spreads sameness everywhere to make the mission of controlling people easier. Anything that distinguishes itself is unacceptable and should be eliminated. Steinbeck works on reversing the attack and subverting this logic; he leads a movement of decentralization, a movement that aims at shaking certainties as it gives value back to the marginalized categories of American society. This reforming movement invokes a continuous war between authoritarian systems and their peripheral enemies. Steinbeck wants to distort the binary opposition of good and bad; blurring the lines that separate people will shorten distance and guarantee efficient communication among individuals.

This research study investigates the means by which the mainstream culture imposed its norms, beliefs, and assumptions on the wide majority of American society. In his work, Steinbeck intends to reread and decode these means. Power uses the concept of common interest as an excuse to legitimize its corruption. What Steinbeck has focused upon is the way authorities use the power of language to define concepts for people, for instance defining for them what happiness is and what a comfortable life is, falsifying truths and trying to subordinate people’s minds and control their choices through the media. Steinbeck is writing a text that answers totalitarianism back and calls for cultural resistance. Being part of the community, any kind of change or intellectual revolution demands from all of us a kind of involvement. Together we can change laws and rules overnight, we can reconstruct our reality; we simply need to dislodge the myth that power is undefeatable.

The discourse of Steinbeck is counter-hegemonic; he serves as a watchdog against any threat menacing his society. Steinbeck does not want his people to be hypnotized by the discourse of those in power. Spreading consciousness among common people is really important. Democracy is built upon awareness, not upon an educated elite leading a blind mob. The role of every writer is therefore to elevate the level of thinking of his people; he should not allow power to use hegemonic speech to control them. Thus, the role of an intellectual is to speak truth to power. This novel asserts the prophetic function of the writer as a truth revealer and as a catalyst who causes agitation, who brings things together to create new visions and motivates social change. A novelist is the memory of his nation; his novels should incite people to act, and their importance lies mainly in the unsaid, so readers have to be able to decode it. POWER DISCOURSE; LANGUAGE AS A TOOL OF RESISTANCE BY SUBVERTING POWER DISCOURSES FROM WITHIN.

The capitalist system represents a major threat to the real American principles of brotherhood, equality, and human rights, a danger that threatens the unity and solidarity of American society and of the next generations. Allen and Mary Ellen, for instance, are products of the consumer society; they adopt all the practices and beliefs of modern life. Among the priorities of the capitalist society something is missing: religion. Yet Mary seems to keep a presence of holiness in her life.

Ethan is the breadwinner of his family, yet he is ostracized from society. He masks his weakness by resorting to language games; language has been dislocated and no longer has the power to define, so everything he says is blurred and vague. He also escapes reality by talking to himself and minimizes every problem until it seems to bother him no more. Ethan is not in harmony with his world, which is why he uses madness as a refuge; he is unable to decide or to frame his identity. Ethan’s problem is a problem of recognition: he feels that his society does not recognize him as an autonomous individual after the loss of his ancestors’ wealth. Ethan has a double discourse; he subverts what he asserts. If he succeeds, it is his own effort, but if he fails, it is always someone else’s fault; he always shifts the blame. Ethan asserts the supremacy of rich people as if it were empirically proved and not to be negotiated or doubted.

Steinbeck could not have accomplished such an impressive fictional work without being inspired by the agonies lived by his people; I argue, therefore, that the majority of his well-known characters are nothing but reflections of real-life people. Yet Steinbeck’s skill appears when he combines the linguistic devices of humor and irony with his sense of responsibility towards his nation. Steinbeck foregrounds himself and defamiliarizes his work. What immortalizes a fictional work is its language; this novel will never wither, and it will always please its readers because it is beautiful in itself.

American society is internally divided, and this division is based mainly on economic features. For this reason, Steinbeck refuses the classification that categorizes people into first-class and second-class citizens on the basis that some individuals are economically underdeveloped.

Any dictatorial system sheds light on trivial matters in order to deflect attention away from core problems and especially from criticism of its regime. Food prices, safety scares, terrorism concerns, celebrity news, and media spectacles are all used by authorities to divert people’s attention from real problems.

Ethan gives himself a sense of nobility, honor, and value; he sometimes adopts the mainstream thinking. He is trying his best to adjust himself to the harsh conditions imposed by the mainstream culture of American society. Ethan has a very minimalist and reductionist reading of his history and his current situation: transvaluating, rewriting, and rediscovering oneself.

Now I want to argue that this idea has great importance for the issue I want to discuss in the second part of this chapter. These characters have been mutilated by the aspects of modern life; they became assimilationists. Manhood is now tied to wealth and no longer to masculinity and bravery. The end justifies the means in a world of race and brutality. AIMLESSLY DIASPORIC PEOPLE

Mary does not seem to side with anyone in the family; she fights fragmentation and longs for unity and harmony in her house.

Religion rationalizes death with rituals; it does not solve the problem. The world of death for Ethan is still undiscovered, AND THIS SHOWS THE LIMITATION OF THE HUMAN MIND.

Literature is about the shame of being a human being.

We are ruled by an idea rather than physical power, for any kind of tyranny has its own ideological machine that works to spread its ideas and maintain people under control.

For Steinbeck it’s a matter of commitment to make his people’s voice heard.

Legitimization of power use / it transcends / I believe that there are many problems with this statement, the first is… / it seeks a way to… / it disallows / this discourse seeks to dismantle /

Every piece of fiction Steinbeck wrote is a protest in itself.

Some would say that upending social classes remains a utopian idea.

AMERICAN EXCEPTIONALISM / THE CONCEPT OF THE SUBALTERN / CULTURAL CAPITAL IS STEINBECK’S MAIN FOCUS / Let’s use Derrida’s discourse analysis

The shortest road to the future is always through studying the past and learning from its fruitful experiences.

The self always gains value, identity, and uniqueness by setting itself off from the Other and by showing that it is morally at a higher level. For this reason, every character tries to dehistoricize and desocialize himself. Understanding the Other, therefore, is only possible if the self can in one way or another control the perspectives, assumptions, and ideologies of its own culture. ETHAN AND MARULLO

International business and the TRIPS Agreement

“International business is claimed to be as old as the history of mankind itself.” Even Hammurabi’s Code of Laws contained some rules related to business law as early as 2000 BC. Another example is the Rhodian maritime code, or Lex Rhodia, introduced by the Mediterranean community in the second or third century BC and used by Greeks and Romans; it is a renowned code which, on record, incorporated within its tenets the principles of maritime law and maritime insurance.

A business is any organisation that makes goods or provides services. Goods are physical products such as phones and cars, while services are non-physical products such as cleaning or security services. According to Companies House, a record-breaking 581,173 businesses were registered last year, an accelerating increase on previous years, with 526,447 and 484,224 recorded in 2013 and 2012 respectively. This statistic shows how rapidly business is developing at the domestic level. Business takes on another character when it crosses international borders: Robock and Simmonds defined international business as “a field of management training that deals with the special features of business activities that cross national boundaries”, while Daniels and Radebaugh define it as “all business transactions that involve two or more countries”. It is possible to divide international business into three categories: trade, investment (direct or indirect), and the licensing of technology and intellectual property (e.g. trademarks, patents, and copyrights), as used by well-known brands such as McDonald’s and Nike.
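As a rough check of the “accelerating increase” claim, the year-on-year growth implied by the cited Companies House figures can be worked out directly (the percentages below are derived only from the numbers quoted above, not from any additional source):

2012 → 2013: (526,447 − 484,224) / 484,224 ≈ 8.7% growth
2013 → last year: (581,173 − 526,447) / 526,447 ≈ 10.4% growth

So, on these figures, the rate of increase did indeed rise from one year to the next.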

There are various sources of international business law, whose role is to assist in the conduct of international business. Article 38 of the Statute of the International Court of Justice (1945) sets out the sources of international law: international conventions, international custom or general practice, general principles of law recognized by civilized nations, and judicial decisions and scholarly writing. In relation to international business, the law exists to create reliable standards for companies to follow. It is essential to establish order in international business law, and one purpose of international business law is therefore to establish standards, that is, the standardization of fundamental business practices worldwide. In international relations, the more significant international agreements are called treaties, such as the Agreement on Trade-Related Aspects of Intellectual Property Rights (1994), known as the TRIPS Agreement. The TRIPS Agreement serves the standardization of intellectual property law; David Nimmer described TRIPS as “the highest expression to date of binding intellectual property law in the international area”. Additionally, treaties of this kind serve to minimize risk, because every international business transaction carries some risk, and in the area of intellectual property international business law tries to minimize that risk through the TRIPS Agreement. For instance, consider a company that wants to enter a licensing agreement in a foreign country. The problem is that the company needs to know whether its intellectual property will be protected in that country or can easily be disclosed by rivals. If the foreign country is a member of the TRIPS Agreement, the risk on the company’s shoulders decreases; if it is not, the company may withdraw from doing business there because of the possible risks mentioned above. This problem arises especially in less developed countries, and the TRIPS Agreement creates a balance between developed and developing countries. Ratification of the TRIPS Agreement has played a significant role in intellectual property reforms in developing countries. The Agreement sets minimum standards for the protection of intellectual property rights such as copyright, trademarks, geographical indications, patents, industrial designs, integrated circuit layout designs and undisclosed information (known as trade secrets or confidential information). Ratifying the TRIPS Agreement is, moreover, a step towards, and a condition for, international peace. Closely related to this point, Article 67 regulates technical cooperation between members, especially between developed and developing countries, which is one of the highest priorities of the Agreement. Another feature of the TRIPS Agreement is its provision for dispute settlement: disputes arising under the Agreement involve an examination of the consistency of a Member’s domestic intellectual property laws with the TRIPS Agreement.
In dispute settlement related to intellectual property, sixteen countries have applied for dispute settlement so far; the countries that have applied most often are the European Communities with seven disputes, the USA with six disputes and Australia with eight disputes, while among less developed countries only Pakistan, India and Indonesia have applied. According to information from the WTO, most TRIPS disputes have arisen between highly developed countries. This essay will examine the scope and purpose of international business law and, in this respect, aims to explain how international business law achieves its purposes through the TRIPS Agreement in relation to intellectual property.

Russian Federation Armed Forces (RFAF) getting in control of Crimea

The analysis below follows a Factor – Deduction – Conclusion structure.

Factor: Russian Federation Armed Forces (RFAF) getting in control of Crimea (March 2014).

Deduction:
• Continuing access to the naval base of Sevastopol
• Warships and support vessels formerly part of the Ukrainian navy could enter into the Russian Fleet
• Securing the military benefits and regaining influence over Ukraine’s future direction
• The possible “integration” of Ukraine into the West (EU/NATO) is made much less attractive by the Russian presence in Crimea

Conclusion:
• Avoid further degradation of the crisis in Ukraine
• Continue to implement assurance measures

Factor:
• The presence of the Russian Black Sea Fleet in Sevastopol (Crimea)
• Modernisation of that fleet (advanced supersonic anti-ship cruise missiles, air defence systems and torpedoes; the coastal defence system armed with Yakhont anti-ship missiles)
• Presence of Iskander mobile ballistic missile systems in Crimea

Deduction: The Black Sea Fleet (after its modernisation) shall provide Russia with substantial operational capability in the region:
• to control the Black Sea basin
• to ensure the security of its southern borders
• to project power in and around the Black Sea
• to carry out Anti-Access/Area Denial (A2/AD) operations throughout the region

Conclusion:
• Continue intelligence collection efforts and investments in, and deployment of, missile defence systems and submarine detection systems
• Undertake negotiations on non-proliferation
• Increase presence in the Black Sea through ‘show the flag’ operations, which also act as a deterrent
• Support Turkey’s surveillance of the Turkish Straits

Factor: Crimea as an operating base for future military action against Ukraine:
• Naval means
• Advanced combat aircraft

Deduction:
• Russia can now threaten Ukraine on three fronts (northeast, southeast and south)
• Potential naval blockade against Ukraine’s southern ports and potential amphibious operations at selected coastal targets
• Deep operations inside Ukraine to strike strategic targets or provide ground support for Russian forces and interdict Ukrainian troop movements

Conclusion:
• Avoid further degradation of the crisis in Ukraine
• Start negotiations with the Russian Federation
• In order to change the relationship of distrust into a normalised one, NATO should involve the Russian Federation instead of excluding it from its operations

Factor:
• Air defence capabilities upgraded
• Integrated air defence system (S-400 area defence platform installed)

Deduction:
• Significantly enhances Russia’s air defence capabilities on its southern flank
• Deterrence

Conclusion:
• Increase presence in the Black Sea through ‘show the flag’ operations
• Capabilities needed to counter deterrence

Factor: Relative geographic isolation of Crimea.

Deduction: Very difficult for enemy forces to retake it (easy to defend); Crimea is a lost “case” for Ukraine.

Conclusion: The Alliance should start diplomatic negotiations with Russia (improve their relationship and solve the crisis).

Factor: Sevastopol serves as headquarters to the newly constituted Mediterranean Task Force (MTF).

Deduction:
• Russia’s reach is extended and its prestige enhanced (again a major power in the Black Sea)
• Sevastopol: potential SPOE (strategic projection)

Conclusion:
• NATO should show its presence in the region as it already does in the Baltic States
• Intelligence collection on the MTF

In Crimea, all the prerequisites to apply the new operational concept in an effective way were present. Russia’s strategy is to maintain a sphere of influence in former Soviet states and at the same time disrupt EU and NATO involvement in the area.

The Russian annexation of Crimea (March 2014) was the result of a combination of military tools and state tools to reach its policy goals:

• Covert use of Special Operations Forces with subterfuge (civilian self-defence forces)

• Combination of non-military, covert and subversive asymmetric means (hybrid warfare)

• Gradually transitioning from “little green men” and self-defence forces to clearly marked high readiness forces

Confronting Russian military power in the future will require an expanded toolkit for NATO Allies (confrontation with an asymmetrical approach – hybrid warfare).