Four Models of Addiction (Biological, Disease, Family, and Moral)

In counseling people who struggle with addiction, it is important to understand why and how they became addicted. Various theoretical models explain the underlying factors that lead to addiction, so it is difficult to choose a single theory that best explains it. For many counselors, integrating different models helps explain the varied causes of addiction, especially since each client has different experiences. In studying addictions in this course and attending Alcoholics Anonymous, Narcotics Anonymous, and Al-Anon meetings, I have seen how this integration of models is needed for each individual. Although there are many models that could be incorporated, four will be discussed in this article: the Biological, Disease, Family, and Moral Models.

Part I: The Four Models

One of the models of addiction is the Biological Model. This model posits that addicts are constitutionally predisposed to develop a dependence on substances, and that genetics can contribute to the likelihood of addiction (Capuzzi & Stauffer, 2016). The Biological Model also describes how the chemistry of the brain’s limbic system changes when substances are used (Capuzzi & Stauffer, 2016). In this model, repeated substance use alters brain chemistry until the individual becomes dependent on the substance. When the body is without the substance, withdrawal symptoms and negative affective states can occur (Potenza, 2013). An example of addiction best explained by the Biological Model is a story I heard shared through Narcotics Anonymous online. The individual explained that even though they tried to quit using drugs, it was difficult because they would experience painful headaches, nausea, and terrible shakiness. They explained that doing drugs was no longer enjoyable, but something they had to do so that their body would stop feeling miserable from the withdrawal effects. This fits the Biological Model because the individual felt that their body was chemically dependent on the drugs.

Another model of addiction is the Disease Model. This model holds that the individual is afflicted with the disease of addiction, which cannot be cured (Capuzzi & Stauffer, 2016). This view is the foundational model for Alcoholics Anonymous, where members acknowledge that they have a disease and find a way to “arrest” it through attending Alcoholics Anonymous (Alcoholics Anonymous World Services Inc., 1970). Members of AA, in view of the Disease Model, believe that their disease of alcoholism is incurable and that it is a constant battle, one they cannot fight alone; with the help of a Higher Power, they can manage their disease (AAWS, 1970). One story I remember from attending AA is that of a gentleman who was in and out of prison because of the disease of alcoholism. He expressed how the disease is always there and will never go away, even though he spent time in prison away from alcohol. In his case, time away was not a factor in his disease; what mattered was the will to fight against it when he got out of prison.

The third model of addiction is the Family Model. This model recognizes that families play a role in how a person becomes an addict, and even in why they have difficulty getting sober, because of family influence. Families can reinforce the behavior of the abusing member, or can feel threatened if the abuser wants to recover (Capuzzi & Stauffer, 2016). The Family Model can also hold that the entire family has a disease or disorder, and that the entire family should seek counsel (Capuzzi & Stauffer, 2016). Additionally, the Family Model recognizes that addicted family members can cause great pain and suffering to affected family members (Orford, Velleman, Natera, Templeton, & Copello, 2013). At the Al-Anon meeting I attended, for example, a woman talked about how her husband becomes verbally abusive when he drinks, and how she has contemplated numerous times whether she should leave him. The effect that alcohol has on their relationship is causing great strain, so the issue goes beyond just the abuser being addicted.

The final model of discussion is the Moral Model. This model presents substance abuse as a personal choice, one that ignores what is right or wrong, acceptable or unacceptable (Capuzzi & Stauffer, 2016). Abusers are viewed as suffering the consequences of their choices, not of other factors such as genetics or family systems (Capuzzi & Stauffer, 2016). The first personal experience of shame that an addict has (shame being a largely social and moral emotion) is important in understanding their addiction and contributes to their motivation for change (Pickard, Ahmed, & Foddy, 2015). An example of the Moral Model is a story I listened to at AA, where a woman described that although she knew getting drunk every day was wrong, it was very hard to stop. She ignored her internal convictions and continued to drink anyway, even though it made her feel terrible.

Part II: Member’s Story

One story that I found very moving was from my visit to Alcoholics Anonymous in Kalamazoo, MI; it was actually my first visit to a substance abuse support group. The woman who shared was in her fifties and looked very well put together, with nice hair and clothes. She started by saying that she was 26 years clean, and that she had started attending AA when she was just 26 years old. She said her problem with alcohol began after she was of legal age and could drink at her leisure. She loved going out to the bars and being social, but soon that took a turn for the worse. Her problem wasn’t that she couldn’t keep away from alcohol (she could actually go a week or so without it) but that once she started, she couldn’t stop. She said that she would have close to 30 beers in a single binge. When she drank, she would go into a rage and become an angry drunk. At one point after drinking, she chased a man down the street shooting a gun at him because she was angry about something she can’t even remember. Even then, she didn’t believe that she had a problem or thought she was doing anything wrong. She explained that this is the problem with alcohol: it is always a part of her and will never go away, and she knows that if she were to take that first sip she would spiral out of control. She talked about alcohol being a monster of a disease that is always waiting for her to take that first drink. Eventually, she realized that her lifestyle was affecting her children and family, and she decided to attend AA. She has been sober for the 26 years since.

This story can be viewed through different models, but of the models discussed, the ones that come to mind are the Moral Model and the Disease Model. The Moral Model is fitting because during her binging she never thought she was doing anything wrong and had no sense that it was hurting her or her family. It wasn’t until she realized how wrong it was that she began to go to meetings and start changing her life. Through the lens of the Moral Model, she recognized the consequences of her actions and made the decision to stop drinking. The Disease Model is part of her story as well, because she explained how her alcoholism will always be a part of her life, like a disease. She could go without drinking, but once she started she couldn’t stop, which is why viewing alcoholism as a disease is fitting for her case. It didn’t matter how long she went without drinking; the disease of alcoholism was always waiting for her.

In summary, no single model can provide a complete understanding of addiction. Each story has its own dynamics and contributors, so multiple models can be used to describe a person’s path into addiction. It is important to understand the various models so that appropriate treatment plans can be created and individualized to meet the addict’s needs for recovery. Research, attending meetings, and continuing education can help counselors become more competent in understanding addictions in their future practice.

Gun Control – The Unforeseen Dangers of Unchecked Firearms

To say that guns are dangerous and need to be controlled is putting our situation mildly. Every year, 36,000 Americans die from guns; that is nearly 100 people killed every day (America’s Background Check 2019). The American situation is dire, with the need for gun control reform increasing every day. School shootings, gun-related homicides, and gun-related suicides can all be reduced or avoided with the help of more comprehensive legislation. The Second Amendment needs to be amended, as militias are no longer relevant, thereby making it constitutional to pass advanced legislation that requires more comprehensive background checks and restricts the American public from buying assault rifles.

Gun control has been an incredibly controversial topic for years; with many people affected by gun violence every year, and many others feeling the need for protection in their own homes, opinions clash. These opposing viewpoints have created many organizations for and against gun control, including the Brady Campaign, which fights for more comprehensive background checks, and the National Rifle Association, which wants to keep the Second Amendment the way it is. These values have spawned many arguments about the pros and cons of guns, and about whether they need to be taken out of the hands of civilians. As stated earlier, many people are affected by gun violence, with mass shootings becoming one of the greatest sources of fear. On December 14th, 2012, a shooter opened fire inside Sandy Hook Elementary School. Adam Lanza, the shooter, fatally shot 20 children and six adults. The police investigated what could have caused this to happen; as it turned out, Lanza had several mental health issues (Sandy Hook Elementary 2012). He also had access to deadly weapons, and combined with his mental health issues, he became disturbed enough to become a shooter. Gun control activists say that this is why we need to restrict the sale of guns; meanwhile, pro-gun activists say that this is why we need more guns to protect people from shooters.

The Second Amendment is a vastly outdated part of our Constitution, as it merely states, “A well regulated Militia, being necessary to the security of a free State…”. In 1791, when this amendment became part of the Bill of Rights, there was no safety guaranteed by federal officials; people were forced to protect themselves from foreign and local threats. We now have organized police forces and a very powerful military ensuring our safety from many perils. It is true that some militias provide a sense of security for local neighborhoods and families; many patrol the Mexico–America border, stopping illegal immigrants from crossing (Bauer 2016). Yet, since militias are made up of private citizens, the government does not sanction their actions or beliefs. In fact, many of these groups are anti-government. A powerful and influential militia organization, the Three Percenters, warned, “all politics in this country now is just dress rehearsal for civil war” (Nuckols 2013). Such rhetoric has been embraced by many other militias, with many calling for training against government threats. These militias fear that the American government has been or will be infiltrated by foreigners and will force them to give up all of their firearms. It is argued that the Second Amendment was created to keep the government from becoming tyrannical and dangerous, although such an argument is very hard to make, considering that the United States military is far more advanced and better equipped than any militia. Some of these militias have darker goals, such as instilling fear in the hearts and minds of immigrants. For example, three Kansas militia men were convicted of plotting to bomb a mosque and the homes of Somali immigrants. Luckily, they were thwarted by another member of their group, who told authorities about the planned attack (Kansas militia men 2019).
These groups do make some people feel safe and protected, yet they instill fear in many others who question the legitimacy of their goals and claims.

More comprehensive legislation and a revision of the Second Amendment would keep Americans much safer than they are now. Among these amendments would be more thorough and universal background checks. As stated by the Brady Campaign, “97% of Americans support an expanded background check system” (America’s Background Check 2019). A major flaw in our background check system is the private sale loophole: private sellers do not need to run background checks, and in fact 1 in 5 guns are sold by private sellers, avoiding background checks completely (America’s Background Check 2019). Such a loophole puts Americans in serious danger, and closing it would be welcomed by the majority of the country. Currently, the Brady Campaign is fighting to keep Americans safe, but it is met with resistance from groups such as the National Rifle Association (NRA).

Assault rifles in the hands of untrained, unprepared civilians are a fatal mistake for America. Assault rifles are fun to shoot for some, yet many of the same guns are used by the United States military (Cook, Goss 2014). Such powerful weapons must be kept out of the untrained hands of the American public. Assault rifles used by the military are designed to inflict the greatest number of casualties as quickly as possible. While many are not automatic and fire only one bullet each time the trigger is pulled, they are still incredibly dangerous (The Gun Control Debate 2019). Compared to various pistols and handguns, these assault rifles have much more power, and many accept attachments, such as silencers. One such weapon is the AR-15, a deadly variation of the military’s M16. The M16 is a common gun used by the United States military, while the AR-15 is the civilian version. The AR-15 is a semi-automatic weapon; however, it has been legal to add a “bump stock,” effectively turning a semi-automatic weapon into an automatic one. In 1994, President Bill Clinton signed an assault weapons ban, which prohibited guns such as the AR-15. In the following years, the number of mass shootings did drop; however, they did not end (Myre 2018). Unfortunately, the ban expired in 2004, allowing the American public to buy these weapons of mass destruction again.

The ‘International’, the ‘Global’ and the ‘Planetary’

What are the key differences between the ‘International’, the ‘Global’ and the ‘Planetary’? Why are these important?

Introduction

The “International”, the “Global”, and the “Planetary” represent stages in the evolution of the discipline of International Relations, which was shaped by theories diverging on “the relationship between agency, process, and social structure” (Wendt, 1992, p422). This essay will attempt to identify, critique and reflect upon the most fundamental differences between the three. I identify their key differences as: their conceptualisations of the international arena, their types of politics, the driving motivations behind their politics, their types of actors, the quality of their dynamics, their identities and interests, and their approaches to security. In the subsequent sections, I will critically examine each of these concepts within the framework of the “International”, the “Global”, and the “Planetary”, and reflect on their relevance.

The International Paradigm

The discipline of International Relations, also known as IR, has always been divided by many theories of how to theorise the world, and of whether elements such as history, philosophy, morals and politics play a significant role in it. The many theories scattered across IR give the aforementioned elements different degrees of importance, or no importance at all. In the realm of the discipline of IR, Realism is the queen of theories. The other approaches mostly had the merit of adding elements which the discipline acquired and assimilated through its evolution over the centuries. Nonetheless, the school of thought of Realism is the theory which provided the discipline with its foundation, its structure and its precise conceptualisation of the actors who dominate International Relations.

Realism states that the stage on which international actors interact is a timeless sphere of anarchy, in which States base their relations on Realpolitik, or power politics. As a result, the exchanges that take place are permeated by overbearingness and selfishness, by the imposition of the interests of the “stronger” State upon the “weaker” State, by distrust, disloyalty, and so on. IR, according to Realism, conceptualises the international arena through Realpolitik, which means applying the original concept of Realpolitik to the international system. Realpolitik is a concept coined in Germany in the nineteenth century to indicate the pursuit of pragmatic politics, without taking morals and ethics much into account when making a policy decision. Applying the concept of Realpolitik to the international arena meant conceptualising an international system of relations concerned with the pursuit of pragmatic objectives, of selfish policies that benefit the single State pursuing them rather than the international system. The quality of the exchanges between States, in turn, did not offer fertile soil for the establishment of international cooperation between States or leagues of States.

It is an international system founded on, and functioning through, power politics. Because of the assumption that anarchy governs the international system, the only actor strong enough to survive and to interact with the system in pursuit and defence of its interests is the State. Therefore, there is only space for State-based politics, and the conception of the “International” is exclusively that of an inter-State system. This is the State-based paradigm of International Relations, in which States, as the only actors in the international system, are depicted as rational and autonomous, acting in a static, consequence-free anarchy. States come into the international arena already equipped with an identity and a set of interests. A conception of pre-made identities and interests therefore characterises the international arena, and the only objective of States’ interactions is power: to gain more of it and to protect what one already has.

The Origins of The Discipline

To genuinely comprehend the discipline of IR and its mission, one must dive into its origins and subsequent evolution. Different philosophical beliefs and paradigms approached international relations in opposite ways, through differing conceptualisations of study, politics, dynamics and instruments. IR finds its roots in historical, theoretical, philosophical and political frameworks which, once combined, constituted the discipline, whose development culminated in its sudden fall after the end of the Cold War.

To further explain the historical, theoretical, philosophical and political origins referred to in my previous statement, I will draw on the work of the International Relations scholar Martin Wight, who was a professor at the London School of Economics.

Wight began studying International Relations when the discipline was gaining momentum and celebrity status in the United States of the 1950s, under the denomination of “A Theory of International Relations”. The scientific, or behaviourist, movement in the United States developed the belief that if one studied behaviours attentively enough, one could explain the events that had shaped the fates of countries in the past and present, and even predict future political interactions between States. This belief gave birth to modern International Relations as a rejection of “old” Realist views on international politics. This wave of Neo-Realism tried to move past the “obsolete methodology of existing general works about International Relations, especially those of Realist writers such as E. H. Carr, George F. Kennan and Hans Morgenthau, which formed the staple academic diet of the time” (Bull, 1976, p 103).

It is imperative to keep in mind the element of history if one wants to genuinely understand why the theory of Neo-Realism, which undoubtedly represents an oversimplified framework of the exchanges between States, gained such relevance. This view of International Relations was developed right after the end of World War II, in a post-war world that had lost many things to a conflict which many, if not most, deemed useless and, above all, avoidable. Even the mere idea of a discipline which could prevent the repetition of such events, through the detailed analysis of everyday political events and of politicians’ national and international behaviour, was sufficient justification and motivation for a world that had hungered for hope for many years.

Wight argued that “it is no accident that international relations have never been the subject of any great theoretical work”, and that there is “a kind of disharmony between international theory and diplomatic practice, a kind of recalcitrance of international politics to being theorised about” (Bull, 1976, p 114). Therefore, in an effort to alleviate this disharmony, he developed his own vision as a contribution to the debate, and he based it on the commingling of history, philosophy, morals, and politics. Wight “saw the Theory of International Relations […] as a study in political philosophy or political speculation pursued by way of an examination of the main traditions of thought about International Relations in the past” (Bull, 1976, p 103).

He initially divided it into three main categories, each one representing a great thinker of the past. Later on, he identified a possible fourth category, the Inverted Revolutionists, based on a pacifist current inspired by Christianity, Leo Tolstoy and Mahatma Gandhi.

The three main categories are the Machiavellians, from Niccolò Machiavelli’s ideals; the Grotians, from Hugo Grotius’s; and the Kantians, from the work of Immanuel Kant. Each of these thinkers developed his own interpretation of the conception of human nature, the critical units of analysis of the international system, its dynamics and instruments, and the definition of political space and its characteristics.

Machiavelli theorised that human nature is driven only by self-interest and permeated with greed. He identified the recognition of its state of anarchy, dominated by dynamics of warfare, power, security and the gathering of resources, as the critical unit of analysis for understanding the international arena. The political space, exactly like human nature, is filled only with self-interest and no morality. Machiavelli provided the base on which Realist theories laid their foundations.

Grotius, unlike Machiavelli and the most fervent Realists who followed, considers the human being a rational one, who operates within the State, which in turn engages in international relations within an international system that, like men, runs on rationality. Everything about Grotius’s theory is permeated with rationality, and the quality of the dynamics of the international arena reflects this. Indeed, he believed that the dynamics of the international arena are ruled by diplomacy, negotiations, institutions and norms, because the political space is dominated by institutions and by order. Grotius represents moderation and the voice of reason for the successful establishment of an international system based on rationality and cooperation, not on the violence and distrust of the system painted by Machiavelli in his most famous works, The Prince and the Discourses on the First Decade of Titus Livy. Grotius speculates on the doctrine of an international system based on a society of states working together towards common goals; he dreams of an international society, in direct opposition to the sharp, Realist concept of national societies above all. It is “the idea […] that international politics is not just a matter of relations between states, but also a matter of so-called ‘transnational’ relations among the individuals and groups that states compose” (Bull, 1976, p112).

Kant theorises that human nature is good, peaceful, and supportive of solidarity and cooperation. He believed in a global community and in an ideal man who would contribute to its flourishing. The dynamics revolve around policies for cooperation, international trade and exchange, which would eventually enable the development of a political space represented by a cosmopolitan society. In one of his works, “Perpetual Peace: A Philosophical Sketch”, he goes as far as developing a plan for the governments of the world to follow in order to establish peace, because peace must come above all. He dreams of a political space characterised by emancipation from the State, by an international confederation, and by the disruption of geographical boundaries.

Moreover, Wight believed that “the truth about international politics had to be sought not in any one of these patterns of thought but in the debate among them” (Bull, 1976, p 110). Machiavelli’s tradition is most commonly known as Realism, Grotius’s represents Rationalism, and Kant’s is Revolutionism, and each is a founding brick of the discipline of International Relations. Each theory went through transformative changes over the decades. For example, Machiavelli’s Realism turned into Neo-Realism in the 1970s and 1980s, and then into Modernism or Positivism in the 1990s. Grotius’s Rationalism found a new interpretation in the English School between the 1940s and 1960s. Kant’s Revolutionism went from Idealism in the 1920s and 1930s to Neo-Liberalism and Neo-Institutionalism in the 1970s and 1980s.

The State-Based Approach to Security

I end this excursus on the origins of the discipline of International Relations here, and take the occasion to note another key aspect which sets the International paradigm apart from the Global and the Planetary: its State-based approach to security, which meant that only the State was in charge of taking the measures necessary for the safety, preservation and survival of a country. This implied a severe limit on the protection of one’s nation, because “weak” nations did not possess a force tantamount to that of a “strong” country. A “weak” state was therefore at the mercy of the anarchic international system; if another nation wanted to exercise its power to gain more resources, it could attack, without any international community to defend the weaker state or even to dissuade a predator country from invading it.

Methodological Nationalism

The theory of Methodological Nationalism offered a base for understanding and organising the life and cycles of IR, and provided the discipline with a source of legitimacy. “Methodological Nationalism […] equates societies with nation-state societies, and sees states and their governments as the cornerstones of social-scientific analysis. It assumes that humanity is naturally divided into a limited number of nations, which internally organise themselves as nation-states and externally set boundaries to distinguish themselves from other nation-states” (Beck, 2003, p 453). It is the perfect theory to justify the Realist approach to IR. It combined Realist accents with the belief that there is only one world; everything else merely points at other ways of looking at that same world. Methodological Nationalism is founded on six core beliefs: there is a plurality of Societies; Societies are subordinated to the Nation-State; States run on territorial “State-constructed” boundaries (Beck, 2003, p454); State and Society determine each other through a circular belief, in which the nation is the creator and protector of Society’s plethora of rights, while the individuals of Society organise themselves in movements to influence the actions of the Nation-State, which in turn legitimises the State again; “both States and Societies are located within the dichotomy of national and international” (Beck, 2003, p454); and the State is the provider of social order, supplying the scholars of multiple disciplines with the data about the country that they need (Beck, 2003). Moreover, the core elements of this theory engage in constant activities which result in the continuous determination and further legitimisation of one another, without leaving any room for the introduction of new elements.

The Decline of Legitimacy of the Discipline

The international paradigm of the discipline experienced a major setback after the end of the Cold War, which left everyone baffled and unprepared for the events which followed it. The discipline of IR lost credibility as a predictor of events because academics and contemporary IR theorists had indeed failed to predict one of the most history-shaping events of the last century. Moreover, even the assumptions of State-based politics faltered, especially after the terrorist attack on the Twin Towers on September 11th, 2001. The discipline had been built upon the framework of conceiving the world from the point of view of a State, a State-based approach, preferably a white, Western one.

The end of the Cold War put into question the soundness of the reasoning behind the discipline, and the terrorist attack struck the final blow. It forced scholars and politicians alike to recognise the existence of worlds beyond the “Western World”. I am not here to speculate on whether or not the terrorist attack would have happened if the United States had not imposed its ways in the Middle East for years on end. But I am here to point out that the terrorist attack lifted the veil and made it impossible for the discipline ever to go back to its original frameworks.

I believe that, at this Realist stage, the discipline ran on such limited territorial lines that its failure had always been around the corner. Proof of this is that it took only one major event, the abrupt end of the Cold War, for it to crumble and for scholars to put its frameworks into question altogether. Nonetheless, I believe that the ultimate cause of the failure of the discipline is to be found in the terrorist attack, which slashed the core beliefs of many, terrorised an entire world, and scarred a whole generation forever. The fear that came afterwards paralysed the world for a moment. Somehow, the introduction of an undefinable enemy turned the discipline of IR on its head. State-based politics, as flawed as they were, were easily graspable and easily manageable: they played by the rules of an accepted framework. But 9/11? What was that? Under what category would it fall in the limited structures “allowed” by the discipline? A terrorist attack by an undefinable actor was inconceivable; it did not fit even in a corner of the discipline. The discipline was lacking something. The discipline was wrong.

A Critique of IR, and The Rise of the Global

Eventually, what had been the core beliefs of the discipline of International Relations turned out to be its very own self-limiting beliefs, which did not allow the discipline to evolve over time into a comprehensive and complete discipline providing an adequate understanding of the world and of the exchanges which take place within it.

“The fact that worlds of power politics are socially constructed […] does not guarantee they are malleable”, which is one of the flaws which contributed to the fall of IR, since “through practice agents are continuously producing and reproducing (the same) identities and interests” (Wendt, 1992, p411).

It is a discipline based on seeing, experiencing and conceptualising the world from an exclusively State-based perspective, which results in a partial and meagre framework for understanding the exchanges occurring in the world. It is a very limited perspective, one which lent a voice only to the white, Western elite.

At the turn of the new century, people started looking for other outlooks to replace entirely the failure that the “International” represented. In this time of international (relations) crisis, talk of what would come to be known as the ‘Global’ sprang up everywhere. These debates brought into the conversation a mixture of concepts which classical IR theories did not cover: Constructivism, gender issues, the need for Morals, Critical Discourses, Post-colonialism, and subsequently, the rise of the Anthropocene.

The Main Paradigms of the “Global”: Social Constructivism and Critical Thinking

“The cosmopolitan perspective dismisses the either-or principle of realism: either the State exists, albeit only as an essential core, or it does not exist at all; either there is national sovereignty – a zero-sum game between national and international competence – or there is no sovereignty at all. From a cosmopolitan perspective, “Realism” is a kind of political irrealism because it neglects the possibility and reality of a second “Great Transformation” of the global power game” (Beck, 2003, p457).

These words from Ulrich Beck, one of the most significant theorists of the Global, summarise the stark contraposition between the “International” and the “Global”. In the 1990s, debates about the Global came in to dismantle the limiting beliefs at the core of the discipline, which was flawed by its lack of inclusiveness and its overbearingness. The arrival of the Global represented the fall of the exclusive domain of binary Left and Right politics as well.

The opening of the discipline of IR to a Global Era brings a new opportunity to engage with the world from a non-State-based position, through the development of many pluriversal approaches: Constructivist, Critical and Cosmopolitan. The Global Era represents a commingling of these new approaches, and the final result portrays how “the cosmopolitan perspective opens up negotiation spaces and strategies which the national viewpoint precludes. […] The negotiation space the cosmopolitan viewpoint opens up contradicts the absence of alternatives” (Beck, 2003, p466).

These approaches introduce new ways of seeing and thinking about politics, global interests, and global concerns. The Global aims at creating a universal vision to build a liberal and global community, in which States are no longer the centre of IR speculation. The international arena is also positively shaken up by the appearance of new international actors which are not states: it is the rise of the global civil society and of Non-Governmental Organisations (Kaldor, 2003). Moreover, through the development of new international dynamics, Nation-States become the product, not the subjects, of the international arena (Jackson, 1990; Krasner, 1999).

Two paradigms of thought especially shaped this evolutionary period of IR: the social constructivist/liberal and the critical/deconstructive.

The maintenance of the inter-State system, alongside the rise in popularity of theories of Global sovereignty, characterises the new international order. The emergence of the Global requires a new understanding of the mechanisms of the international arena because “It is the collective meanings that constitute the structures which organise our actions” (Wendt, 1992, p397). Therefore, with the evolutionary passage towards a new stage of the discipline of IR, scholars and citizens alike require a new framework of concepts. Social Constructivism achieves just that by providing a sociological understanding of interactions which challenges previous IR thinking. Alexander Wendt, a leading Constructivist theorist, explains how Constructivists believe that the international system and its frameworks are social constructs, not beliefs that should be taken as “a given” (Wendt, 1992). Therefore, the significance of everything regarding IR comes “out of interaction” (Wendt, 1992, p403). According to Realism, States come into the international arena already equipped with an identity and a set of interests. Wendt, in direct opposition to Realism, speculates on “how knowledgeable practices constitute subjects”, and on how Constructivism can contribute to theories of “identity and interest formation” (Wendt, 1992, p394). He elaborates on the idea that the creation of an identity and a set of interests happens through the ongoing exchanges between States in the international arena: “Actors acquire identities by participating in such collective meanings” (Wendt, 1992, p397). Therefore, actors do not enter the international arena with pre-formed identities; they create one during their contacts with other states. The same goes for their interests, which are formed while experiencing these exchanges. Different exchanges show how countries can hold a variegated set of interests, depending on the circumstances (Wendt, 1992). Identities and interests, then, are not a given, and their establishment happens during the socialisation process.

The paradigm of Critical thinking supported the rise of the Global because it opened the door for the emancipation of security from the state-based approach (Booth, 1991). Critical theorists, alongside Constructivists, discuss the treatment of human security and how the obstacles to human security are constructed. The particular interests of States are a barrier to a universalist liberal approach to global rights and justice. By contrast, supporters of Foucauldian critique see the rise of the Global as a negative shift. The pursuit of global liberal “governmentality” and “biopolitics” is a negative aspiration for the security of States, which will end up having to rely on the international community. “The undermining of the politics of state-based representation and the globalisation of regulatory power has become the starting assumption for the poststructuralist ‘scaling up’ of Foucault in critiques of global governmentality” (Chandler, 2009, p536). Lemke (2001) shows how Foucault used concepts regarding governmentality in a way closer to realism than constructivism, which indicates a critique of the doctrine and its “obsession” with subjectification.

Another fundamental difference of the Global, in opposition to the International, is its heightened attention to Morals and Ethics, which are to have a more profound impact on the decisions of the Nation-State. Indeed, the rise of humanitarian aid actions and acts of global cooperation is proof of that. The global perspective introduces a new critical theory of social inequalities which shines a light on the need to provide aid to nations, minorities, or whoever is in considerable need (Beck, 2003). Beck (2003) criticises how the original IR used methodological nationalism to remove the tackling of global inequalities from its agenda. “Thus, the bigger blind spots – and sources of error – of methodological nationalism linked to research on inequality will only be recognisable by means of a systemic switch from the national to the cosmopolitan perspective. It is only within the framework of such a new critical theory of social inequality that the fundamental asymmetry of inequality perception […] can be unravelled” (Beck, 2003, p459). Nonetheless, he also highlights that the shift to a global perspective is still not enough to put up a real fight against inequalities. As long as “there is no global jurisdiction and reporting institution to survey global inequalities, these will remain disintegrated into a motley pattern of national-state inequalities” (Beck, 2003, p461).

International cooperation also brings a Decentring of State-based approaches to security, which in turn, produces more equilibrium and guarantees more safety to “weak” nations. The global brings about the creation of a “cooperative security system, in which states identify positively with one another so that the security of each is perceived as the responsibility of all” (Wendt, 1992, p400).

A Critique of Methodological Nationalism

Globalisation theorists deeply criticise Methodological Nationalism, a stream of thought which profoundly shaped the direction of the original discipline of IR through the definition of its narrow frameworks. Ulrich Beck, one of the most famous theorists of the cosmopolitan approach, offers a brilliant critique of it. Methodological nationalism only manages to produce a continuous cycle of self-limiting beliefs which leave no room for adaptation to new contemporary challenges. It is a tiring and continuous contraposition between them and us, north and south, weak and strong. Its concepts are no longer appropriate in the rising global age. Beck calls for a “paradigmatic reconstruction and redefinition of social science from a national to a cosmopolitan perspective [which] can be understood as […] a broadening of horizons for social science research” (Beck, 2003, p456). “Social science must be re-established as a transnational science of the reality of denationalisation, transnationalisation, and ‘re-ethnification’ in a global age – and this is on the levels of concepts, theories, and methodologies as well as organizationally. This entails a re-examination of the fundamental concepts of ‘modern society’” (Beck, 2003, p458).

The cosmopolitan age requires a redefinition of the understanding of sovereignty in both the national and international context (Beck, 2003). Therefore, he states that “traditional conceptualisations of terms and the construction of borders between the ‘national’ and the ‘international’, domestic and foreign politics, or Society and the State are less and less appropriate to tackling the challenges linked to the global age” (Beck, 2003, p456). The main focus of the debate on globalisation therefore has to be “on gaining a new cosmopolitan perspective on the global power field, pushing new actors and actors’ networks, power potentials, strategies, and forms of organisation of debonded politics into the field of vision” (Beck, 2003, p467). Nonetheless, Beck (2003) stresses the importance of not mistaking the critique of this theory for the end of the nation-state theory: the nation-state will always exist, or will evolve into a new concept close to a possible theory of transnational states.

The new Era of Planetary Politics of the Anthropocene

From the 2010s, discussions about a new concept called the Anthropocene replaced the Global, which had declined in popularity because the translation of global theories into reality no longer appeared to focus on achieving global forms of liberal government, nor did its original aim seem to carry a positive connotation anymore. Furthermore, “the lack of strategic engagement […] (was) fundamental to the appeal of the Global Ideology” (Chandler, 2009, p540). Therefore, the rise of depreciatory theories of the Global made the world of scholars look for another direction. The crisis of the Global did not produce a return to the past of IR, but rather a new perspective on the problematics of IR.

The rise of the Anthropocene is strictly connected with the development of theories of pluriversalism, the idea of multiple coexisting worlds. Blaney and Tickner (2017) discuss how an ontological turn of IR could exorcise the “singular world logics introduced by colonial modernity” and allow the discipline to engage with the conception of pluriversalism. Drawing on various sources, they develop “the potentials of a politics of ontology for unmaking the colonial universe, cultivating the pluriverse, and crafting a de-colonial science” (Blaney and Tickner, 2017, p293). They suggest the presence of alternative world realities, which could produce “multiple and hybrid ‘reals’” (Blaney and Tickner, 2017, p295).

Neither the Global nor the Planetary sees the world in terms of State-based theories of strategy and interests; therefore, there is no inter-national theory. The predominant discussions of these two paradigms concern how we understand and see the world beyond the strict assumptions of the discipline of IR.

Bruno Latour (1993) goes as far as to say that modern society is stuck in “great divides”, mainly in the frameworks of nature/culture, human/non-human, facts/values, mind/body. These separations allow Western society “to claim to represent a singular reality in a unified science untainted by political interest, power or culture. […] Nature and culture are not discrete categories but intertwined in a multiplicity of hybrid assemblages […] modernity’s particular mode of representing reality is not universally shared. […] many communities do not sharply distinguish humans and other entities, so that animals, plants, and spirits are as much ‘people’ (with consciousness, culture and language) as ‘we’ are” (Blaney and Tickner, 2017, p296). Western societies are starting to pay attention to the profoundly original ways of seeing the world that come from cultures they had ignored. From them they take new eyes with which to look at the world: they discern how human activities have never been separate from the earth’s ecosystem. It is the explosion of a real global conscience with the birth of a planetary community, one aware of the environment and of the consequences of humanity’s actions on the planet. This also brought awareness of agencies that had always been ignored by Western society in the international arena: the equal presence of human and non-human actors, and nature articulated into its many agencies: water, air, the cosmos.

The Planetary is aware that humankind, through its actions, has changed the planet we live on: the ecosystem, flora and fauna. Planetary politics has started addressing these problems and, thanks to the rise of a planetary sense of community, the governments of many countries have started doing something about it.

The era of the Anthropocene in IR is still relatively new; therefore, there is not as much debate about it as there has been for the International and the Global, but it is visible how the Planetary seems to have set aside the biggest concern of original IR, which is the gain of power. The race for power which underpinned the first conception of the discipline has come very close to destroying our world, and the most recent “update” of IR now works on how to “fix” the unfixable. Planetary politics can be regarded as the least optimistic era of our history, and the only stream of thought which has offered a possible way to save our planet comes from the cultures that Western society had tried for centuries to crush and assimilate.

Conclusion

I will conclude by stating that identifying the key differences between the International, the Global and the Planetary is crucial, because they show the development of the frameworks within which humankind moved and evolved, over the centuries, through conflicts and scientific developments. These differences therefore provide a reflection of the changing times and of the world’s rejection of a one-world World: of hegemony and of the lack of representation of multiple identities, interests, and beliefs. We have only recently entered the phase of planetary politics, and although eight years of scholarship provide a fair amount of material, it is too soon to speculate on whether or not the Anthropocene’s attempts to preserve our planet will be fruitful; it is, however, definitely an improvement on the “International” paradigm of IR.

Bibliography

Beck, U. (2003). Toward a New Critical Theory with a Cosmopolitan Intent. Constellations, 10 (4), 453-468. Available from https://doi.org/10.1046/j.1351-0487.2003.00347.x [Accessed 3 February 2018].
Blaney, D. L. and Tickner, A. B. (2017). Worlding, Ontological Politics and the Possibility of a Decolonial IR, Millennium, 45 (3), 293-311. Available from http://journals.sagepub.com/doi/abs/10.1177/0305829817702446 [Accessed 15 March 2018].
Booth, K. (1991). Security and Emancipation, Review of International Studies, 17 (4), 313-327. Available from https://doi.org/10.1017/S0260210500112033 [Accessed 8 April 2018].
Bull, H. (1976). Martin Wight and the Theory of International Relations: The Second Martin Wight Memorial Lecture. British Journal of International Studies, 2 (2), 101-116. Available from https://www.cambridge.org/core/journals/review-of-international-studies/article/martin-wight-and-the-theory-of-international-relations-the-second-martin-wight-memorial-lecture/9F12B04B0143159BE8D44C6BCABE7FF8 [Accessed 26 January 2018].
Chandler, D. (2009). The Global Ideology: Rethinking the Politics of the “Global Turn” in IR, International Relations, 23 (4), 530-547. Available from DOI: 10.1177/0047117809350989 [Accessed 6 April 2018].
Galtung, J. (1969). Violence, Peace and Peace Research, Journal of Peace Research, 6 (3), 167-191. Available from http://www.jstor.org/stable/422690 [Accessed 18 February 2018].
Jackson, R. H. (1990). Quasi-States: Sovereignty, International Relations and the Third World, 1st ed. Cambridge: Cambridge University Press.
Kaldor, M. (2003). Global Civil Society: An Answer to War, 1st ed. London: Polity.
Krasner, S. D. (1999). Sovereignty: Organized Hypocrisy, 1st ed. Princeton: Princeton University Press.
Latour, B. (2004). Whose Cosmos, Which Cosmopolitics? Comments on the Peace Terms of Ulrich Beck, Common Knowledge, 10 (3), 450-462. Available from http://www.bruno-latour.fr/sites/default/files/92-BECK_GB.pdf [Accessed 8 April 2018].
Lemke, T. (2001). The Birth of Bio-Politics: Michel Foucault’s Lecture at the Collège de France on neo-liberal governmentality. Economy and Society, 30 (2), 190-207. Available from https://doi.org/10.1080/03085140120042271 [Accessed 5 March 2018].
Morgenthau, H. J. (1948). Politics Among Nations: The Struggle for Power and Peace, 7th ed. New York City: McGraw-Hill.
North, D. C. (1991). Institutions, The Journal of Economic Perspectives, 5 (1), 97-112. Available from https://www.aeaweb.org/articles?id=10.1257/jep.5.1.97 [Accessed 11 March 2018].
Peltonen, H. (2017). Planet Politics and International Relations. Duck of Minerva, 12 November. Available from http://duckofminerva.com/2017/11/planet-politics-and-international-relations.html [Accessed 8 April 2018].
Rothe, D. (2017). Global Security in a Posthuman Age? IR and the Anthropocene Challenge. E-International Relations, 13 October. Available from http://www.e-ir.info/2017/10/13/global-security-in-a-posthuman-age-ir-and-the-anthropocene-challenge/ [Accessed 8 April 2018].
Torrent, I. (2018). Week 2: What was the ‘international’ paradigm of IR? Beyond International Relations: The Politics of the International, the Global and the Planetary. Notes taken in class, 30 January 2018.
Wendt, A. (1992). Anarchy is what States Make of It, International Organization, 46 (2), 394-419. Available from http://www.jstor.org/stable/2706858 [Accessed 13 February 2018].
Wight, M. (1991). International Theory: The Three Traditions, 1st ed. London: Leicester University Press, a division of Pinter Publishers.


Organisational Behaviour: An Analysis Of A Team-Based Approach To Working In The Case Of Phil Jones

Abstract

The aim of this essay is to discuss at length and critically evaluate group and team development and behavioural theories in practice, with reference to the case study concerning Phil Jones and his Gulf Project team within Engineering Co, evaluating whether a team-based approach to work is effective within organisations. It first establishes to what extent Phil Jones’ analysis of his group’s current situation is accurate, referring to Tuckman and Jensen’s stages of group development. It then discusses the possible interventions Phil could make to get his team back on track and reach the performing stage of team development, noting that one such intervention is to become a virtual team. The potential issues facing virtual teams are then evaluated and contrasted with the issues faced by Phil Jones’ team, with possible solutions offered to virtual teams and virtual team leaders to allow them to reach the performing stage. Finally, the essay critically analyses the strengths and weaknesses of a team-based approach to work as a whole, drawing from Phil Jones’ case, a range of literature, and anecdotal experience to conclude that a team-based approach can be an effective way of working, through the use of strong e-leadership skills and technology to manage teams virtually.

Introduction

In contemporary society a team-based approach to working is becoming ever more common (Callanhan, 2004), and has become prominent among project teams in the engineering industry (Schaffer et al, 2012). Hence it is unsurprising that a project team, a group of individuals who come together for an individual task and disband after its conclusion (Poel, Stoker and Van der Zee, 2014), is used in Phil Jones’ case for the Gulf Project within Engineering Co. Despite the high popularity of a team-based approach to project work, it is debatable whether such approaches are the most efficient way of working, given the myriad of issues which can arise within a team due to poor leadership, leaving it struggling to perform. When teams succeed, however, the benefits of a team-based approach to project work are reaped (Terry, 1999). Hence, through an analysis of group and team development, a discussion of interventions made to aid team development, and an evaluation of the strengths and weaknesses of a team-based approach, with reference to Phil Jones’ case, it can be established whether a team-based approach to project work is effective within organisations in modern society.

Tuckman and Jensen’s Stages of Group Development in the Case of Phil Jones

Initially Phil Jones lacked the training to deal with people issues within the group and to lead his team. First, it must be noted that teams and groups are defined differently. A group consists of a number of individuals, all of whom accomplish their tasks independently, who have a similar purpose (Gilley and Kerno Jr, 2010). Smith (1967) similarly describes a group as two or more individuals who collaborate, share common objectives and norms, and have a communal identity. Although different researchers, both give a similar description of a group in that its individuals still have common goals. The definition of a team is very similar to that of a group; however, a group may not be a team, whereas a team is always a group. Hence these terms cannot be used interchangeably. Baldwin et al (2008) defined a team as a group of individuals who have a great amount of intercommunication and interdependence, sharing equal responsibility for their appointed objective. The clear difference between a group and a team is therefore the higher level of interdependence, and the equal responsibility, a team has in achieving its objective.

To remedy his team’s issues, and to make them stop working as a group and start working as a team, Phil read about the stages of group development (Tuckman, 1965). Tuckman and Jensen (1977) defined five stages of group development. The first two are forming (Tuckman, 1965), when team members get to know each other and are unlikely to disagree with their teammates, avoiding conflict at an early stage, and storming, defined by Bonebright (2009) as involving disagreements: frictions arise in the group as the individual roles and tasks of team members can be unclear, leading to work moving more slowly than anticipated and to tensions within the team. Phil concluded his group was stuck at the storming stage, and struggled to see how to resolve conflict and reach stages three, four and five, defined by Tuckman and Jensen (1977) as norming, where group members understand their roles and goals and feel a sense of belonging in the team; performing, where the group works effectively as one, building on each other’s strengths and weaknesses; and finally adjourning, where the group completes its project, evaluates, and disbands.

Judged against Tuckman and Jensen’s (1977) stages of group development, Phil’s diagnosis of the situation is correct, as there are clear similarities between the storming stage and his team’s position. The case study exemplifies the transition of the team from the forming stage to the storming stage. Phil generated competition within the team, as in his opinion a team needs disagreements to achieve creative, innovative ideas. Phil’s point of view is that teams need some debate, as this is what happens among teams in the storming stage, in order to reach the norming stage. Instead, the team ended up with more issues than accomplishments, getting stuck in the storming stage, and Phil found himself dealing with more disputes between team members than project developments. This is common; according to Gersick (1988), many teams end up stuck in the storming stage, never moving on to the performing stage, due to poor management of disputes by leaders like Phil. Hence the project is falling behind due to the lack of clarity of instruction regarding team members’ roles from Phil as a leader, leading to multiple members completing the same work and resulting in a waste of capital and time. Fapohunda (2013) claims clarity is one of the main elements that concerns team members at the storming stage, stating it is often the cause of all disputes regarding roles within the team. This suggests that, due to poor leadership from Phil through misguided attempts to bring the group together through conflict in order to gain the sense of belonging found in the norming stage, interventions are needed to overcome his mistakes and move the team out of the storming stage.

Hackman’s Team Leadership Mistakes In the Case Of Phil Jones

Hackman’s work (1998) is used to show the common mistakes made with teams, all of which feature in the Gulf Project team at Engineering Co. One of the obvious mistakes defined by Hackman (1998) and displayed by Phil Jones is attempting to build a team by managing its members as individuals, which leaves members with little communication with each other and hampers the team spirit characteristic of the norming stage. In Phil’s team this is difficult to avoid, as the physical distance between members placed in different locations hampers any attempt by Phil to motivate members not only to communicate with him, the only member to have physically met everyone, but to communicate with each other to gain a sense of team belonging. This leads to another mistake featured in Hackman’s (1998) work and exemplified by the Gulf Project team: a lack of agreement regarding roles, authority, and boundaries for all team members. This issue is also difficult to avoid within Phil’s team, as it is harder for the team to agree on limitations, delegation and boundaries if they cannot physically meet and work things out, suggesting that distance has again hampered the team’s communication and sense of belonging. This exemplifies a further mistake on Phil’s part featured in Hackman’s (1998) work: a clear lack of planning and execution of tasks. To resolve this, Phil must show organisational skill, delegating work effectively, to stop time being wasted through duplicated work and the team’s sense of belonging being fractured further.

The final mistake shown by Phil Jones and featured in Hackman’s (1998) work is assuming all the members of the ‘team’ have the necessary skills to work together, despite being a diverse group from multiple cultural backgrounds who are unknown to each other. Phil shows poor leadership regarding his cultural awareness surrounding his authority and responsibility in decision-making, and is naïve, being “sure everything would somehow have fallen into place as at first people appeared to be committed to the project and the team”. The forming stage is crucial to team development. By distancing himself from this stage, encouraging team conflict over team belonging despite members’ diversity in the team’s early stages, he has created a fractured team. He must rectify this, as workplace diversity is becoming increasingly important in society (Parham and Muller, 2008). Phil must not see diversity as an obstacle to progress, and must accept that today’s workforce is diversified. Instead of taking a laissez-faire approach, he must look to use this as an advantage, working to integrate cultures to produce the end result.

Phil is correct that his team is still in the storming stage of Tuckman’s stages of development; hence he must address these mistakes. Phil must show leadership in the initial stages of getting the team back on track, ensuring that until the team is norming it does not control itself, and that it accepts delegation and clarity of roles and working practice. Once the team members have a better understanding of each other, he can allow them more freedom; as Matsudaira (2016) states, “being a good leader means allowing the people around you to be experts in their domains”. Hence, through understanding his team members and delegating efficiently, Phil can get the best out of everyone, drawing on motivation theories that use social identity and giving each member a task suited to their skills that they can be proud of. Lewis (2011) states that social identity “refers to the desire of individuals to strive to maintain some perceived distinctiveness”. Hence, if through interventions all dispersed group members can be motivated to take pride in their work, through motivational leadership and a feeling of belonging tied to their role in the team, there will be no duplication of work and less conflict. A key intervention Phil Jones could use to remedy all such issues, and thereby allow the team to perform and work together efficiently, is the use of a virtual team.

The Use Of Virtual Project Teams To Reach The Performing Group Development Stage

A virtual team is defined as “a group of people who interact through interdependent tasks guided by common purpose and work across space, time, and organizational boundaries with links strengthened by information, communication, and transport technologies” (Gassman and Von Zedtwitz, 2003, p.244). Though leading a team in person is difficult (Lilian, 2014), virtual project team leaders face greater issues. Kayworth and Leidner (2002) found virtual project teams face similar issues to traditional teams, felt more strongly in virtual settings, coupled with challenges linked to the dispersion of members, a high reliance on technology, and the need for strong communication. Consequently, specific leadership strategies are needed. The strategy utilised by managers of virtual teams is e-leadership, defined as “a social influence process, mediated by advanced information technologies, to produce a change in attitudes, feelings, thinking, behaviour and/or performance with individuals, groups or organisations” (Avolio, Kahai and Dodge, 2001, p.617). E-leaders utilise technology to resolve virtual team issues by influencing team behaviour, as the goals of leadership, namely motivation, vision, determination and innovation (Spicker, 2012), are unchanged; however, the mediums implemented to resolve issues are vastly different in virtual project teams.

The initial issue e-leaders face when managing a virtual project team is distance. Distance in a virtual team is established by geography, time zone, and familiarity among team members. In Phil’s case, geography and time zone impeded the team’s success: though cultural differences were the reason Phil was the key communicator in the team, to some extent the difficulty of coordinating an appropriate time for group communication across differing time zones hampered simultaneous work, proving detrimental to motivating team members to communicate with each other individually. Studies support this assumption. Cummings (2011) found that differing work hours caused by time zones burden team members and leaders. Such levels of dispersion of team members as in Phil’s case can hinder team members’ familiarity with each other, as he is the only person on the team to have communicated with all team members, reducing social familiarity, which is important to how teams operate (Zaccaro and Bader, 2002). To remedy this, e-leaders can address distance by responding quickly to distance-specific issues regarding deadlines, then finding a good time to use virtual meeting software, enhancing feelings of closeness through diverse technologies and achieving team performance and greater organisational values. Hence in Phil’s case making the team virtual would be positive in this respect, as the use of technology would aid the team’s success: schedules and deadlines could easily be accessed by all. Furthermore, a greater feeling of closeness among team members could be built through virtual meetings, allowing team members to contact each other directly rather than through Phil.

Though physical distance can be remedied in this way, cultural diversity in national culture and values caused by dispersion requires other strategies. Diversity can be problematic because, as in Phil’s case, cultural expectations regarding work ethic, work execution and job roles can vary regionally (Burnelle, 2012), causing friction, misunderstandings and fractured communication in the team, with further difficulties when there is a language barrier. E-leaders can address issues related to cultural diversity by designing team-building sessions through technological mediums to ensure team members understand each other’s cultural differences. They can also address ambiguous online communications, ensuring no misunderstandings arise. Furthermore, promoting a sense of belonging in a virtual team keeps members engaged and prevents feelings of isolation, reducing in-groups and out-groups (Leonard, 2011). By accommodating diversity through team-building and technology, then, Phil would do best to make his project team virtual, as this reduces cultural friction and improves members’ sense of belonging.

Though diversity can cause communication errors within a virtual team, such errors can also be caused by technological breakdowns and, as in Phil’s case, by a lack of clarity from leaders regarding the roles and behavioural expectations of team members, leading to work being completed incorrectly. If the qualities of effective communication, “quantity, frequency and accuracy of information exchange” (Gallenkamp et al., 2011, p. 8), are unfulfilled, communication breakdowns occur, causing friction and hampering the team’s success, as in Phil’s team. Communication is difficult in a virtual team because face-to-face contact is absent from most communicative mediums, potentially removing emphasis from certain points. The lack of face-to-face contact may cause interactions to lose social or contextual information (Purvanova and Bono, 2009), such as a member’s higher professional status or greater expertise on a subject. To resolve such issues, e-leaders must ensure that team members make a habit of maintaining continuous contact with each other, and must analyse communications to ensure clarity regarding roles and expectations. Video-chat technologies can mediate this issue. By making his team virtual, then, Phil could resolve both his team’s and his own communication issues.

By improving communication, e-leaders create social belonging within a virtual team, eventually creating trust. Trust is important within virtual teams because it motivates individual members to fulfil their roles, building dependability (Uber Grosse, 2002). If trust is not achieved, conflict and low group satisfaction occur, reducing the team’s chance of success, as in Phil’s case. E-leaders can create trust through video-chats and electronic meeting systems, promoting communication, joint effort and a shared understanding of team issues. By motivating his team to communicate effectively and building trust through technological mediums, overcoming distance and diversity, Phil could bring his team to the performing stage as a virtual team by becoming an e-leader. The use of project teams can therefore be effective within modern society, provided a virtual team is used, thanks to recent technological innovations.

Strengths and Weaknesses of a Team-Based Approach to Work

Even when using a virtual team, however, a team-based approach to work has both strengths and weaknesses. Some argue that a team-based approach is far more effective than accomplishing a complex task individually, because several people can divide the work, decreasing individual workload and contributing many different ideas to cope with the complexity of a task. Wageman (1997) stated that multiple viewpoints are more suitable when the task is complicated, and Klein (2005) similarly argued that multiple people are required to carry out a task when the workload is excessive. Working in a team on a complex task also increases creativity: Amabile et al. (1996) state that teamwork increases creativity because members have diverse backgrounds and because members’ ideas are challenged by others within the team in pursuit of common goals. Furthermore, Moreland (2006) explains that working in a team increases members’ ability to remember and recall important project information. Because members of a group are aware of each other, different members remember different pieces of information better than others would; if one member forgets a piece of information, another may remember and recall it due to goal interdependence. The degree of goal interdependence has a significant impact on all members of a team: a high level of goal interdependence enhances team members’ execution of current tasks (Emans et al., 2001). The authors argue that a high degree of goal interdependence promotes cooperation amongst group members, improving performance when carrying out projects, and that this greater execution of interdependent tasks is positively correlated with group members’ job satisfaction.

However, using a team is not always the most efficient way to complete a task, as each member brings a different perspective. In team discussions these differing perceptions make decision-making time-consuming: Hinsz et al. (2003) noted that teams are highly deliberative in operation, so their decisions are slow, whereas an individual’s decision-making process is much faster. Cognitive thinking is also impeded by the way people communicate in teams (Cooke et al., 2013). Diehl and Stroebe (1987) elaborated that the communication of ideas and knowledge interrupts cognitive thinking by preventing other team members from generating ideas: because only one person in a team speaks at a time, the first speaker’s idea is planted first, crowding out others’ thoughts. Members also struggle to challenge a group decision once it is being carried out; even if the decision is working out poorly, group members will generally fail to propose alternative strategies. Hinsz (2015) stated that teams cause members to lose their own self-awareness, so that even a member who knows a team decision is incorrect or working ineffectively will not query it.

One of the most substantial disadvantages of working in a team is social loafing, where one member of a large team exerts much less effort than the rest. This not only causes frustration among other members but also reduces the quality of the project. Harkins et al. (1979) found that in larger groups the average performance of each person decreased, explaining that some individuals felt they could slack while remaining undetected within the group. Now more than ever, however, it is difficult for individuals to escape being called out on social loafing if a project team is managed effectively through technology. With the innovation of cloud-based, continuously editable software such as Google Docs, e-leaders such as Phil Jones can continuously check the pace of work uploaded by team members, ensuring work is completed accurately, creatively and at an appropriate pace to meet deadlines. Such tools also let leaders see which individual team members are doing the majority of the work, allowing social loafers to be pulled up through virtual channels.

Conclusions

In conclusion, a team-based approach to work, though popular, can be inefficient if team leaders fail to assert their authority and leadership skills in the early forming and storming stages, hindering their team from reaching the performing stage as defined by Tuckman and Jensen’s stages of group development. However, should interventions such as a virtual team be put in place, teams can overcome the variety of social and communicative challenges facing a failing dispersed team, as defined by Hackman’s work and exemplified by The Gulf Project Team. Virtual teams can thus allow teams to perform effectively, and this review of the literature concludes that a team-based approach to project work is effective within modern organisations, owing to recent technological advances.

References

Amabile, T. M., Conti, R., Coon, H., Lazenby, J., & Herron, M. (1996) “Assessing the work environment for creativity” Academy of Management Journal, 39(5), pp. 1154-1184.
Avolio, B. J, Kahai, S, & Dodge, G. E. (2001) “E-Leadership: Implications For Theory, Research and Practice” Leadership Quarterly, 11(4), pp.616-688
Baldwin, T. T, Bommer, W. H., & Rubin, R. S. (2008) Developing management skills. New York: McGraw-Hill Irwin.
Bonebright, D. A. (2010) “40 years of storming: a historical review of Tuckman’s model of small group development” Human Resource Development International, 13(1), pp.111-120.
Bragg, T. (1999) “Turn Around An Ineffective Team” IIE Solutions, 31(5), pp.49-51
Burnelle, E. (2012) “Virtuality In Work Arrangements and Affective Organisational Commitment” International Journal of Business and Social Science, 3(2), pp.56-62.
Callanhan, G. A. (2004) “What Would Machiavelli Think? An Overview Of The Leadership Challenges In A Team-based Structure” Team Performance Management, 10(3-4), pp.77-83.
Cooke, N. J, Duran, J. L, Gorman, J. C., & Myers, C. W. (2013) “Interactive Team Cognition” Cognitive Science, 37(2), pp. 255-285.
Cummings, J. N. (2011) “Geography Is Alive And Well In Virtual Teams” Economic and Business Dimensions, 54(8), pp.24-26.
Diehl, M, & Stroebe, W. (1987) “Productivity loss in brainstorming groups: Toward the solution of a riddle” Journal of Personality and Social Psychology, 53(3), pp. 497-509.
Emans, B. J. M, Vegt, G. S, & Vliert, E. (2001) “Patterns of interdependence in work teams: a two-level investigation of the relations with job and team satisfaction” Personnel Psychology, 54(1), pp.51-69.
Fapohunda, T. M. (2013) “Towards Effective Team Building in the Workplace” International Journal of Education and Research, 1(4), pp.1-12.
Gallenkamp, J. V, Kosgaard, M. A, Assmann, J. J, Welpe, I, & Picot, A. O. (2011) Talk, Trust, Succeed – The Impact of Communication In Virtual Groups on Trust in Leaders and on Performance, Unpublished Working Paper, Ludwig Maximilians University of Munich.
Gassman, O, & Von Zedtwitz, M. (2003) “Trends and Determinants Of Managing Virtual R&D Teams” R&D Management, 33(3), pp.243-262.
Gersick, C. J. G. (1988) “Time and Transition in Work Teams: Toward a New Model of Group Development” Academy of Management Journal, 31(1), pp. 9-41.
Gilley, A, & Kerno Jr, S. J. (2010) “Groups, Teams, and Communities of Practice: A Comparison” Advances in Developing Human Resources, 12(1), pp. 46-40.
Hackman, J. R. (1998) “Why Teams Don’t Work”, in Tindale, R. S., ed. Theory And Research In Small Groups, New York: Plenum Press, pp. 245-267.
Harkins, S, Latané, B, & Williams, K. (1979) “Many hands make light the work: The causes and consequences of social loafing” Journal of Personality and Social Psychology, 37(6), pp. 822-832.
Hinsz, V. (2015) “Teams as technology: strengths, weaknesses, and trade-offs in cognitive task performance” Team Performance Management: An International Journal, 21(5/6), pp. 218-230.
Hinsz, V. B., Kameda, T., & Tindale, R. S. (2003) “Group decision making” In: M. A. Hogg & J. Cooper, eds. Sage Handbook of Social Psychology. London: Sage, pp. 381-403.
Kayworth, T. R, & Leidner, D. E. (2002) “Leadership Effectiveness In Global Virtual Teams” Journal Of Management Information Systems, 18(3), pp.7-40.
Klein, G. (2005) “The strengths and limitations of teams for detecting problems” Cognition, Technology & Work, 8(4), pp. 227-23.
Lewis, T. (2011) “Assessing social identity and collective efficacy as theories of group motivation at work” International Journal of Human Resource Management, 22(4), pp. 963-980.
Lilian, S. (2014) “Virtual Teams: Opportunities and Challenges For E-Leaders” Procedia – Social and Behavioural Sciences, 110(32), pp.1251-1261.
Leonard, B. (2011) “Managing Virtual Teams” HR Magazine, 56(6), pp.39-42.
Matsudaira, K. (2016) “Delegation as Art” Communications of the ACM, 59(5), pp. 58-60.
Moreland, R. L. (2006) “Transactive Memory: Learning Who Knows What in Work Groups and Organisations” In: Small Groups. New York: Psychology Press, pp. 327-346.
Parham, P. A., & Muller, H. J. (2008) “Review of Workforce Diversity Content in Organisational Behaviour Texts” Academy of Management Learning and Education, 7(3), pp. 424-428.
Poel, F. M, Stoker, J. I, & Van der Zee, K. I. (2014) “Leadership and Organisational Tenure Diversity As Determinants Of Project Team Effectiveness” Group and Organisation Management, 39(5), pp.532-560.
Purvanova, R. K, & Bono, J. E. (2009) “Transformational Leadership In Context: Face-To-Face And Virtual Teams” Leadership Quarterly, 20(3), pp.343-357.
Schaffer, S. P, Chen, X, Zhu, X, & Oakes, W. C. (2012) “Self-efficacy For Cross Disciplinary Learning In Project-Based Teams” Journal Of Engineering Education, 101(1), pp.82-94.
Smith, D. H. (1967) “A Parsimonious Definition of “Group:” Toward Conceptual Clarity and Scientific Utility” Sociological Inquiry, 37(2), pp. 141-168.
Spicker, P. (2012) “Leadership: A Perniciously Vague Concept” International Journal Of Public Sector Management, 25(1), pp.34-47.
Tuckman, B. W. (1965) “Developmental sequence in small groups” Psychological Bulletin, 63(6), pp. 384–399.
Tuckman, B. W., & Jensen, M. C. (1977) “Stages of small-group development revisited” Group & Organization Studies, 2(4), pp. 419-427.
Uber Grosse, C. (2002) “Managing Communication Within Virtual Teams” Business Communication Quarterly, 65(4), pp.22-38.
Wageman, R. (1997) “Critical success factors for creating superb self-managing teams” Organizational Dynamics, 26(1), pp. 49-61.
Zaccaro, S. J, & Bader, P. (2002) “E-Leadership and the Challenges of Leading E-Teams: Minimising The Bad and Maximising The Good” Organizational Dynamics, 31(4) pp.377-387.


Construction of identity and images of the communion in post-colonial Indonesia

Coexist with colonist; Construction of identity and images of the communion in post-colonial Indonesia.

Subject area, aims and objectives

Subject Area

Britain, the Netherlands and Japan are the three empires which politically colonised Indonesia for more than 100 years. After gaining independence in 1945, Indonesia began constructing its identity to assert its power and gain recognition from other countries. During the transition period, however, it is possible that this identity was formed partly through assimilation with the former colonisers, by adjusting and adapting the colonial identity they left behind. During the colonial period, the relationship between the individual and socio-cultural space was shaped into a dual, hybrid position, a hybridity that came to represent the identity of the Indonesian communion.

This research examines the visual representation of the Indonesian communion. The visual identity used by the state, such as the national emblem, currency design, military crests, and maps (the last of which is often regarded as a gift of Western imperial science and technology), is a construction of national identity through symbolism that represents Indonesia in international circles.

Aims

Examine identity formation and transition in pre- and post-colonial Indonesia (the decade 1940–1950), which may generate insight into how the hybrid of two visual identities, Indonesia’s and the colonisers’, coexisted and later built the image of the communion in Indonesian minds.
Utilise graphic design studies to excavate the complexity of identity construction during the postcolonial period, which cannot be understood through conventional historical narrative.

Objectives

Examine and explore the sources found, and learn methods of combining visual identification, such as symbols, colour or visual style, considered to share a universal visual language and mutual value for both sides.
Deconstruct how products of science and technology (such as the map) were perceived and accepted by the natives and later adapted as part of their identity.
Experiment and iterate with the deconstruction method.
Examine the tools used by the government to certify identity and nationality during the decades of transition (a contemporary example being the passport).
Identify how national identity intertwines with and influences personal details (such as the ID card and passport), which developed the image of belonging to the community.

Historical context

Postcolonial History

The research begins with the study and history of postcolonialism, commonly understood as the aftermath of Western colonialism or of various forms of imperialism, whether represented as a historical period or a state of affairs. Some argue, however, that postcolonialism is frequently misunderstood as a merely temporal concept: the time after colonialism has ceased, or the time following the politically determined Independence Day on which a country breaks away from governance by another state. Gilbert and Tompkins (1996) suggested that a theory of post-colonialism must, then, respond to more than the merely chronological construction of post-independence, and to more than just the discursive experience of imperialism. Postcolonial theory thus establishes intellectual spaces for subaltern peoples to speak for themselves, in their own voices, and so produce cultural discourses of philosophy, language, society and economy, balancing the imbalanced us-and-them binary power relationship between the colonist and the colonial subjects.

Postcolonial studies also indicate a possible future beyond colonialism, anticipating potential new forms of global empire and new forms of domination and subordination (Encyclopædia Britannica, 2018).

Postcolonial Theory

Postcolonialism aims at destabilising the theories (intellectual and linguistic, social and economic) by means of which colonialists “perceive”, “understand” and “know” the world.

Postcolonial Identity

Decolonised people develop a postcolonial identity based on interactions between different identities (cultural, national and ethnic, as well as gender- and class-based) which are accorded varying degrees of social power by the colonial society. In postcolonial literature, the anti-conquest narrative analyses the identity politics that constitute the social and cultural perspectives of the subaltern colonial subjects: their creative resistance to the culture of the coloniser. It examines how such cultural resistance complicated the establishment of a colonial society; how the colonisers developed their postcolonial identity; and how neocolonialism actively employs the us-and-them binary social relation to view the non-Western world as inhabited by The Other.

However, postcolonial theory is somewhat problematic. John Lye (1997) notes that the theory deals with the reading and writing of literature written in previously or currently colonised countries, or of literature written in colonising countries which deals with colonisation or colonised peoples. Post-colonial theory focuses particularly on:

the way in which literature by the colonising culture distorts the experience and realities, and inscribes the inferiority, of the colonised people;
literature by colonised peoples which attempts to articulate their identity and reclaim their past in the face of that past’s certain otherness.

It can also deal with the way in which literature in colonising countries appropriates the language, images, scenes, traditions and so forth of colonised countries.

The Marxist scholar Vivek Chibber (2013) argues that postcolonial theory will be remembered for its revival of cultural essentialism and for acting as an endorsement of orientalism rather than an antidote to it. It essentialised cultures, painting them as fixed and static categories, and presented the difference between East and West as unbridgeable. In his book Postcolonial Theory and the Specter of Capital, Chibber focuses mainly on the strain of postcolonial theory known as subaltern studies, and makes a strong case for why we can, and must, conceptualise the non-Western world through the same analytical lens that we use to understand developments in the West.

Contemporary Context

Nina Katchadourian

Hand-held Subway, Geographic Pathologies, Finland’s Longest Road, Finland’s Unnamed Islands, Head of Spain. 1996-2008.

These works by Nina Katchadourian explore cartography. She deconstructs existing maps and atlases, of the New York subway system, Finland’s highways and a Spanish paper road map, to create new possibilities of meaning and generate new ways of seeing things.

Meta Haven Sealand Identity Project

Meta Haven collaborated on the Sealand Identity Project, which was to conceive a national identity for the Principality of Sealand, a self-proclaimed nation on a former war platform near the coast of the UK.

The Sealand Identity Project combined the idea of sovereignty and self-proclaimed nationhood with the flawed entrepreneurial dream of starting an offshore business onboard Sealand.

Theoretical Context

Critical Theory

History of Politics and Identity

Jonathan Friedman (1994) points out that there were two aspects of the relation between social identification and the making of history. The first concerned the relationship between the social conditions of identity formation and the production of a culturally viable past. The second introduced so-called scientific constructions of other people’s pasts into the same frame of argument.

In the Journal of the Society for Cultural Anthropology, Friedman (1992, p. 41) acknowledges that history, and discourse about the making of history, is positional: it depends upon where one is located in social reality, within society, and within the global process. The idea applies even to the present discourse, which in no way represents an attempt to stand in some objective truth-sphere above or outside the goings-on of the world. Objective history, like any other history, is produced in a definite context and is a particular kind of project.

He further suggested that the discourse of history, as well as of myth, is simultaneously a discourse of identity: it consists of attributing a meaningful past to a structural present. Objective history is produced in the context of a particular kind of selfhood, one based on a radical separation of the subject from any particular identity, which objectifies and textualises reality.

Imagined Community

A country newly liberated from its former coloniser struggles to define its own political identity and build its image of communion. Because it builds that identity on top of the ruins of the existing colonial structure, it is impossible to fully eradicate the former identity, even though that identity is arguably already a hybrid of different cultures. It should be understood, moreover, that images of the communion were built not only from references within the community itself but also under external influence. Benedict Anderson’s theory of the identity of a community fits well to depict the condition of an emerging, newly independent nation.

Anderson (1983, p. 6) defines the nation as “an imagined political community – and imagined as both inherently limited and sovereign…It is imagined because the members of even the smallest nation will never know most of their fellow-members, meet them, or even hear of them, yet in the minds of each lives the image of their communion”. Anderson sees the nation as a social construct, an “imagined community” in which members feel a commonality with others, a “horizontal” comradeship, even though they may never know them. The lasting appeal and political resilience of the nationalism of a newly independent nation affirm the strength of patriotic feeling and the enormous sacrifices people have made on behalf of their nation.

In the chapter “The Origins of National Consciousness”, Anderson argues that the convergence of capitalism, printing, and the diversity of vernacular languages led to the birth of national consciousness. Popular nationalism threatened to exclude the European monarchies from the new imagined communities, as the dynasties had dubious and often conflicting national credentials. They responded with what Anderson terms “official nationalism”, a Machiavellian appropriation of nationalist ideas to secure dynastic legitimacy and suppress ethnolinguistic subject groups within their realms. In the European colonial empires, official nationalism served as a tool of the imperial administration.

Census, Map, Museum.

More specifically, Anderson introduces three institutions of power, the census, the map and the museum, which profoundly shaped the way in which the colonial state imagined its dominion and the legitimacy of its ancestry. As this research emphasises a more pragmatic, visually based identity, the writer considers it most fruitful to examine the map in depth. This choice also reflects the limited capacity of the author, which does not allow further research on the census and the museum.

It could be said that the Mercatorian map, brought in by the European colonisers via print, began to shape the imagination of Southeast Asians, including Indonesians (Anderson, 1983, p. 247). In terms of most communication theories and common sense, a map is a scientific abstraction of reality: it merely represents something which already exists objectively “there”. Anderson (1983) points out: “In the history I have described, this relationship was reversed. A map anticipated spatial reality, not vice versa. In other words, a map was a model for, rather than a model of, what it purported to represent… It had become a real instrument to concretise projections on the earth’s surface… The discourse of mapping was the paradigm which both administrative and military operations worked within and served.”

Map as a Logo

As an administrative and military tool, the map acquired the ability to act as a second avatar of a nation or empire: the map-as-logo. Its origins were reasonably innocent: the practice of the imperial states of colouring their colonies on maps with an imperial dye. British colonies were usually pink-red, French purple-blue, Dutch yellow-brown, and so on (Anderson, 1983, p. 250). The map becomes a pure sign, no longer a compass to the world. As the map entered an infinitely reproducible series, available for transfer to posters, official seals, letterheads and magazines, which made it instantly recognisable and visible, the logo-map penetrated deep into the popular imagination, forming a powerful emblem for anticolonial nationalism.

One of the best-known examples of this process is what happened on the island of New Guinea. The Dutch Empire settled western New Guinea and succeeded in incorporating it into the Netherlands Indies in 1901, in time for Dutch logoization. Dutch colonial logo-maps spread across the colony, showing a West New Guinea with nothing to its east, and unconsciously reinforced the developing imagined ties among Indonesian nationalists. Although Indonesian nationalists struggled for it and made it a sacred site in the national imagining, they never actually saw New Guinea with their own eyes until the 1960s.

Anderson (1983, p. 251) then relates that “the prestige of the colonial state was accordingly, now intimately, linked to that of its homeland superior.” As more and more Europeans were born in Southeast Asia and tempted to make it their home, the old sacred sites were incorporated into the map of the colony, and their ancient prestige (which, if it had disappeared, as it often had, the state would attempt to revive) was draped around the mappers.

The “warp” of this thinking was a totalizing classificatory grid, which could be applied with unlimited flexibility to anything under the state’s real or contemplated control: peoples, regions, religions, languages, products, monuments, and so forth. The effect of the grid was always to be able to say of anything that it was this, not that; it belonged here, not there.

Parallel Theory

Cartography

To provide a profound understanding of the map and its influence on the construction of national identity, the writer turned to the study of cartography. While the map, as discussed above, can bear witness to power, it can also produce a language of its own. The Polish-American philosopher Alfred Korzybski’s theory of general semantics states that human knowledge is limited both by our physical being and by the structure of language. Though the human experience of reality is limited, we increasingly see the world through more maps, bigger maps of more data, and more maps of bigger data.

Huffman and Matthews (2014) write that, “Cartographers have always been storytellers. This metaphor works well for thematic maps, but topographic or reference maps also tell stories: of the landscape, of the settlement, and of the shape of the natural and human-modified world that surrounds us… Cartographers take data and wrestle it before applying some graphical treatment that provides the narrative. They codify the story in a visual language that they hope speaks to people.”

While cartography can promote scientific objectivity over artistic representation and vice versa, scientific objectivity does not always match actual representation; as the metaphor invoked in this work has it, the map does not always mean the territory. Like any other tool that generates knowledge, maps are informative, but they can also be deceptive, even threatening. At one time or another, it is probably safe to say, all of us have been misled by a map designed to hide something the mapmaker did not want us to know, or drawn in such a way that we jump to false conclusions from it.

H. J. de Blij (1996, pp. xi-xii) points out that maps cross the line between information and advocacy, adding that in a world of changing political and strategic relationships and devolving nation-states, maps become propaganda tools. Some national governments even go so far as to commit cartographic aggression, mapping parts of neighbouring countries as their own. Turkish Cypriots, Sri Lankan Tamils and Crimean Russians publish maps that proclaim their political aspirations, fuelling nationalisms that spell disaster for the state system.

When the research goes further into the capability of a map to manipulate or alter fact, it leads to an intriguing book by Dr Mark Monmonier, How to Lie with Maps. In this book, Monmonier (1996, p. 2) acknowledges that in showing how to lie with maps, he wants to make readers aware that maps, like speeches and paintings, are authored collections of information and are subject to distortions arising from ignorance, greed, ideological blindness, or malice. The idea may seem uncomfortable and hard to accept, as it carries a sense of offensiveness. However, he provides a stunning yet straightforward analogy: the relationship between map and scale, and its capacity to define truth.

He takes the following example: a square inch on a large-scale map can show a square inch of ground in far greater detail than a square inch on a small-scale map. Both maps have to suppress some details, but the designer of the 1:10,000,000-scale map must be far more selective than the cartographer producing the 1:10,000-scale map. In the sense that all maps tell white lies about the planet, the small-scale map has a smaller capacity for truth than the large-scale map.

That is the gentlest way in which maps tell lies; what about the others, such as maps made for political propaganda? A good propagandist knows how to shape opinion by manipulating maps. Political persuasion often concerns territorial claims, nationalities, national pride, borders, strategic position, conquests, attacks, troop movements, defences, spheres of influence, regional inequality, and other geographic phenomena conveniently portrayed cartographically (Monmonier, 1996, p. 87).

People trust maps, and intriguing maps attract the eye as well as connote authority. The map is a perfect symbol of the state and an intellectual weapon in disputes over territory. Naïve citizens willingly accept as truth maps based on a biased and sometimes crooked selection of facts.

Maps as Symbols of Power and Nationhood

The string of newly independent states formed after World War II, such as Indonesia, revived the national atlas as a symbol of nationhood. In the service of the state, maps and atlases play dual roles. Monmonier's (1996, p. 89) research confirms that although a few countries in western Europe and North America had state-sponsored national atlases in the late nineteenth and early twentieth centuries, these served mainly as reference works and symbols of scientific achievement. Between 1940 and 1980, however, the number of national atlases increased from fewer than twenty to more than eighty, as former colonies turned to cartography as a tool of both economic development and political identity.

Even tiny maps on postage stamps can broadcast political propaganda. Postage stamps bearing maps are useful propaganda tools for developing nations and ambitious revolutionary movements: carried on mail, they keep aspirations alive domestically and suggest national unity and determination internationally. Postage-stamp maps afford a small but numerous means for asserting territorial claims (Monmonier, 1996, p. 91). The competing claims of India, Pakistan and China offer an excellent example. Official Indian government tourist maps show Kashmir as part of India, while Pakistani maps show it as part of Pakistan. In reality, India controls the southern part of the state of Kashmir, Pakistan controls the northwestern part, and China controls three sections along the eastern margin. Another example is the Ligitan and Sipadan dispute, a territorial dispute between Indonesia and Malaysia over two islands in the Celebes Sea. The dispute began in 1969, after Malaysia put the islands on its official passport and tourism maps. It was largely resolved by the International Court of Justice (ICJ) in 2002, which held that both islands belonged to Malaysia because the British Empire, its former colonial power, had exercised administrative authority over them since the 1930s.

The examples above capture, if only partially, how state intervention, wars, colonialism and national planning intertwined with mapping activities. These activities of the major powers, however, were not confined to their colonial territories, the very existence of which had left them with global rather than local strategic preoccupations. In Maps and Air Photographs, Dickinson (1979, p. 48) states that, stimulated by various motives, among which the discovery of potentially exploitable areas and resources and the complete delineation of boundaries against possible counter-claimants are two obvious ones, most European nations with colonial possessions carried out surveys in them, often very actively. At first, both the maps themselves and the bodies that produced them varied somewhat.

Methodology

The initial stage of the research emphasizes experimental design, since this approach requires a careful balancing of several features, including “power”, generalizability, various forms of “validity”, practicality and cost. A thoughtful balancing of these features in advance will result in an experiment with the best chance of providing useful evidence to modify the current state of knowledge in a particular design field. The goal is to actively design an experiment that has the best chance of producing meaningful, defensible evidence, rather than hoping that proper statistical analysis can correct for defects after the fact.

Within this experimental frame, the deconstructive method is the best fit for the questions this research poses. It is a strategy of critical form-making performed across a range of artefacts and practices, both historical and contemporary. Deconstruction was born to uncover the meaning of a literary work by studying the way its form and content communicate essential humanistic messages.

Lupton and Miller (1994) argue that deconstruction offers a mode of questioning through and about the technologies, formal devices, social institutions, and founding metaphors of representation, and that deconstruction belongs to both history and theory. In Derrida’s theory, deconstruction asks how representation inhabits reality. How does the external image of things get inside their inner essence? How does the surface get under the skin?

In examining how the identity of a community is constructed, it is important to trace its source, establish its authenticity, and question the telling of a story viewed as a passive record of events. The research expects to gain substantial results and new insight by studying the meaning of a sign and its relationship to other signs in a system. This principle is the basis of structuralism, an approach to language which focuses on the patterns or structures that generate meaning rather than on the “content” of a given code or custom (Lupton and Miller, 1994).

How does the theory relate to the practical experimentation?

Experimenting with deconstruction benefits the research by enabling widespread disruption, founded on a challenged and remodelled idea of what an existing idea or design can do and bring.

What is the theory for?

It serves as a platform for the iterative process. The fundamental principles of deconstruction, and the way the deconstructive method works, help to maintain the system during experimentation and to record thoughts for future transmission.

What process of experimentation will be used?

Experimenting by deconstructing existing visual material, trying different approaches to generate possible outcomes, and utilising the basis of structuralism: an approach to language which focuses on the patterns or structures that generate meaning rather than on the “content” of a given code or custom.

How will the project be recorded, and how will it keep track of what has been done?

Documentation by photograph, video, scanned artefact and scheduled digital/printed publication.

Visual evidence

1st Iteration on deconstruction method: Deconstruction of Indonesia’s National Emblem.
2nd Iteration on deconstruction method: The study of Colonized and Colonist Map.


Niccolo Machiavelli’s The Prince – leadership and power

Niccolo Machiavelli’s The Prince is one of the most controversial books of its time. Because of its contents, Machiavelli is seen by many as a symbol of evil and vice. The book was thought so abhorrent that it was banned by the Catholic Church and harshly critiqued by many of Machiavelli’s contemporaries. The sixteenth-century treatise was meant as an advice book for princes on how to gain power and maintain it, but the methods he proposed for achieving these aims were unsavory to many. In the years following its publication, The Prince horrified and shocked the general populace because it challenged the prevailing view that a leader had to be virtuous and moral, asserted that it was better for a leader to be feared than loved, challenged the idea that a ruler gained his power from divine right alone, and proposed that a ruler might employ unethical actions to secure his position and better his country.

One of the first things Machiavelli tried to do in his treatise was to separate ethics from princes. While many of his contemporaries believed that a successful prince would be one filled with the usual virtues, like honor, purity, and integrity, Machiavelli threw this idea out the window. He did not believe that simply having the “right” value system would grant a leader power and security. In fact, he argued that being tied down by such morals would often be counterproductive to maintaining one's position: “if a ruler wishes to reach his highest goals he will not always find it rational to be moral” (Skinner 42).

So, what characteristics did Machiavelli think would actually make a strong leader? His ideal prince is one who is cunning and ruthless. Machiavelli believed that “a ruler who wishes to maintain his power must be prepared to act immorally when this becomes necessary” (26). A ruler should also not worry about being miserly, for overall this will help rather than hurt his control (Mansfield). If a prince is too generous, his people will become accustomed to such generosity and be angered when it is not forthcoming, and in the long run he will have to tax his people to make up for what he has given away. Such ideas went directly against the Christian and humanist ideas about morality of Machiavelli’s time.

Another major point of interest that Machiavelli discussed throughout The Prince was the concept of fortune and its role in a prince's rule. He believed it was of the utmost importance that a prince try to win fortune to his side as best he can. Here again, Machiavelli differed from his predecessors. Many past philosophers believed that fortune would smile upon a ruler who was just and virtuous. Machiavelli disagreed: morals had nothing to do with pleasing fortune. Instead, it was the more violent and ambitious ruler, the one who would seize the moment, who had the better chance of winning fortune (Spencer). Machiavelli went so far as to compare fortune to a woman, stating, “If you want to control her, it is necessary to treat her roughly” (87).

While Machiavelli did not think it was in a prince’s best interest to always be kind and good, he did note the importance of his subjects thinking him to be so. It is very hard to hold control over a region in which the people believe their ruler to be completely immoral. However, they may put up with a ruler's questionable actions if once in a while he does something that appears to be in their best interest. The crueler a ruler is, the more crucial it is for him to appear to the public as the opposite. Once the people are convinced of a ruler's virtue, he will be able to get away with the most unscrupulous behavior.

Most people would consider it essential for a ruler to keep his promises and appear trustworthy, maintaining a good relationship with his subjects; not Machiavelli. Sometimes it is not realistic for a ruler to make good on every promise, and it may even be better for the people in the long run if he does not. A prince should not have qualms about breaking his word, for “plausible reasons can always be found for such failure to keep promises” (Machiavelli 62). Moreover, if a prince prides himself on always keeping his word, the people will always expect it. When unfortunate circumstances force him to deviate from what he swore to do, the people will be outraged, whereas if they expect promises to be broken it will not garner as much anger.

Another staple argument of Machiavelli’s book is the power of fear. Machiavelli believes fear is one of the best ways to keep subjects in line: fear is the strongest of all the emotions and will give a ruler the most control. Striving for the people’s love is not as fruitful, due to mankind’s fickle nature. Andrew Curry of the Washington Post notes that for Machiavelli, “Man’s weak nature was a constant as unchanging as the bright sun that rose above his beloved Tuscan hills.” A leader who relies on love to gain loyalty from his subjects will find his people nowhere to be found when hard times come. Men tend toward what they think is best for them, and because of this they will change sides quickly. They will adopt a new prince and shed their old one if they believe it will be prosperous for them. However, if subjects greatly fear their leader, they are more likely to obey him. If they believe their ruler to be lax, they will think they can get away with some disobedience, but if a prince has made it clear that the consequences will be great, they will hesitate (Machiavelli).

One of the main ways Machiavelli demonstrates the power of fear is through generals and their handling of the troops under them. He praises the Carthaginian general Hannibal for his ability to lead a large army of various peoples with little discord or trouble among his troops. Despite going through many lands unknown to his soldiers, and enduring times of trial, Hannibal was able to keep his soldiers in order because of their respect for and fear of him (Machiavelli 60). How did Hannibal make his troops fear him? Through great cruelty, which made him the perfect Machiavellian leader. It was this cruelty that was key to his success according to Machiavelli, who argued that “if he had not been so cruel, his other qualities would not have been sufficient to achieve that effect” (60).

Scipio was another general of the same period as Hannibal. Like Hannibal, he was a brilliant military mind and one of the greatest leaders of the era. Unlike Hannibal, however, he did not exercise brutality to keep his troops in check. Whereas Hannibal’s troops would never have dreamed of revolting, for fear of the consequences, Scipio did lose control over his soldiers at Fort Sucro in Spain. Machiavelli blamed this mutiny on Scipio and no one else: it was Scipio’s easiness with his soldiers that had caused them to grow rebellious. Had he been more severe in his command, they would have been better disciplined (Machiavelli 60). Machiavelli praises Hannibal’s cruelty while condemning Scipio’s friendliness with his soldiers.

Another aspect of the power of fear which Machiavelli touched on was the capturing of new regions. Under most circumstances, successfully maintaining control over a newly vanquished city and keeping its citizens in check can be quite difficult. However, in cases where subduing a city takes great force and bloodshed, it will actually be much easier to keep. Most would think the opposite to be true, but Machiavelli argues that those who have been defeated will be too intimidated to revolt, knowing what their conquerors are capable of (Mansfield). Machiavelli has complete faith in the power of fear. Essentially, he believes that a prince should not be concerned about being excessively brutal when trying to defeat the defenders of a town, because in the long run it may actually help him keep dominance over said town. With advice like this, advising one to be cruel, it is no surprise that Machiavelli’s contemporaries were so shocked by his treatise (Spencer).

All of Machiavelli’s pondering about fear raises the question: how far should a ruler go to be feared by his people? Machiavelli does acknowledge that there is a line that can be crossed. A prince must strive to be feared without being completely hated by his subjects (Machiavelli 59). It is fine for a leader to exercise extreme ruthlessness for the greater good as long as he is able to redeem himself in the eyes of the people. At a certain point, if pushed too far, subjects' fear of their ruler will turn to anger and they will grow unruly. Therefore it is important for a prince to be calculated with his cruelty, and not just unnecessarily brutal.

A major issue during Machiavelli’s time was the divine right to rule. Essentially, kings could justify their rule as supposedly being God’s will, and they had to answer only to Him; only those chosen by God could rule. Machiavelli did not fully agree with this doctrine. He thought that almost anyone should have the right to rule as long as they were cunning enough to do so; what mattered most to him was that leaders be competent. The foxes and lions should rise above the lambs: that is the best way for a country to be assured of gaining strong leaders. With divine right there is no guarantee that a prince will be capable of ruling and doing what is best for his people. In his own region of Florence, Machiavelli wanted a ruler who was effective, not one supposedly endowed by the Creator. All of the advice given in the book is a challenge to divine right, as it shows how someone may gain power by his own actions.

Machiavelli’s key argument against any sort of right to rule is that it is power alone that guarantees a prince his control: “a Machiavellian perspective directly attacks the notion of any grounding for authority independent of the sheer possession of power. For Machiavelli, people are compelled to obey purely in deference to the superior power of the state” (Nederman). Simply having the right virtues, divine right, or any other qualifiers of rule does not matter if one does not have true power. A prince’s subjects will stay in line if they know he has great power over them, but not always if he is relying on their respect for his “divine right” alone.

One of the main themes running throughout Machiavelli’s advice seems to be that the ends always justify the means. Even though Machiavelli never directly states this, he comes very close, and although his advice is a bit more nuanced than that simple phrase, it is not out of line to say that it represents his key ideas on princeship. Machiavelli was one of the first pessimistic realists of his time, and he based his advice on the negative side of humanity. He argued that a prince’s subjects will not always do the moral thing, and so a prince should not either. Instead, he should take whatever actions he believes best secure his rule and his province. Sacrificing a few is a necessary evil if it guarantees the safety of many (Machiavelli 58).

Machiavelli based much of his advice on real-life rulers of his time. History.com points this out, saying, “Machiavelli’s guide to power was revolutionary in that it described how powerful people succeeded—as he saw it—rather than as one imagined a leader should operate.” While his contemporaries were dreaming up the qualities of an ideal leader, Machiavelli believed he was giving a guide based on those he had seen be successful. Almost all of the leaders Machiavelli studied, he found, had exercised cruelty and brutality. Mansfield says this of Machiavelli’s points on necessary evil: “The amoral interpretation fastens on Machiavelli’s frequent resort to ‘necessity’ in order to excuse actions that might otherwise be condemned as immoral.”

One of the main rulers on whom Machiavelli based much of his advice was Cesare Borgia. Borgia was the perfect Machiavellian leader: “a crude, brutal and cunning prince of the Papal States” (History.com Editors). He lived in a chaotic time, and the entirety of his rule was faced with challenges and uncertainty. Machiavelli admired his ability to handle the problems of his times with such decisive ferocity. He embodied all the traits that Machiavelli was advising the readers of his book to adopt.

Cesare was a man with many enemies, and part of his genius lay in his ability to get rid of them. Where others would hesitate to move against powerful men, Borgia did not. He would kill remorselessly if he thought it would help him maintain his land. One of the main examples Machiavelli used to point out Borgia’s cunning was his luring of the Orsini leaders to the town of Senigallia. He lured them with lavish gifts and lulled them into a false sense of security, promising treaties of peace, but once they had delivered themselves into his hands he killed them (Machiavelli 25). Machiavelli praised this exploit, thinking it an exceptionally clever deception.

Borgia also proved his competence as a leader to Machiavelli in his handling of the land he inherited from his father, Pope Alexander VI. The people dwelling there were disorderly and defiant; they had not been well disciplined by their previous ruler and were not used to having to obey a leader. Borgia set out to right this wrong. He put an utterly ruthless man, Remirro de Orco, in charge of the area (Machiavelli 26). Many rulers would have told de Orco to use caution when dealing with the subjects of the region, slowly beginning to discipline them so that they would grow used to it over time. However, Borgia did the exact opposite. He gave his new governor complete control to be as severe and merciless as he saw necessary. He knew that the cruelty the people would endure under de Orco would be for the better down the road, as there would be more order and fewer lawbreakers.

Even though he knew it was necessary to use brutality when dealing with his newly acquired land, Borgia did not plan on taking the blame for that cruelty. De Orco’s harsh regime had brought discipline to the region, but Cesare Borgia was not blind to the growing anger of those suffering under it. Here, in Machiavelli’s mind, Borgia showed his true genius and heartlessness. He killed de Orco and displayed his body publicly, successfully winning the favor of his subjects and getting rid of a possible rival. It was Borgia who had put de Orco in charge in the first place, knowing full well that he was a cruel man, and who had told him to be a harsh ruler, but the people seemed to forget this and saw Borgia as a hero for killing their oppressor. Those subjects who still disliked Borgia were too terrified by the execution to cause any discord (Machiavelli 26). So Borgia was able to make his people both love and fear him, Machiavelli’s ideal situation. It is clear that much of Machiavelli’s argument for doing immoral things comes from his having observed Borgia and his callous methods.

Borgia may have been brilliant in handling his lands and his enemies, but it was not his own cleverness that gained him his territory in the Romagna; it was the cunning of his father, Pope Alexander VI. Alexander wanted to give his son a state in Italy to help him grow more powerful and, hopefully, eventually make him into a great ruler. However, he knew that he would not be able to do this through peaceful negotiations, as too many other factions would have opposed it. Instead, the Pope would have to use force to seize a state. First he set out to make the states of Italy unstable by aiding a French invasion of Milan. Doing this helped sow chaos, and the French gave the Pope troops with which to conquer the Romagna. The Pope was then able to transfer the newly captured states to his son (Machiavelli 24). These actions by the Pope were highly immoral: he helped sow ruin in his own country of Italy to gain a province for Cesare to rule, and he misused the power given to him by his position as Pope to do so. Yet Machiavelli praises his ability to take actions deemed unethical by society in order to attain success.

In one chapter of his treatise, Machiavelli addresses those who gained power through evil deeds. The first example he gives is Agathocles of Syracuse. Agathocles is the epitome of doing whatever it takes to get what you want. He was an ordinary man, but by his own actions he was able to rise to a position of power in the city of Syracuse. Wanting to become king of Syracuse, he began scheming how this could be accomplished. Eventually he was able to execute a successful coup and have his soldiers kill any opposers. He was dishonorable, a murderer, and a traitor, but he did achieve what he set out to do. Machiavelli does point out that these methods won’t exactly win someone glory and fame, or at least not the positive kind, but he did commend Agathocles’ ability to gain power. He also mentions that Agathocles used evil “well,” insofar as it had to be used at all (Machiavelli 30-33). Statements like this, that a murdering traitor used evil admirably, are what make Machiavelli’s writing so controversial.

Machiavelli did not stop with Agathocles; he also gave a more contemporary example of a similar situation: Oliverotto of Fermo. Oliverotto had the same cunning and ambition as Agathocles. He too wanted to become the ruler of his hometown, Fermo. So, with his mentor, he conspired to overthrow the current ruler, his own uncle, Giovanni Fogliani. Oliverotto used his relation to Fogliani to lure him into a trap, where he assassinated him along with the other leaders of Fermo. With no one else in his way, he took control of the region. His immoral actions would have been condemned by most, but Machiavelli’s main issue seems to be that he was not able to keep the power he had gained, as he was himself killed later on. Oliverotto did not use evil well as Agathocles did (Machiavelli 32-32).

Few books have the ability to stir up as much controversy as The Prince. With it, Machiavelli tried to set a new example of how a prince should act and think, but one that would be found troubling by many in the decades that followed its publication. Its readers would shun it, ban it, mock it, and even go so far as to call it satire, because surely there was no way that Machiavelli had actually meant what he wrote. The main cause of all the animosity toward the book came from Machiavelli’s attempt to separate ethics from politics. In the treatise he argued that princes need not be virtuous, and that fear was a great tool for controlling one’s subjects, better even than love. Furthermore, the book challenged divine right, which put it at odds with the churches of the time, and lastly, it promoted the idea of using unscrupulous methods to gain power. It is the combination of these four arguments, so contrary to the prevailing ideologies of the sixteenth century, that caused many to look at the book with disgust, and that is why Machiavelli became known as an embodiment of evil.


Organizational change – responding to internal drivers

Organizational change in any business organisation is predominantly influenced by two forces: internal drivers and external drivers. Both can have favourable as well as unfavourable impacts on organisational change. However, this essay will argue that it is more beneficial for organisations to introduce changes based on their internal drivers, because these lie within the organisation and within management's control, whereas the external drivers are beyond the organisation's control.

In this intensely competitive and globalised world of business and management (Mdletye, Coetzee and Ukpere 2014), organisational change is critical and indispensable for numerous competitive advantages. Companies of all kinds must therefore either initiate change or face natural death (Kotter and Cohen 2008). Hence, although change is a task fraught with complexity and challenge (Graetz et al. 2011, p.2), it has become an inevitable phenomenon for the successful survival of organisations in the modern world.

Organisational change is the continuous process of renewing the firm's direction, structure, capabilities, operations, systems and processes to meet the ever-changing needs of external and internal customers (Soosay and Sloan 2005, p.10). It is the movement of an organisation away from its present status quo (Smith 2005) toward some desired future state in order to increase its effectiveness (Lunenburg 2010). Nevertheless, as most researchers have found, in reality adopting new changes in an organisation is very difficult and of doubtful success (Robbins 2003 and Raftery 2009, as cited in Beshtawi and Jaaron 2014, p.129), and often ends in failure (Olaghere n.d., p.1; Gilaninia, Ganjinia and Mahdikhanmahaleh 2013). Therefore, in this increasingly uncertain and risky environment (Zhou, Tse and Li 2006, p.248), knowing how to adapt and change according to the environment, and to change successfully, has become a critical and timeless challenge for any organisation (Feldman 2004; Pettigrew et al. 2001; Piderit 2000) seeking continuous survival and success.

Organisational change is influenced dominantly by two sets of factors: internal factors or internal drivers, and external factors or external drivers (Esparcia and Argente n.d.; Olaghere n.d., p.1). These factors are responsible for triggering change in the system, policies, products, structures, services, management and performance, among many other areas of the organisation (Senior 2002, as cited in McGuire and Hutchings 2006). Ivancevich and Matteson (2002) consider technology, economic forces and socio-political and legal factors to be important external drivers of organisational change. However, they argue that these external drivers are beyond management’s control and cause a significant impact, compelling the organisation to adjust its internal processes and systems (McGuire and Hutchings 2006). Conversely, internal drivers are those forces existing within the organisation that influence change: its systems, structure, management style, leadership, resources, processes and products (Esparcia and Argente n.d.).

However, internal factors are more critical in driving organisational change. Ivancevich and Matteson (2002) maintain that human resource issues and process considerations are the most common forces for change within the organisation. They argue that internal factors are generally within the control of management, although they can sometimes be more difficult to recognise and diagnose than external factors (McGuire and Hutchings 2006).

External factors are more diversified and intractable compared to internal drivers (Yu and Zhang 2010, p.3). Internal drivers of change are easily influenced by external environments such as politics, the economy, technology, law and society.

External factors help to determine the opportunities and threats the company faces, while internal factors help the company to identify its strengths and weaknesses (Ibrahim and Primiana 2015, p.285). Marcus (2005) (as cited in Ibrahim and Primiana 2015, p.285) noted that organisations should be aware of their strengths and weaknesses and analyse the extent to which they can accommodate the opportunities and threats existing in their external environment.

Anderson and Anderson (n.d.) assert that the most common reason for failure in managing change within organisations is inadequate attention to the less tangible, yet very important, internal drivers such as culture, leader and employee behaviour, and mindset. The benefits of concentrating on internal drivers rather than external drivers are therefore evident. This is supported by Kotter and Cohen (2008, p.61), who argue that managers must instigate change by creating a sense of urgency that touches the emotions of employees, rather than offering reasons based on facts and figures. This is possible only through change in the internal factors of the business enterprise.

Many scholars agree that internal factors are the key determinants of an organisation’s performance (Kinyua-Njuguna, Munyoki and Kibera 2014, p.289), as they provide an enabling environment for achieving its goals and objectives. Internal environmental forces give the business its strengths and weaknesses (Tolbert and Hall 2009, as cited in Kinyua-Njuguna, Munyoki and Kibera 2014). For example, in their study of the effect of internal drivers on community-based HIV and AIDS organisations in Nairobi County, Kenya, Kinyua-Njuguna, Munyoki and Kibera (2014) found that internal drivers such as organisational structure, strategy, skills, staff, shared values and systems helped the organisations achieve their objectives and, as a result, enhanced employee performance.

The resource-based view (RBV) theory, propounded by Penrose (1959) (as cited in Kute & Upadhyay, 2014, p.68), holds that organisations can gain competitive advantage by concentrating on internal factors such as abilities, skills, knowledge, capabilities and competencies with reference to technological changes. Strengths and weaknesses in these areas can be managed, so the need to enhance these qualities within employees can be determined and met through a continuous organisational learning culture. Furthermore, factors such as mission and goals, leadership quality, organisational structure, human resources, technological capacity, organisational culture, employee behaviour and attitudes, and organisational performance have to be considered when introducing change in the organisation.

Organisation Vision, Mission, Goals and Objectives

Every business organisation is guided by its mission, goals and objectives pertaining to development philosophy and direction, planning, the prioritising of programmes, policies, management, organisational structures and everyday responsibilities (Emeka and Eyuche, 2014). In a nutshell, the performance of the company depends on its mission, goals and objectives. Therefore, change in these domains would compel the firm to undertake organisational change in order to achieve them.

Leadership

Leadership is one of the most important internal factors in organisational change (Lunenburg, 2010). Leaders have an important role in maintaining a measure of control over the environment of the organisation (McGuire and Hutchings, 2006, p.197). The sixteenth-century political theorist Niccolò Machiavelli stressed that the leader's vision and future plans are critical in determining the shape and structure of the organisation (McGuire and Hutchings, 2006, p.198). Drawing on models of organisational change, Cummings and Worley (1993) further recognise that any change can be implemented successfully only by strong leadership that can garner commitment and readiness to change among employees through a shared vision and strategies for achieving the proposed change and outcome. The way managers or leaders establish the internal working structures and systems influences the performance of the organisation (Kinyua-Njuguna, Munyok and Kibera, 2014, p.285). This means the structures and systems should be favourable for employees to work collaboratively every day towards the shared goals of the organisation. Conversely, poor leadership and management would result in the failure of the enterprise to implement change processes, risking disastrous consequences for the organisation (Shiamwama, Ombayo and Mukolwe, 2014, p.148). Effective leaders help organisations to surpass internal obstacles and bring about change by envisioning the desired goals and objectives, energising the employees, and enabling the resources and conditions (Zhou, Tse and Li, 2006, p.253), all of which are paramount to overcoming any external inhibitors of change and improving performance.

For instance, Steve Jobs, the co-founder of Apple Computer, was eased out of the business because of poor management. He later went back into the business and was absorbed as a mere employee just to tap his original ideas (Cole, 2004, cited in Shiamwama, Ombayo and Mukolwe, 2014).

Organisation Structure

Organisational structure is another internal factor that acts as a driver of change. Change in organisational structure involves redefining and regulating organisational roles and relations: expanding or reducing spans of control, determining decision-making authority, selecting a decentralised or centralised management type, and regulating communication channels within the organisation (İkinci, 2014, p.123). Structure is the way jobs are allocated, coordinated and supervised through a system that facilitates communication and efficient work processes among employees in the organisation (Elsaid, Okasha and Abdelghaly, 2013, p.1). In fact, the successful execution and implementation of any plans and programmes depends on it. A flat structure, with a decentralised decision-making system and horizontal reporting among teams and managers, is preferred by employees (Ohlson, 2007). This fosters faster and more effective decisions and actions, thus enhancing the efficiency and productivity of the employees and the organisation as a whole. A tall hierarchical structure, characterised by long bureaucratic steps in execution and communication, is rather a hindrance to effective performance (p.23). Decentralised administrative structures and processes thus enable a firm to better meet new environmental conditions and effectively handle environmental turbulence (Damanpour and Evan, 1984).

Human resources

Human resources in an organisation consist of the knowledge, skills, competencies, attitudes and behaviours the workers possess (İkinci, 2014, p.123). Nurturing these aspects of human resources leads to personal growth and development, which can alter an individual's perceptions of organisational change and reduce the level of resistance (Bovey and Hede, 2001, p.546). Human resources are a critical asset that helps the organisation gain competitive advantage (Husso and Nybakk, n.d., p.9), because people have the capacity to operate all the activities and in turn help to achieve the aims and objectives (Mdletye, Coetzee and Ukpere, 2014), without which the organisation would not be able to function at all. Researchers have emphasised that human resources are the most important aspect, indeed the backbone, of every organisation and the main source of its effective functioning (Wanza and Nkuraru, 2016, p.192), as well as the main strategic resource for gaining sustainable competitive advantage in this age of globalisation (Kute & Upadhyay, 2014). For example, the management's emphasis on human resource management, such as employing highly skilled and educated people, providing professional training and encouraging learning from advanced technologies and skills, made employees more competent and Huawei's internationalisation process more successful (Yu and Zhang, 2010, p.23).

Organisational culture

Organisational culture is defined as the values, beliefs, norms, customs and behaviours that guide employees towards common goals (Awadh & Saad, 2013) and that set the rules for decision-making processes, structure and power (Wambugu, 2014, p.80). Wambugu (2014) further noted that organisational culture empowers employees to do things which are deemed right and rewarding at both the personal and organisational level. According to Wanza and Nkuraru (2016, p.195) and Awadh & Saad (2013, p.168), organisational culture has a strong bearing on the performance of employees, which is considered the backbone of the organisation's development. The culture established as a system in the organisation enhances employees' commitment and thus improves their input, eventually achieving the desired productivity and profitability (Wanza and Nkuraru, 2016, p.193). They concluded from their research that a strong organisational culture acts as a source of synergy and momentum for teamwork and uplifts employee performance (p.197). It is thus worth developing organisational culture for a sustainable future. For example, one of the internal factors that drove Huawei Technologies, once a very small local IT company in China, to very successful internationalisation was a corporate culture of teamwork, adaptation, learning and customer-oriented service embedded in the behaviours of Huawei's employees (Yu and Zhang, 2010, p.23).

Innovation culture

Innovation is the main strategy for adapting to change, overcoming organisational weaknesses, and adding value to an organisation's products and services in the ever-changing business environment (Sund, 2008, p.2). Being entrepreneurial, with creativity and innovation, helps an organisation gain competitive advantage (Ireland et al., 2003). Abdelgawad et al. (2013) proposed that entrepreneurial capability is instrumental in realising a firm's game-changing strategies for sustainable future success. For example, Google, Amazon and Apple were once just start-ups that grew to attract the global market through their innovation (EBRD, 2014, p.1). Internal organisational drivers such as resources, experimentation, collaboration and administrative support play a significant role during this innovation process (Agolla and Van Lill, 2013). So, establishing an innovative culture in an organisation will drive it towards favourable and successful change.

Attitude and Commitment

Much research has shown that employees need to develop their attitudes and behaviours for successful organisational performance (Bernerth, 2004). Therefore, it is indispensable for organisational managers to develop and nurture employees' commitment to embracing change by bringing about positive change in their attitudes and behaviour. Moreover, Anderson and Anderson (n.d.) stressed that employees' mindset, which is the root cause of one's feelings, decisions and actions, has to be changed to bring about organisational change. When introducing change, the people aspect is more critical than changes in systems and processes alone; it is about people believing in change and wanting it to happen (Soosay and Sloan, 2005, p.4). Since organisational change requires the participation of people, those involved must first undergo personal change for the organisational change to succeed (Evans, 1994).

Organisational Performance as a Driver

Both present and past performance are also drivers of organisational change. Some earlier researchers have pointed out that poor performance, which creates a gap between managerial aspirations and achievements, is an extra impetus for firms to improve (Greve, 1998; Tushman and Romanelli, 1985). On the other hand, some researchers argue that successful companies continuously draw motivation from their success to improve and perform better for a sustainable future, especially when they face uncertain environments (Feldman, 2004; Tsoukas and Chia, 2002, cited in Zhou, Tse and Li, 2006). The better a firm performs, the more likely it is to invest in new product development and technological advancement to achieve a sustainable competitive advantage (Zhou, Tse and Li, 2006, p.251). As Brown and Eisenhardt (1997) observe, many successful firms, such as Intel, 3M, Hewlett-Packard, and Gillette, have undertaken constant, rapid changes, particularly in their new product development. Similarly, companies like Apple, Microsoft and Samsung have undergone continuous rapid change in the development of new products.

Conclusion

The main purpose of this essay was to demonstrate the advantages of responding to internal drivers rather than external drivers when introducing change in an organisation. This study found that internal drivers lie within the organisation and have a direct impact on its everyday performance; therefore, they are within the control and management capacity of the organisation. If the internal performance, systems, culture and resources of an organisation are excellent, it is likely that obstacles posed by the external environment can be nullified, leading to very successful organisational change. External drivers, by contrast, exist in the external environment of the firm and are beyond its control and reach; yet they can affect the internal functions of the organisation and cause instability. Hence, external drivers are not to be disregarded; rather, internal drivers must be activated to meet change in line with external drivers.


The Classical World

The Classical Era, which flourished from the 8th century BC to the 5th century AD, saw the birth and spread of Greco-Roman ideas. These ideas became the basis for western civilization and laid a foundation of culture that has remained as relevant now as it was in ancient times. Ancient Greece, and later Ancient Rome, cemented their own ideals in the universal consciousness as the cultural standard to which all later societies were held, and continue to shape contemporary perspectives on art, architecture, government, and other facets of modern society. Despite the core differences between modern and classical times and the centuries that have passed since, the knowledge and perspectives passed down by the Ancient Greeks and Romans remain an essential part of contemporary society and culture, while inspiring western civilization’s greatest accomplishments.

The cultural impact of Ancient Greece and Rome begins most tangibly with the Renaissance, a movement beginning in Florence and spanning through the 14th and 17th centuries. This period is seen as a revival of classical antiquity, with Renaissance scholars, artists, philosophers, and writers attempting to emulate what they considered to be a “golden age,” taking inspiration directly from their Greco-Roman forefathers, with their presence increasingly regarded as an intellectual heritage to be mined for contemporary use. The Florentine author Niccolò Machiavelli, for example, described his nightly retreats into his library in these memorable words:

“At the door I take off my muddy everyday clothes. I dress myself as though I were about to appear before a royal court as a Florentine envoy. Then decently attired I enter the antique courts of the great men of antiquity. They receive me with friendship; from them I derive the nourishment which alone is mine and for which I was born. Without false shame I talk with them and ask them the causes of the actions; and their humanity is so great they answer me. For four long and happy hours I lose myself in them. I forget all my troubles; I am not afraid of poverty or death. I transform myself entirely in their likeness.”

Francesco Petrarca, commonly anglicized as Petrarch, was a scholar who rediscovered the letters of Cicero, a Roman statesman, lawyer, and philosopher, and one of Rome’s greatest orators and prose stylists. This rediscovery is considered to have initiated the Renaissance, as scholars became interested in learning how the ancients developed their human faculties, powers, and culture, and in turn attempted to apply their findings to their contemporary societies. Through this discovery, Petrarch became the “Father of Renaissance humanism,” humanism being a Renaissance cultural movement that turned away from medieval scholasticism and revived interest in ancient Greek and Roman thought. Petrarch firmly believed that classical writings were not just relevant to his own age but saw in them moral guidance that could reform humanity, a key principle of Renaissance humanism. The humanists of the Renaissance believed that their mission was to revive the high Roman style of writing pure and eloquent Latin. When that flourished, they believed, art would as well.

The republican elites of Florence and Venice and the ruling families of Milan, Ferrara, and Urbino hired humanists to teach their children classical morality and to write elegant, classical letters, histories, and propaganda. Eventually, the humanism inspired by the study of the Greco-Roman world would bleed into the Catholic Church, a formidable and almost omnipotent institution of the Middle Ages. In the course of the fifteenth century, the humanists convinced most of the popes that the papacy needed their skills. Sophisticated classical scholars were hired to write official correspondence, Latin letters, histories, and propaganda that created an image of the popes as powerful, enlightened, modern rulers of the Church, and to apply their scholarly tools to the church’s needs; they even tinkered with the church’s traditional liturgy, trying to make prayers and hymns attractively classical. Though humanism, and therefore classical thinking, never fully permeated the Catholic Church, Ancient Greece and Rome clearly left their mark on the Church and its leaders.

An easier and far more blatant appreciation of classical antiquity was seen clearly in the art and architecture of the Renaissance. Contrapposto, a sculptural scheme which was revived during the Renaissance, was originated by the Ancient Greeks. It is used when the standing human figure is poised in such a way that the weight rests on one leg (called the engaged leg), freeing the other leg, which is bent at the knee. With the weight shift, the hips, shoulders, and head tilt, suggesting relaxation with the subtle internal organic movement that denotes life. The Greeks invented this formula in the early 5th century BC as an alternative to the stiffly static pose—in which the weight is distributed equally on both legs—that had dominated Greek figure sculpture in earlier periods. Italian Renaissance artists such as Donatello and Andrea del Verrocchio revived the classical formula, giving it the name contrapposto, which suggests the action and reaction of the various parts of the figure, and enriching the conception by scientific anatomical study.

Donatello borrowed from the ancients with his bronze sculpture of David, the biblical hero known for defeating Goliath. Donatello’s David was the first freestanding bronze cast statue of the Renaissance era as well as the first nude sculpture of a male since the classical sculptures of ancient Greece. In the Middle Ages, nudity was not used in art except in certain moral contexts, such as the depiction of Adam and Eve, or the sending of souls off to hell. In the classical world, nudity was often used in a different, majestic context, such as with figures who were gods, heroes, or athletes. Here, Donatello seems to be calling to mind the heroic nudity of antiquity, since David is depicted at a triumphal point in the biblical narrative of his victory over Goliath. In any case, Donatello’s David is a classic work of Renaissance sculpture, given its Judaeo-Christian subject matter modeled on a classical sculptural type.

Another artwork inspired heavily by ancient antiquity is Botticelli’s painting titled Birth of Venus. The theme of the Birth of Venus was taken from the writings of the ancient poet Homer. According to the traditional account, after Venus was born, she rode on a seashell and sea foam to the island of Cythera. In the painting, Venus is prominently depicted in the center, born out of the foam as she rides to shore. On the left, the figure of Zephyrus carries the nymph Chloris (alternatively identified as “Aura”) as he blows the wind to guide Venus. On shore, a figure who has been identified as Pomona, or as the goddess of Spring, waits for Venus with mantle in hand. The mantle billows in the wind from Zephyrus’ mouth. The story of the Birth of Venus is well described below by a Homeric hymn, but its relevance to the painting is disputed, as the poem was only published in Florence in 1488 by the Greek refugee Demetrios Chalcondyles (five years after the painting was completed as a wedding gift for Lorenzo di Pierfrancesco de’ Medici in 1483).

Of august gold-wreathed and beautiful

Aphrodite I shall sing to whose domain

belong the battlements of all sea-loved

Cyprus where, blown by the moist breath

of  Zephyros, she was carried over the waves

of the resounding sea on soft foam.

The gold-filleted Horae happily welcomed her

and clothed her with heavenly raiment.

The model for Venus in this painting has traditionally been associated with Simonetta Vespucci, who had been a muse for Botticelli and was seen as the model for female beauty throughout Florence, especially for the Medici family, for whom this painting had been created. There is added credence to this suggestion from the fact that she was born in the Ligurian fishing village of Portovenere, called the Port of Venus because there was a little temple to Venus there from the 1st century BC.

The other model for the pose of Venus in the painting was possibly the Medici Venus, a first century BC statue depicting Aphrodite in a Venus pudica pose. It is actually a marble copy of an original bronze Greek sculpture that Botticelli would have had an opportunity to study whilst visiting the sculpture school or the Platonic Academy which flourished at the family home of the Medici in Florence.

The demand for this type of scene was, of course, driven by humanism, which was alive and well in the court of Lorenzo de’ Medici in the 1480s. Here, Renaissance humanism was open not only to the use of a pagan sculpture as a model, but also to a pagan narrative for the subject matter, and although the Birth of Venus is not a work which employed Renaissance perspectival innovations, the elegance of the classical subject matter was something that would have intrigued the wealthy Florentines who patronized this type of work.

The discovery of particular texts had enormous implications for Renaissance architecture. For example, with the discovery of the works of Vitruvius, an architect at the time of Augustus, there was an explosion of interest in ancient building. Vitruvius wrote an extremely important volume, De architectura libri decem (Ten Books on Architecture), in which he introduced three principles of architecture: firmitas (durability), utilitas (utility), and venustas (beauty). Vitruvius talked about ancient buildings in a very significant way, not only in terms of practicality, but in an abstract way which emphasized what the buildings represented in both art and society. Just as ancient texts could be applied to the values and aesthetics of contemporary Italians in the 15th century, so could ancient buildings be reduced to an essence, a set of principles and ideals, that could be applied to the needs of 15th-century Italians, despite their differences from 1st-century Romans.

In particular, we can see in the career of Leon Battista Alberti, who was born in 1404 and died in 1472, how these ideas could be distilled into a set of principles that could apply to the conditions of the Italian world. Alberti wrote De re aedificatoria, or On Building. His work can be considered highly derivative, but Alberti’s purpose was quite different: to take an ancient text and apply it to the needs of his own time. Not only did he write a theoretical treatise on architecture, but he then went out and built buildings. In particular, in Florence, he designed the facade of the Palazzo Rucellai from 1452 to 1470, in which, again, the Vitruvian orders appear and in which the ideas of ancient building are made useful to a Florentine palace for a wealthy merchant.

In the more modern world, there is a wealth of Greco-Roman influence over the inception of the United States of America and its government. For example, the men who inspired the American Revolution and wrote the American Constitution were heavily influenced by the classical Greek and Roman world. The American founding fathers were well educated individuals, and they all had significant experience with ancient Greek and Roman authors since childhood. Historian Bernard Bailyn states, “knowledge of classical authors was universal among colonists with any degree of education.” Thomas Jefferson, writer of the Declaration of Independence, was taught Greek and Latin from the age of nine, and Benjamin Franklin received instruction in Latin at grammar school and became proficient in both Latin and Greek later in life. In Franklin’s Autobiography, frequent references are made to classical western figures, such as Cicero and Cato. James Madison learned Greek and Latin as a child, and “immersed himself in the histories of Greece and Rome.”

With classical schooling such an integral part of the founding fathers’ education, America’s first political leaders studied the works of the great Greek Philosophers, including Plato and Aristotle. Polybius, a less celebrated but still influential thinker, also left his mark upon the American framers of the Constitution. Through Polybius, the founding fathers were introduced to the Roman Republic as the “mixed government” described by Plato and Aristotle. They used Greek philosophy and the model of Roman Republican government in order to form a new nation based on ancient principles.

Philosophers from classical Greece proposed the separation of powers in government, an idea that the American founders adopted for their new nation. In addition, The Roman Republic  (509-27 BC) served as a direct model of government for the writers of the constitution.  Greek and Roman political thought was critical in shaping the government of the United States of America.

Plato writes that a strong state should contain elements of both democracy and tyranny, so that the state has a mixed government. His political philosophy, particularly his idea of a “mixed” constitution, would have far-reaching effects among later philosophers. His mixed government would ultimately be brought to life in the American Constitution.

Aristotle believed that a mixed government, like the one described by Plato, would halt the decline of government into anarchy. In Aristotle’s mixed constitution, defined in his work The Politics, there were to be three branches of government: “All constitutions have three elements, concerning which the good lawgiver has to regard what is expedient for each constitution…There is one element which deliberates about public affairs [“legislative” branch]; secondly, that concerned with the magistrates [“executive” branch]…and thirdly that which has judicial power.”

This three-tiered mixed government of Aristotle would ultimately find its way into the Constitution. Aristotle also established the principle that the rulers of a state should be subject to the same laws as the rest of the populace; to Aristotle, the rule of law is better than the authority of “even the best man.” This concept of a “ruling official subject to the law” is an integral idea to modern government, where all political figures are supposed to be subject to the same legal code as the average citizen.

In addition to the foundation of government inspired by the ancient world, the influence of classical antiquity can be seen in some of America’s most iconic architecture. Prevalent between about 1780 and 1830, Federal style drew inspiration from the Greco-Romans. The influence of Ancient Greek architecture is apparent in the use of columns and colonnades. Thomas Jefferson was an architect during the Federal period, and he designed not only his own home, Monticello, but the campus of the University of Virginia in Charlottesville in this style.

Greek Revival architecture also became widespread in the U.S., and in the middle of the 19th century it became known as the national style, as it was used extensively in houses and smaller public buildings of that time. This style generally featured the Doric Order in larger buildings, and simpler Doric columns topped with a small pediment (without a frieze) in houses. The first major public building built in this style was the Second Bank of the United States, built in Philadelphia between 1819 and 1824, though most famous is the Lincoln Memorial, its exterior echoing that of the Parthenon.

The heritage of the classical world has been one which later societies have taken and made relevant to their own contemporary aesthetics, visions, and ambitions. From the Renaissance to the formation of the United States, Greco-Roman ideals have paved the way for and inspired art, architecture, and civic duty, all the while remaining the standard which culture strains to meet. Despite its antiquity, the classical world has remained relevant, adaptable, and innovative, inspiring some of western civilization’s greatest feats.


Power dynamics in psychotherapy – reflective literature review

Choice of topic

On receiving the assessment paperwork for my client, I felt overwhelmed and challenged by her status, and by the fact that she had previously worked with my placement director. My first reaction was that I would not be good enough for her as a trainee.

When discussing my responses with my supervisor, she helped me to identify where this had come from, and to see that the skills and knowledge I had would be beneficial to this client.

Building a working alliance (Finlay, 2016, p.15) with this client, whom I will refer to as Kirsty (not her real name), was a slow process, and I became very aware of my own counter-transferential feelings. There were areas of her narrative with which I felt really in contact.

Conducting the search

An online search of Google Scholar using terms such as ‘The Dance of Power’ returned 51,200,000 results. Further searches were conducted which brought back similar figures.

I then altered the search criteria to ‘The Dance of the Counter-transferential Phenomena’, which brought back 34 items; this search was done via the Wiley Online Library. This appeared more manageable, and a further search of the same library with a different term, ‘Undoing Trauma’, brought back just one result. This still was not what I was looking for, so I chose to remain with the search criterion of power within the therapeutic relationship.

Within the literature, Webster’s dictionary defines power as ‘the ability to act’, ‘the capacity to produce an effect’, and ‘the possession of control, authority or influence over others’.

Proctor (2017) defines power as related to how society is formed: groups of people who differ from the ‘norm’ have less access to power. These groups could be women, disabled people, Black and minority ethnic (BME) or working-class people, gay men or lesbians, male or female, young or old.

She suggests that these groups could be oppressed members of society who may have experienced violence or intimidation and who have little experience of power within the relationship.

The history of power within the therapeutic relationship dates back to Machiavelli in the 16th century and Hobbes in the 17th century, as cited by Proctor (2017). These two theorists had different views of power, and it was not until the twentieth century that Hobbes’s view, a modernist theory, was favoured. According to Clegg (1989), Hobbes’s theory of power influenced the basis of thinking about power from a modernist and structural viewpoint.

The Modernist View

Modernism was a new form of expression that developed in the nineteenth and twentieth centuries, the era in which counselling and psychotherapy developed (McLeod, 2009, p.37).

Structural Theories

These theories lie within the context of modernism and take a single point of view: power is concrete and belongs to a person. It is assumed that power is an experience that can be found in the form of economic, social, physical, or psychological capacity. For instance, Day (2010) cites Robert Dahl (1957): “A has the power over B to the extent that he can get B to do something that B would not otherwise do”.

These theories have emphasised the controlling, oppressive and negative angle of power, and have been critiqued because they assume that power is always ‘power over’ another.

Lukes (1974) argued that power is the ability of one person to get another to do something that (s)he might not otherwise do. He argues that this power is a result of conflict between actors to determine who wins and who loses.

However, Arendt (1963) saw power as related to people joining together and making unbreakable promises. She observed a difference between ‘power’ within relationships and ‘authority’ given to an individual because of their role. Hindess (1996) suggests that this moves power towards a relational process relying on the consent of others.

Post-Modern Theories

Elias (1978) suggests that power is not something a person owns but a trait of human relationships, a view supported by Lukes (1974). Elias further suggests that power relations are formed in relationship and result from living together and interdependence. The phenomenon is like a game of tug of war: a trial of strength between two sides pulling against each other (Oxford English Reference Dictionary, 1996, p. 1548).

Foucault

Foucault followed the concepts of Nietzsche in holding that knowledge and thought, theories and discourses are penetrated by values (Daudi, 1986, as cited in Proctor, 2017). This approach formed the basis of Foucault’s work. He sees power as acting not on others directly but on their actions; thus, it is an action upon an action (Day, 2010).

This view suggests that power is inherent in all relationships, so it both enables and limits actions, thereby helping individuals to broaden their boundaries (Hayward, 1998).

From this perspective, “Power is everywhere…because it comes from everywhere” (Foucault, 1980). Power is involved in all social interactions, because ideas operate behind all language and action (Lukes, 1974).

Foucault focused on how power was used in society, in areas such as sexuality (1976), madness (1967) and criminality (1977). He looked at the aims of those involved, the tactics they used to achieve those aims, and the counter-actions of others pursuing the same objective. In his deconstruction of the power within these institutions, he defines ‘disciplinary power’ as “comprising a whole set of instruments, techniques, procedures, levels of application, targets” (Foucault, 1977, p. 215). He emphasised the ‘struggle’ that occurs between individuals and groups in society as discord arises in response to the behaviours of others. Day (2010) suggests power operates systematically within a society, not from above.

Perspectives of Power in the Psychotherapy Relationship

Whilst searching the literature, I struggled to find any published research. Where references have been uncovered, these have been philosophical or theoretical perspectives on the subject, or individual accounts of personal and professional experience (Sanders, 2017; Totton, 2009; Amitay, 2017; Lazarus, 2015).

Positions of Power

From the literature, there appear to be four philosophical positions:

Power as a destructive and oppressive force in the psychotherapy relationship;
The psychotherapy relationship as a process of liberation and empowerment of the client;
Power as a relational, inter-subjective process in the psychotherapy relationship; and
The denial of the existence of power in the psychotherapy relationship.

At the end of the 1980s, thinking centred on how imbalances between the therapist and client can result in oppressive and destructive outcomes for clients. The debates that followed concentrated on the abuse of sexual boundaries and forms of discrimination and prejudice against minority groups (Bates, 2006; Lago, 2006; Masson, 1989; Smail, 1995). The way the psychotherapeutic relationship exists between client and therapist means that there is potential for abuse in the dialogue between them (Spinelli, 1994). This reflects a structural position on power (Day, 2010; Proctor, 2017). The therapist in these circumstances has ‘power over’ the client, which renders the client ‘powerless’ and vulnerable.

Within the literature, Masson (1989) describes power in the therapy room as having destructive elements, such that therapy itself could be a form of abuse. Another form of destructive power could be the therapist abusing the client by disrespecting sexual boundaries (Chesler, 1972; Sonne and Pope, 1991; Gabbard, 1996).

It is suggested that these destructive ways can operate at an unconscious level, thus leaving the client vulnerable to past, negative experiences. Herman (1992) believes that it is important for the therapist to avoid using their ‘power over’ (Proctor, 2017, p. 13) the client for their own needs or to direct the client’s life decisions (Day, 2010).

The British Association for Counselling and Psychotherapy (BACP) states that, under its Ethical Framework, the counsellor has a commitment to avoid harm towards the client (2015).

It is assumed from this point of view that power is dangerous and destructive to those who are powerless. Often power is viewed from an ethical or moral basis, looking at what is right or wrong; in simple terms, power is either ‘good’ or ‘bad’ (Furggeri, 1992). This view assumes that power is a possession in limited supply, which forms a structural perspective of power. The client is seen as powerless and the therapist as powerful. It could be argued that this is an extreme form of domination and repression. Thus, power is viewed as monolithic, unitary and unidirectional (Proctor, 2002).

Psychotherapy as Empowerment for the Client.

An alternative perspective sees power as positive, with the therapist’s power being good: psychotherapy is an empowering process for the client and thus enables the client’s autonomy. This line of argument is seen in humanistic and feminist literature (Brown, 1994). The British Psychological Society’s (BPS, 2009) Division of Counselling Psychology states explicitly that it works to ‘empower the client’.

Carl Rogers was one of the first proponents of this view. Rogers suggested that the therapist’s role was to avoid power over the client and to refrain from making decisions for them. Rogers supported the client’s autonomy and how they achieved it, so decisions are made by the client for themselves (Rogers, 1978).

Bozarth (1998) argues that the crux of this theory is that the therapist does not intervene. Natiello (1990) states that the approach “…offers a morality of power as well as a methodology for arriving at that morality” (p. 268). She maintains that the person-centred approach offers the client the opportunity to claim his or her own personal power rather than being reliant on the power of others.

Similarly, Freud’s theories of psychoanalysis argue for the analyst to use their power of rational authority to free the mind of the client.

Fromm (1956) argues that over the duration of therapy, the client frees and cures themselves from an attachment to irrational authority. Benjamin (1995), challenging Freud’s position, states: “Already idealised for his knowledge and power – his power to know her – the analyst is now internalised in the relationship of knowledge as power over self, a practice in the domination of self whose meaning Foucault (1980) has made unforgettably problematic” (p. 154).

Frosh (1987) states that object relations, like psychoanalysis, sets itself up in the freeing of a person’s psyche. He argues that its objectives are to free the client from fixations created by ‘bad’ experiences and to promote internalisation of the more nurturing possibilities experienced in the relationship with the therapist.

This assumes the client is powerless and vulnerable and that the therapist has the power to empower the client; clients are therefore viewed as powerless. This could be seen as a structural position, where power is either ‘good’ or ‘bad’ and one either has it or not. A moral argument could be made that one form of power is ‘right’ and others are ‘wrong’.

A Relationship of Mutuality

The psychotherapeutic relationship is viewed as one of mutuality. Aron (1996) views this as involving mutual generation of data, mutual regulation of the relationship, mutual recognition of the other’s autonomy, and openness on the part of the therapist as to their client’s impact upon them. Aron argues that power is a dynamic that is constantly struggled with in therapy and therefore needs ‘to be continually examined, articulated and worked through’ (p. 151). He suggests that therapists need to question their decisions with regard to ethics, as well as questioning their authority and domination in the relationship (as cited in Proctor, 2002, p. 133).

Frosh (1987) believes that the objective of therapy is to allow the client to explore the power in therapy, as it mirrors and reminds the client of introjects internalised in their formative years. He suggests an approach that is politicised and recognises the reality of social structures. He argues that part of the difficulty of change is that people need to identify, re-experience and re-frame these introjects to give them a new meaning in their life. Totton (2000) argues that it is the therapist’s role to help the client find another genuine and authentic psycho-political position. The relational position therefore sees power dynamics as central to the therapeutic relationship. It is suggested that power is aligned to knowledge and that neither the client nor the therapist can ‘know’. Power is thus thought to be present in all relationships rather than being a possession of the client or therapist; it is therefore unavoidable and potentially both positive and negative (Proctor, 2002). It could be argued that this view might undermine the role of structural differences in power in society, reducing it to an intersubjective process.

Concluding thoughts of the literature

Relational perspectives in psychotherapy have started to think about ‘power’ as dynamic and inevitable (Proctor, 2017). However, despite this recognition, the discourse on power dynamics in psychotherapy has remained at a philosophical level. Much of the literature can be seen as a critique of other psychological therapies, or it attempts to show how therapists can misuse the power differential with their clients. The question to be explored and researched further is how psychotherapists experience the phenomenon of power with their clients and how it can be worked with in a clinical setting.


World War 2 – The Bloodiest War

After the bloody disaster of World War 1 finally ended, you would have thought that the world did not look forward to another world war. The war had left Germany in ruins and impacted many countries, including the U.S. After World War 1, Germany was forced to sign the Treaty of Versailles, and Germany did not agree with many of the laws put forth by this treaty. For example, Germany had to accept the blame for all damage in the war, which cost about $442 billion as of 2014. Also, many new countries were formed, including Poland and Finland. The main problem with this treaty was that it destroyed the German economy, and Adolf Hitler spoke out against it. After World War 1 many countries had a bad economy; this was the period of the Great Depression. Some of these countries were controlled by fascist governments, ruled by dictators who wanted to expand their nations. Germany wanted a leader who would stop its economic slump, and by 1934, Hitler was dictator of Germany. Hitler allied with a couple of fascist-controlled countries and rearmed Germany’s army. Also, Japan was trying to expand its nation: it invaded Manchuria in 1931 and China in 1937. In 1938, Hitler took over Austria. Since Europe was tired of World War 1, the League of Nations did nothing to stop Hitler. Hitler became bolder and invaded Czechoslovakia. The main countries in the Allied Powers were Britain, France, Russia, and the U.S.; the U.S. and Russia joined later in the war. The main Axis Powers were Germany, Italy, and Japan.

Battle of Britain

On July 10, 1940, Germany started bombing the Allied country of Great Britain. The goal was to destroy the British Air Force and prepare for an invasion. The bombing continued for several months. On September 15, 1940, Germany heavily bombed London. However, the British RAF (Royal Air Force) shot down many German planes, and Germany realized it couldn’t defeat the RAF. The British won the battle with fewer planes: they had radar to tell them when the German Luftwaffe (German Air Force) would be attacking, and they were fighting over their homeland. The RAF commander was Sir Hugh Dowding. The Luftwaffe commander was Hermann Goering.

Did You Know?

1. The main planes that the British used were the Hurricane Mk and Spitfire Mk.
2. The main planes that the Germans used were the Messerschmitt Bf109 and the Bf110.

Battle of the Atlantic

In World War 2, the Allies were trying to supply Great Britain across the Atlantic Ocean, and the Axis Powers tried to prevent that. The Battle of the Atlantic was mainly fought in the northern part of the Atlantic Ocean; after America became involved in the war, the battle spread to the coast of the U.S. Starting on September 3, 1939, the battle raged on for about 68 months, finally ending on May 8, 1945. Germany used U-boats to sink Allied ships with torpedoes. The Germans quickly increased the number of U-boats and sank many ships. In order to save their ships, Allied vessels travelled in large groups called convoys, which could destroy U-boats. However, as more U-boats were manufactured, it became harder for the convoys to get past them. In 1943, the Allies developed radar to detect U-boats and broke their codes. They also developed Hedgehogs to attack the U-boats. Soon, America could supply the Allies freely, and the Allies won this battle. Over 30,000 sailors died on each side. The Allies lost 3,500 supply ships and 175 warships; Germany lost 783 U-boats.

Pearl Harbor

On December 7, 1941, Japan bombed Pearl Harbor, Hawaii. Japan thought that America would interfere if it took over Asia, so in order to destroy the navy, Japan attacked the U.S. naval base at Pearl Harbor. U.S. ships, aircraft carriers, and airplane bases were the main targets. The attack came as a surprise and arrived in 2 waves. By the end of the attack, there were many casualties. Fortunately, there were no aircraft carriers in Pearl Harbor at the time of the attack. 2,390 Americans were killed and 1,178 were wounded. 12 U.S. ships were sunk or beached and 9 were damaged. 164 U.S. airplanes were destroyed and 159 damaged. 64 Japanese were killed. 5 Japanese ships were sunk or beached. 29 Japanese airplanes were destroyed and 74 were damaged. On December 8, 1941, President Franklin D. Roosevelt called December 7 “a date which will live in infamy” in a speech asking Congress to declare war on Japan. Congress did, and America became a part of World War 2.

Did You Know?

All U.S. ships were able to recover except the USS Arizona, Utah, and Oklahoma.

“Yesterday, Dec. 7, 1941 – a date which will live in infamy – the United States of America was suddenly and deliberately attacked by naval and air forces of the Empire of Japan.”

-This is the beginning of President Roosevelt’s speech.-

Battle of Stalingrad

Before this battle, the Soviet Union had a non-aggression pact with Germany. However, Adolf Hitler decided to attack Stalingrad, Russia. The German Air Force started bombing the city, and soon German soldiers entered it. Although the Germans took over most of the city, Soviet soldiers continued fighting. In November 1942, Soviet soldiers surrounded the Germans inside Stalingrad. The German soldiers had very little ammunition and were trapped. General Paulus of Germany surrendered on February 2, 1943.

D-Day (Invasion of Normandy)

Germany occupied France, and the Allies decided to liberate it. Germany knew that the Allies would attack but didn’t know where. Using deception, the Allies made it seem that they were going to attack at Pas de Calais; instead they attacked at the Normandy beaches. Also, the weather wasn’t favorable on the day they were supposed to attack, so the Germans were less prepared. At night, paratroopers were dropped to destroy targets, and dummies were dropped as well to confuse the Germans. There were 5 landing sites for the attack, nicknamed Gold, Juno, Utah, Omaha, and Sword. To start the attack, airplanes bombed the Normandy beaches; soon after, warships bombed them too. The troops then arrived at the landing sites. Americans were able to easily take Utah beach, but the Germans had set up defenses on Omaha beach. Although it seemed that the Germans would win, the Americans displayed courage and eventually won that beach as well. With determination, the Allies were able to take all 5 landing sites on the Normandy beaches.

Battle of the Bulge

Hitler tried to make a final push to win the war. He made a surprise attack on Allied soldiers in the Ardennes forest of Belgium. The Allies, mainly Americans, were surprised, but they held their ground. Many small groups tried to hold the Germans back for as long as possible. One famous group was led by General Anthony McAuliffe, who replied “Nuts!” to the German soldiers asking him to surrender. The Allies won the battle, hastening the downfall of Adolf Hitler.

Battle of Berlin

This battle was mostly fought between the Soviet Union and Germany. Germany’s army was crippled due to the Battle of Stalingrad. The Soviet Union marched into the city of Berlin on April 16. Laying siege to the city, the Soviets dropped bombs on it and eventually surrounded the Germans. When Hitler saw that Germany was losing, he committed suicide. On May 2, 1945, the German soldiers in Berlin surrendered to the Allies, and on May 7, 1945, Germany surrendered to the Allies. The war in Europe was over.

Battle of Midway

While the Allies were fighting the war in Europe, America was also fighting in the Pacific. The U.S. had a base at Midway, an island in the Pacific Ocean. Japan decided to attack the base; its main target was the aircraft carriers. Although Japan wanted the mission to be a secret, the Americans were able to figure out that they were attacking. On June 4, 1942, Japan attacked. While Japan’s aircraft took to the air, America’s aircraft carriers surrounded the Japanese fleet. Japan was focused on attacking Midway, but American planes dropped torpedoes at the ships. The Japanese ships set their guns at a low level and hit a few planes; many torpedo attack planes were shot down, and the torpedoes didn’t hit many ships. Then bombs came falling through the air as American dive bombers destroyed many ships. The strategy worked: 3 out of 4 Japanese aircraft carriers sank. The U.S. aircraft carrier Yorktown launched many bombers against the last Japanese aircraft carrier, Hiryu. However, Hiryu was able to launch a number of bombers of its own, and both Yorktown and Hiryu sank. The Japanese casualties were high: they lost 4 aircraft carriers, many other ships, 248 airplanes, and about 3,000 soldiers. The battle ended on June 7, 1942. This was a key victory in the downfall of Japan.

Battle of Guadalcanal

In order to defeat the Japanese, America had to go on the offensive. On August 7, 1942, the U.S. army attacked Florida and Tulagi, 2 islands to the north of Guadalcanal. Soon after, the island of Guadalcanal was invaded. The Japanese were surprised, and U.S. forces were able to take the base quickly. Japan responded quickly, defeating several American ships and surrounding the U.S. army. Very soon, Japan landed more troops on Guadalcanal to win back the island. American planes would destroy Japanese ships coming with supplies and troops, but the ships began arriving at night. In November, Japan delivered about 10,000 troops to Guadalcanal. The American soldiers were able to defend the island; Japan continued sending in soldiers, but the Americans were able to vanquish them. Japan’s casualties were about 31,000 soldiers and 38 ships. America’s casualties were about 7,100 soldiers and 29 ships.

Battle of Iwo Jima

The U.S. decided that in order to attack Japan, they needed an island close to Japan from which to launch planes, and Iwo Jima would be a perfect place for an air base. On February 19, 1945, 30,000 U.S. soldiers landed at Iwo Jima. They thought that they would capture the island in a week due to the bombings. However, Japan had devised a plan: digging tunnels and hiding in unsuspected places, the Japanese army attacked the U.S. soldiers. The U.S. army was completely baffled, and many soldiers were killed. In another tactic, when U.S. soldiers attacked Japanese soldiers in a bunker and moved on, they would realize that more Japanese soldiers had entered the bunker and were attacking from behind. The battle raged on for days. After fierce fighting, the island was captured. About 18,000 Japanese soldiers died, and only 216 were captured. 6,800 American soldiers were killed.

Atomic Bomb in Japan

The U.S. was planning an invasion of Japan, but soon realized that 500,000 or more U.S. soldiers would die. Therefore, President Harry Truman decided to drop the newly developed atomic bomb, which created a massive explosion that would cause many deaths. On August 6, 1945, the first atomic bomb, named Little Boy, was dropped on Hiroshima, Japan; about 135,000 people were killed. Japan didn’t surrender. On August 9, 1945, another atomic bomb, named Fat Man, was dropped on Nagasaki, Japan; about 70,000 people were killed. Many civilians were killed by these 2 bombs. 6 days later, Japan’s Emperor Hirohito surrendered to the U.S. The war in the Pacific ended.

Germany had surrendered, and so had Japan. World War 2 was finally over. Although World War 1 had been nicknamed “The War to End All Wars”, World War 2 still left conflicts behind. What would happen to Germany? With these conflicts in mind, the world moved on in history.

Niccolo Machiavelli – The Prince (De principatibus) and Ivan IV

Niccolo Machiavelli was born in Florence on May 3rd, 1469. Machiavelli was alive during the time of the Renaissance, the declared rebirth of learning, literature, art and culture – unfortunately, it was also a time of political instability for Italy. In spite of this, Machiavelli agreed to work for the Signoria, Florence’s governing body at the time, and became the Second Chancellor of the Republic of Florence. Later, Machiavelli acquired a second job as Secretary to the Council of Ten for War. Although Machiavelli’s jobs involved domestic affairs, most of his time was spent as an international diplomat, analyzing the true intentions and capacities of the various countries involved in the Italian Wars. Several years later, in 1512, Piero Soderini, the First Minister of Florence, was overthrown by the Medici family. In 1513, after being fired, jailed, tortured, and released, Machiavelli wrote and published De principatibus – or, The Prince. Machiavelli wrote The Prince as an advice book for Lorenzo de’ Medici, the ruler of Florence at the time, in the hopes that Lorenzo would offer him his job back (The Prince). Unfortunately, The Prince failed to win the favour of the Medici family, and it isolated him from the Florentine people. The Florentine Republic was reestablished fourteen years later, in 1527, and its leaders deemed Machiavelli to be quite suspicious. Machiavelli died soon after, on June 21 (history.com). In the years that followed, many rulers admired The Prince and followed its content absolutely, while others completely ignored Machiavelli’s most famous publication and ruled on their own terms. An example of a particularly Machiavellian ruler was Ivan IV Vasilyevich. Ivan IV closely followed several aspects of Machiavelli’s advice in The Prince, especially the sections that relate to the military and war, methods of dealing with the nobility, and techniques of ruling.

Ivan IV followed Machiavelli’s The Prince very closely when it came to Russia’s military and wars. Just a couple of years after becoming Russia’s first tsar, Ivan IV made drastic changes to Russia’s military and its policies. He transformed and expanded the military while changing the chain of command, all based on Machiavelli’s writings. According to britannica.com, “The conditions of military service were improved…and the system of command altered so that commanders were appointed on merit rather than simply by virtue of their noble birth” (Andreyev). The first purpose of these reforms was to render those serving in the military completely dependent on Ivan IV and the sovereignty. Ivan IV achieved this by compensating the service gentry with landed estates – income-generating properties that require no work from the owner. This kept the soldiers forever indebted to Ivan IV, because they only made money as long as they had the property, leaving them unable to do anything but serve Ivan IV. Additionally, these estates were not very profitable, and they often left the owners in serious financial debt. The tsar allowed loans through gentry banks, leaving the soldiers again indebted to the sovereignty. The second purpose of these reforms was to limit the power of the hereditary aristocracy. By designating commanders based on their worthiness rather than their noble birth, Ivan IV took power away from the nobles and dramatically altered the distribution of power among Russia’s controlling classes and their influence over the military. (Ivan IV’s methods of dealing with the nobles will be expanded on in the next paragraph.) Ivan IV’s edits to Russia’s military didn’t end with the refinement of the stipulations of service and the changes to how commanders were nominated. dailyhistory.org states that Ivan IV also “introduce[d] western military technology into Russia and this was one of his greatest innovations” (Whedland).
Ivan IV’s adoption of western technology allowed him to expand and strengthen his military. In turn, that technology allowed Ivan IV to advance in the several wars he engaged in throughout his time as the tsar of Russia. All of Ivan IV’s tactics regarding the transformation and expansion of his military relate back to a quote from The Prince: “…never remain idle in peaceful times, but industriously make good use of them, so that when fortune changes…[one will be] prepared to resist…blows and prevail in adversity” (Machiavelli 47). Machiavelli believed that rulers would inevitably be challenged during their lifetime, and that to combat this, military strength and size should be increased during times of nonviolence; otherwise, rulers would inevitably lose their power. Machiavelli’s writings and Ivan IV’s actions match up, as one of Ivan IV’s first actions as the tsar was editing the policies of his military while also increasing Russia’s capacity for war. Although Ivan IV’s attempted military conquests proved to be fruitless and largely unsuccessful, Ivan IV did have a generation of soldiers that were loyal to him and the crown. Furthermore, Ivan IV’s actions and Machiavelli’s advice concerning the single-minded focus on war also aligned. Nikola Andreyev expresses that “Russia was at war for the greater part of Ivan’s reign” (britannica.com). Ivan IV’s main focus during his time as the first tsar of Russia was conquering other countries. During Ivan IV’s early time as the ruler, he launched several attacks against Kazan, a Turkic state in Russia, that ultimately failed. A couple of years later, after extensive preparations, Ivan IV successfully defeated the town of Kazan, before also annexing Astrakhan four years later.

Machiavelli & Ivan IV

Born in Florence on May 3rd, 1469, Machiavelli was alive during the time of the Renaissance, the declared rebirth of learning, literature, art, and culture. Unfortunately, it was also a time of major political instability for Italy. In spite of this, Machiavelli agreed to work for Florence’s governing body at the time, the Signoria, and became the Second Chancellor of the Republic of Florence. Later, Machiavelli acquired a second job as Secretary to the Council of Ten for War. Although Machiavelli’s jobs involved domestic affairs, most of his time was spent as an international diplomat, analyzing the true intentions and capacities of the various countries involved in the Italian Wars. Several years later, in 1512, Piero Soderini, the First Minister of Florence, was overthrown by the Medici family. In 1513, after being fired, jailed, tortured, and released, Machiavelli wrote De principatibus – translated as The Prince. Machiavelli wanted to be reemployed by the Florentine government, so he wrote The Prince as advice for Florence’s ruler at the time, Lorenzo de’ Medici, in the hopes that Lorenzo would offer him his job back upon reading the pamphlet (Lotherington). Unfortunately, The Prince failed to win the favour of the Medici family, and it isolated him from the Florentine people. Fourteen years later, on May 16, 1527, the Florentine Republic was reestablished, and Machiavelli died soon after, on June 21 (history.com). However, in 1533, The Prince was officially published. In the years that followed, many rulers admired The Prince and followed its content absolutely, while others completely ignored Machiavelli’s most famous publication and ruled on their own terms. An example of a particularly Machiavellian ruler was Ivan IV Vasilyevich.
Ivan IV ruled Russia from 1547 to 1584, and he closely followed several aspects of Machiavelli’s writings in The Prince, especially the sections that relate to methods of dealing with the nobility, techniques of dealing with people, and tactics in regards to military and war.

In The Prince, Machiavelli wrote about how the nobility should be treated, and Ivan IV’s actions in regards to Russia’s nobility can be traced back to these writings. Machiavelli explains that “…from hostile nobles…[a ruler] has to fear not only desertion but their active opposition” (Machiavelli 32). Machiavelli knew that if the nobility were the enemies of the ruling class, then their struggle for power would inevitably lead to one of two things: abandonment or resistance from the nobles. Ivan recognized that Russia’s nobles were not agreeable towards him, so in order to gain their loyalty, he forced it by reducing their authority, thereby taking away their options and requiring their obedience. Firstly, Ivan limited the nobles power through his edits to the terms of military service and the process of choosing commanders. As explained by britannica.com: “The conditions of military service were improved…and the system of command altered so that commanders were appointed on merit rather than simply by virtue of their noble birth.” (Andreyev). The purpose of this was to limit the power of the nobility. Ivan IV accomplished this by appointing worthy soldiers as commanders of the military, instead of the nobles. Consequently, the nobles had less power because they had a lower status in the military. Secondly, Ivan IV decreased the noble’s power through landed estates. Landed estates are income-generated properties that requires no work from the owner. Instead of giving the nobles estates all in the same area, Ivan IV split the land across Russia, dividing up their relative strength and ensuring that they wouldn’t be able to challenge him. As stated by eiu.edu, in a venture to further reduce the noble’s influence, Ivan IV created “a new Royal Law Code, the судебник…the new Code circumscribed many of the powers of the [nobles]” (Carswell). Essentially, Ivan’s new laws decreased the dominance of Russia’s nobility, while also penalizing those guilty of abuse of power. 
Ivan hoped that by taking away the noble’s influence Russia, they would not be able to leave or fight him, and if they attempted to, they would have been found of breaking of the law. However, Ivan must have felt that these measures did not achieve their purpose, as later in his time as the tsar, Ivan IV reduced the strength of Russia’s nobles in a more aggressive way – an oprichnina. An oprichnina is an area of land that is controlled separately from the rest of the country by the sovereignty. As reported by britannica.com, Ivan IV’s plan behind the oprichnina was to “destroy the economic strength and political power of the… high nobility” (Andreyev). Ivan IV created a 1,000 – 6,000 men squad of bodyguards, called the oprichniki, to supervise Ivan IV’s carefully selected oprichninas. The oprichniki presided over the oprichninas with impunity, and walked over everyone, except those in Ivan’s immediate circle, including the nobility, and innocent citizens. The oprichninas were eventually shut down after seven years, because they prevented the oprichniki from defending against attackers, leaving Russia vulnerable (britannica.com). Ivan’s second attempt to weaken the nobility was not entirely effective – his only accomplishments were furthering the instability of his country and murdering an obscene amount of innocent civilians along with a couple of nobles. Regarding Ivan’s methods of power reduction for the nobles, his drastic change in tactics – from altering the written legislation to military force – is best explained by this quote from The Prince: “…from [the nobles]…the prince must guard himself and look upon them as secret enemies” (Machiavelli 32). Machiavelli knew that even if a ruler was not engaged in a feud with the nobility, he must view them as his enemies at all times, in order to protect themselves and the ruling class. 
Otherwise, Machiavelli’s earlier warning about the nobles becomes relevant, and the ruler has to worry about the nobles leaving and/or challenging him. Ivan IV followed Machiavelli’s direction in The Prince unquestioningly, stripping away the nobles’ power in the military and thereby reducing their influence over Russia’s armed forces, a critical asset if a betrayal of Ivan were to succeed. Ivan IV still wasn’t satisfied, so, several years later, he imprisoned many of the nobles in the aforementioned oprichninas and let the oprichniki slaughter them, along with anyone else they desired. During this time, Ivan let the nobles he trusted run the country while he retreated into the protection of the oprichniki. Even though Ivan’s efforts to diminish the power of the nobility were only partially successful, following Machiavelli’s advice still allowed him to undercut the nobles by reducing their power in several ways, and to murder a great many of them.

Early in his reign, Ivan religiously abided by Machiavelli’s advice in The Prince on the techniques that rulers should adopt when presiding over their citizens, but he unfortunately failed to do so in his later years as Russia’s first tsar. Ivan IV decided to attempt to appease the people in order to prevent a rebellion. To placate the Russian citizens and reduce the chances of a future revolution, Ivan IV organized a town meeting that, as specified by eiu.edu, gave his subjects “the opportunity to voice complaints and present opinions concerning matters of the kingdom” (Carswell). Ivan knew that his kingdom wasn’t perfect. Ivan IV also knew that an uprising might be attempted if he did not endeavor to oblige the requests of the people with respect to the issues that needed to be fixed. So, Ivan IV organized a meeting to listen to the Russian people on all matters concerning the sovereignty. Unfortunately, nobody attended the meeting, but in spite of that Ivan IV still pledged to do better as Russia’s ruler (eiu.edu). Ivan’s attempt to preemptively conciliate the Russian public was his way of following Machiavelli’s advice about ensuring the happiness of one’s citizens: “The prince must…avoid those things which will make him hated or despised” (Machiavelli 58). Machiavelli knew that a ruler would never be able to last long if the citizens of his country disliked him. In order to prevent that, Machiavelli advised that a ruler should not do anything that would risk losing the approval of his people, thereby securing the ruler’s position for years to come. Ivan IV’s way of adopting Machiavelli’s counsel was to guarantee his popularity by fixing the problems brought up in the meeting. Again, even though nobody attended, Ivan upheld his promise to improve himself for several years, and Russia enjoyed a peaceful and progressive time in the early stages of Ivan’s reign. 
Alas, Ivan proved several years later that he was not a man of his word, as the oprichninas and the oprichniki showed. Still, that wasn’t where Ivan’s cruelty ended. After the oprichninas failed and the oprichniki had to be disbanded, Ivan IV became paranoid that the city of Novgorod was planning to upstage him. As described on dailyhistory.org, Ivan’s paranoia drove him to order the city to be “attacked…and had it sacked in an orgy of bloodshed and brutality that lasted weeks” (Whedland). Previously, a Russian town had removed itself from Russian rule and allowed itself to be controlled by the Polish (historycollection.com). It is rumored that those events, along with Ivan’s insatiable paranoia, drove him to completely ruin the town of Novgorod. Along with the majority of the previously decommissioned oprichniki, Ivan IV stormed the city in a month-long siege of murder, rape, torture, and theft. It was this event that earned Ivan the nickname “Ivan the Terrible”. Although Ivan IV’s actions were inexcusable, several of his techniques were particularly Machiavellian in nature. In The Prince, Machiavelli wrote about certain situations where cruelty should be permitted: “A prince…must not mind incurring the charge of cruelty for the purpose of keeping his subjects united and faithful” (Machiavelli 52). Machiavelli’s basic belief behind this excerpt is that rulers should consider brutality acceptable if their citizens need a reminder to behave, or if the country requires reunification. Although Ivan IV’s paranoia about a permitted Polish invasion was unsubstantiated, in his own mind his actions were excused from a Machiavellian perspective, because he believed that the behavior of the Novgorod citizens necessitated a correctional intervention. In The Prince, Machiavelli wrote extensively about practices rulers should adopt when overseeing the citizens of their country, and Ivan appropriated several of them. 
In the beginning of his reign, Ivan IV allowed the Russian citizens to voice their concerns in order to address them and maintain the peace. Later in his time as tsar, Ivan set aside his reluctance toward cruelty in order to put suspected rebels in their place. Both of Ivan IV’s actions show that Machiavelli’s recommended ways of dealing with the people of a country are extremely useful.

Ivan IV followed Machiavelli’s The Prince very closely when it came to the military and war. Just a couple of years after becoming Russia’s first tsar, Ivan IV made drastic changes to Russia’s military and its policies. He transformed and expanded the military while changing the chain of command. Additionally, Ivan sent Russia into several lengthy wars for the majority of his time as Russia’s tsar, all based on Machiavelli’s writings. One Machiavellian approach Ivan IV utilized concerned the actions a ruler should take when engaged in conflict: “…[if] armies are to be used…the prince must go in person” (Machiavelli 40). Machiavelli advised that if conflicts were to be engaged in, the ruler should be present alongside his troops, because he knew that a country’s military would be considerably more successful in its engagements. So, when sending Russia’s armed forces into Novgorod, Ivan decided to join the attack alongside his troops, and he secured a victory against the citizens of Novgorod. But before this occurred, according to britannica.com, “The conditions of military service were improved…and the system of command altered so that commanders were appointed on merit rather than simply by virtue of their noble birth” (Andreyev). The first purpose of these reforms was to render those serving in the military completely dependent on Ivan IV and the sovereignty. Ivan IV achieved this by compensating the service gentry with landed estates. This kept the soldiers forever indebted to Ivan, because they only made money as long as they had the property, leaving them unable to do anything but serve Ivan. Additionally, these estates did not provide enough income to properly support their owners. This left the majority of Russia’s soldiers in serious debt, as they had no other way of purchasing what they needed. Cleverly, the tsar allowed the repayment of debt with loans from special gentry banks owned by the crown. 
This left the soldiers once again indebted to the sovereignty through their loans. The second purpose of these reforms was to limit the power of the hereditary aristocracy. By designating commanders based on their worthiness rather than on noble birth, Ivan IV took power away from the nobles and dramatically altered both the distribution of power within Russia’s controlling nobility and their influence over the military. Ivan’s changes to Russia’s military didn’t end with the refinement of the stipulations of service and the changes to how commanders were nominated. According to dailyhistory.org, Ivan also “introduce[d] western military technology into Russia and this was one of his greatest innovations” (Whedland). Ivan IV’s adoption of western technology allowed him to expand and strengthen his military. In turn, that technology allowed Ivan IV to advance in the several wars he engaged in throughout his time as the tsar of Russia. All of Ivan IV’s tactics regarding the transformation and expansion of his military relate back to a quote from The Prince: “…never remain idle in peaceful times, but industriously make good use of them, so that when fortune changes…[one will be] prepared to resist…blows and to prevail in adversity” (Machiavelli 47). Machiavelli believed that rulers would inevitably be challenged during their lifetimes, and that to combat this, military strength and size should be increased during times of nonviolence. Machiavelli knew that if rulers did not capitalize on peaceful times and increase the capacity of their military, then they would inevitably lose their power. Machiavelli’s writings and Ivan IV’s actions match up, as one of Ivan IV’s first acts as tsar was revising the policies of his military while also increasing Russia’s capacity for war. Although Ivan’s attempted military conquests proved to be fruitless and largely unsuccessful, Ivan did have a generation of soldiers who were loyal to him and the crown. 
Furthermore, Ivan IV’s actions and Machiavelli’s advice concerning the single-minded focus on war also aligned. As told by britannica.com, “Russia was at war for the greater part of Ivan’s reign” (Andreyev). Ivan IV’s main focus during his time as the first tsar of Russia was conquering other countries. Early in his rule, Ivan launched several unsuccessful attacks against Kazan, a Turkic state bordering Russia. A couple of years later, after thorough preparation, Ivan IV successfully defeated the town of Kazan in 1552, before also annexing Astrakhan four years later. The Volga River is part of a trade route to the Caspian Sea, and Ivan’s takeover of Kazan and Astrakhan secured that section of the journey. But Ivan still needed to push his way to the sea beyond the end of the Volga River. Therefore, two years later, Ivan IV waged war on Livonia (present-day Latvia and Estonia) in an attempt to institute Russian rule. Russia was triumphant in its battle against Livonia, but Poland fought back several years later, pushing deep into Russian territory, while Sweden reclaimed parts of Livonia. An intervention was staged by Pope Gregory XIII at the request of Ivan IV, and Poland and Russia agreed on a treaty. Russia lost all of the land it had claimed in Livonia, and a few years later, Russia was forced to give up its land on the Gulf of Finland in an armistice with Sweden. Essentially, Ivan IV’s relentless pursuit of war, which consumed the duration of his reign, proved to be entirely pointless. However, Ivan IV’s persistence in warring was adopted from a tactic written about in The Prince: “A Prince should therefore have no other aim or thought…but war and its organization and discipline…” (Machiavelli 46). Fundamentally, this Machiavellian principle states that a ruler should always be preparing for, or engaging in, war. Although Ivan’s wars were ultimately ineffective in achieving anything, he followed Machiavelli’s principle exactly. 
Ivan IV was always engaged in conflict, and had little time for anything else. Although the majority of Ivan’s military conquests ultimately proved profitless, following Machiavelli’s advice in The Prince still brought Ivan IV some success, suggesting not only that Machiavelli’s writings are invaluable, but that Ivan IV agreed wholeheartedly with Machiavelli.

Ivan IV Vasilyevich followed Machiavelli’s writings faithfully, particularly those regarding strategies for dealing with the nobility, techniques of ruling, and military and war tactics. Machiavelli recommended that rulers maintain a peaceful but not trusting relationship with the nobles, in order to prevent abandonment or conflict, while also guarding themselves from them. Ivan adhered to Machiavelli’s advice, nonviolently taking away the nobles’ power in order to continue a peaceable relationship, thereby limiting the damage if a betrayal were to occur, before violently ripping their influence – and their lives – away. Machiavelli also suggested that a ruler should not participate in activities that would look unfavourable to the public, while also advocating the use of cruelty if it reunited the citizenry, tactics that were both appropriated by Ivan IV. Finally, Machiavelli advised that a ruler should always be preparing for, or entering into, war, making ready in times of peace. Ivan IV utilized this Machiavellian approach by changing several aspects of Russia’s military policy while fighting wars for the majority of his reign. Although Ivan was not the most successful Machiavellian ruler, he still managed to create a system that indebted the Russian citizens and nobles who served in the military to the ruling class, strengthening their loyalty to Russia. Ivan IV also increased his power at the expense of the nobles, weakening the nobility and reducing the chance of a betrayal. Lastly, Ivan IV aimed to please the Russian people by fixing their issues with the ruling class, while gaining large amounts of land in several military expeditions. Unfortunately, both of these accomplishments became undone in Ivan’s later ventures. Although Ivan failed to truly understand much of Machiavelli’s advice, it is important that we understand the core message of several of his quotes. 
Some of the most crucial lessons that can be learned are that leaders should stand alongside their people when sending them into difficult situations, that unlawful things should not be participated in, and that peaceful times should be used to prepare for future adversity. Sadly, Machiavelli never knew how many rulers would eventually follow his advice in The Prince, but fortunately, he and his words live on through the rulers who utilize his writings.

Digital piracy – China and the US

In the United States, the actions of an individual are often perceived to fit into two distinct categories: right or wrong. This line of crystal-clear judgement is ingrained into the minds of most citizens, as they are raised to believe that the laws of the United States are what definitively separate the heroes and villains of society. Things are unfortunately not always so clear, and with the emergence of new technologies, an unusual wave of crime proved to add more complexity than ever, and on a worldwide scale. This infraction was coined “digital piracy” and is considered the illegal act of duplicating, copying, or sharing a digital work without the creator’s permission (Ingram). The U.S. has attempted to prevent this theft through a multitude of laws and regulations, but the issue has only worsened internationally, as China is the greatest producer of pirated materials in the world, with astounding rates reaching upwards of 90% (Evans). U.S. citizens often conclude from this information that China is overflowing with criminals, but that is far from the truth. Many studies have shown that attitude is the leading factor shaping behavior, and this knowledge is certainly applicable to behaviors of piracy. Because the beliefs of an individual are what shape their actions, understanding the different histories and values of China versus the U.S. could lead to a greater understanding of why this crime occurs. If these cultural differences are acknowledged, rather than ignored, it could help to diminish digital piracy’s negative impact on both countries.

In the first place, the two countries have followed completely different paths to reach their current intellectual property laws. In the United States, the conversation around copyrighted material began in the early 1700s, as many authors found their works stolen and could not be reimbursed for the theft. In 1709, Daniel Defoe, author of Robinson Crusoe, famously said:

Why have we laws against house-breakers, highway robbers, pick-pockets, ravishers of women, and all kinds of open violence [and yet no protection for the author]? When in this case a man has his goods stolen, his pocket pick’d, his estate ruined, his prospect of advantage ravish’d from him, after infinite labour, study, and expense. (Bently)

This statement encapsulates the feelings of many writers at this time in history. Intellectual property was easily stolen from the creator, and there were no laws in place to prevent the act. However, this all changed in May of 1790. The United States Congress passed the Copyright Act of 1790, which gave protection to authors and inventors for 14 years, with a renewable 14-year term if the author was still alive. This law ensured that the original creator could benefit from the capitalization of their product and be the sole seller of their commodity in the market. The act officially opened up the discussion over copyright laws, which continued to be altered as the surrounding country evolved (Copyright Act of 1790).

The most extensive revision to this law occurred in 1976, as the United States struggled to keep up with the demands of pressing technological advancement. Just seven years before this revision, the computer network which eventually became the internet was created, consisting of only four nodes. It has roughly doubled in size every 14 months since its invention, and continues to do so into the 21st century, as there are now over one billion nodes. In the 1970s this level of growth was incredibly worrying, as the internet was making intellectual property theft more accessible than ever before (Bridy). Because of these radical changes in technology, Congress updated the law to directly address the issues of the new, digital age. The law addressed digital piracy, stating that theft of copyrighted material is illegal in both physical and digital formats. The law asserts that piracy is a serious threat to the economy and its citizens, and says that effective criminal enforcement of these intellectual property laws is among the highest priorities of the Attorney General. This shows how highly intellectual property is regarded by U.S. citizens, as intellectual property ownership is an integral feature of capitalism. Appendix F, section 503 of the copyright law asserts this belief, saying that digital piracy reduces jobs and exports and decreases the overall competitiveness of American industry. Intellectual property is revered in the United States’ culture, as individuals value their personal work highly and would like to reap the profits for themselves (Copyright Law of the United States).

Contrary to the U.S., China followed a much longer, slower journey to establish a sensible copyright system. The United States is a relatively young country compared to the elder China, so the deliberation over copyright laws dates back much further in China’s past. Most scholars believe the notion of copyright emerged within the area at the same time as the invention of printing, which dates back to the Tang Dynasty (A.D. 618–906). This concept was not put in place for the citizens, but rather for the dynasty. They wanted a way to regulate what was being printed for the public, and this helped them control the publication of unwanted materials. The focus throughout much of China’s history revolved around maintaining this authority of the state, and in the 1700s, discomfort over this notion began to increase. The Chinese government still controlled what was being printed for the public, but at the same time the United States and England were spreading the idea of intellectual property to the world, even enacting laws before the 19th century began. At the same point in history, the discussion over intellectual property was intensifying in both the United States and China, but in opposing ways (Alford).

Subsequently, when the People’s Republic of China was founded in 1949, the notion of establishing legitimate copyright laws was under more consideration than ever before. The first form of legal copyright in China was enacted on August 11, 1950, and was titled the Interim Regulations Concerning the Grant of Rights Over Inventions and Patent Rights. This was established to provide inventors with a way to legally patent their products, preventing the theft of their individual ideas. Though it seemed like a great idea, this law was hardly ever put to use. In fact, between 1950 and 1963 only four patents and six certificates were ever granted to inventors. This exemplifies how Chinese citizens were discouraged from applying for a patent, as most property was considered to be shared amongst all (Ganea and Pattloch).

In addition, the first law to acknowledge the copyright of artistic works was enacted by China’s Congress in 1990. Before this point, such works (films or literature, for example) were not considered to be the creator’s intellectual property. This law allowed citizens to apply for full ownership of this property and granted them the ability to sue infringers. However, it was still followed quite poorly by both Chinese citizens and legislators. Western lawyers often criticize the Chinese legislation of the time, as justice was hardly served to the people who broke this law. Judges were paid low wages and were often susceptible to bribes by the defendant. It was also difficult to communicate the importance of intellectual property to Chinese citizens. Because the communist regime had a set wage for the citizens, having ownership of an intellectual product did not seem to provide much benefit to the creator. Furthermore, Chinese citizens tend to have a genuine appreciation of collectivism, and were often encouraged to share their ideas with others. Though these were all issues in response to the new law, it kickstarted China’s quest towards a better understanding of intellectual property (Kachuriak).

The United States was continually irritated by China’s negligence of these laws, as it led to multiple instances of theft of internationally copyrighted material. This came to a head in 1994, when the first official copyright case involving the U.S. occurred in China. Multiple children’s publishers in China were selling books which involved well-known Disney characters such as Mickey Mouse and Goofy, all without permission from the copyright holder. The Walt Disney Company filed a suit against the publishers, and though it won, the payment for damages was shockingly low at only $27,000. Situations like this provoked the U.S. to force China into more severe copyright laws, allowing China very little leeway for a compromise (Zhang).

After this incident, the United States took a severe approach by threatening to stop all trade with China if it did not fix the issue. Finally, in 1995, the two countries negotiated new copyright laws for China. Though the new laws did curb copyright infringement to an extent, in recent years the crime has become seemingly unstoppable. China is a huge world force with well over a billion citizens, and its current levels of piracy reflect the impact it has on the world. The piracy of U.S. materials in China has resulted in annual losses of approximately $827 million, and U.S. companies such as Disney and Microsoft have attempted to enter its market, but found it plagued with copyright infringers. Currently, the leading source of copyright infringement is the theft of software, as the leading advocate for the global software industry (the BSA) estimates that up to 94% of the software used in China is pirated. An example of this piracy at its worst occurred when it was found that China’s Shenzhen University had made 650,000 holograms which were virtually identical to those created by Microsoft, resulting in a loss of $30 million to the company. Occurrences like these are quite frightening to U.S. companies, discouraging them from selling, licensing, or transferring any of their goods to the country (Kachuriak).

Nevertheless, the United States is certainly not perfect either. The Motion Picture Association of America estimates that up to 600,000 films are being downloaded from the internet by various Americans every day. Though this may be immediately construed as a negative for the film industry, a study conducted by Carnegie Mellon University in 2016 attempted to test whether there are any positive outcomes of film piracy. The study hypothesized that there are two probable effects of film piracy. The first hypothesis states that movie piracy steals content, and therefore prevents the sale of the film. The second hypothesis, however, is one which is often overlooked. It asserts that spreading content illegally pre-release may actually generate enough word of mouth to increase sales once the movie is put in theaters. The study searched for answers by analyzing data from all wide-release U.S. movies from the years 2006 to 2008, and the numbers showed that in most cases box-office revenue would increase 15% if digital piracy were taken out of the equation. This equates to a loss of around $1.3 billion in revenue. Despite this fact, it has been speculated that some companies purposefully release their films early, based on the principles of the study’s second hypothesis. Only a minuscule share of companies (3%) would benefit from doing so, but this study shows how the majority of movie-makers are hurt immensely by the effects of piracy (Ma).

With all of the downsides to pirating, it is difficult to comprehend why it has become such a prevalent issue in society. Many studies have been conducted in an attempt to answer this question, and it has been almost unanimously concluded that attitude is the most important factor in committing an illegal act. This makes sense, as attitude is considered one of the most important concepts in social psychology, and in 29 out of 30 studies it was found to be what leads to piracy. In a study conducted in 2006 by Sulaiman Al-Rafee and Timothy Paul Cronan, it was discovered that a multitude of factors shape pirating behavior. Of the variables tested, the most significant factors were found to be age, deceitfulness, and high levels of positivity.

To elaborate on these factors, age was important because older subjects were more likely to have a distasteful view toward piracy. This could be attributed to the fact that elderly subjects did not grow up accustomed to the digital world, and therefore do not view the act as a natural occurrence. The study then uses the psychology term “Machiavellianism” to describe a deceitful person, and it was found that higher levels of this trait often lead a person to pirate. People with this trait commonly exploit others to reach personal goals, and this often occurs when pirated material is resold for a profit by the copyright infringer. The last factor is slightly more surprising, but it showed that positive beliefs did connect quite strongly to behaviors of piracy. If a person thinks intellectual property theft will have a positive impact on their life, then they may be more likely to commit the crime without considering the consequences. These factors show how piracy is not just the act of criminals, but something which everyday citizens are capable of, and it can be traced back to the mindset they have acquired throughout their lives (Al-Rafee and Cronan).

Granted that this is true, this study can help explain why piracy is also an issue in China. The mindset shared by Chinese citizens revolves around collectivism, and many scholars believe this is the leading reason for the country’s outstandingly high piracy rates. Scholars such as Wendy Wan have studied the attitude of Chinese consumers, and argue that Chinese citizens often accept piracy because of their high levels of face consciousness. “Face” refers to a person’s social image and reputation, and those with “face consciousness” tend to put more emphasis on outward appearances than on inner attributes. This concept is known by many to be one of China’s most important cultural values. In Wan’s 2009 study, she tested whether high levels of face consciousness are what lead to piracy, and after interviewing 300 Chinese citizens, this was found to be true. Wan says in her discussion:

One side of the coin of face consciousness is extravagance, while the other side of the coin is a low moral standard resulting from materialism. Pirated CDs are more acceptable to those with face consciousness because they are more materialistic and have a lower moral threshold. Moreover, face consciousness leads to risk aversion which in turn leads to greater compliance with traditional practices. The Chinese traditional practice of learning is by copying. As a result, pirated CDs are even legitimate because they are consistent with that traditional practice. (Wan)

This is an important discovery, as it shows how China’s culture has had a significant impact on why its citizens pirate material. They have been raised with the cultural practices of sharing and copying, and this leads them to believe the act is acceptable. Similar to the United States, copyright infringers in China are often ordinary citizens, simply influenced by the beliefs they were raised with.

Studies such as these consistently prove important for understanding why piracy still regularly occurs around the world. It is based on the attitudes of individuals, along with the long-cultivated beliefs established through cultural practices. In the United States, individuality is placed at the utmost importance, and when mixed with capitalism, it yields a competitive atmosphere around intellectual property. In China, however, collectivism generates a sense of unity among citizens, and this sense of shared ownership leads them to want to share intellectual creations with one another. Collectivism also leads to a lessened sense of individual responsibility, as a crime committed is often viewed as shared by many, not just the act of one person. As a Chinese proverb puts it, “The law cannot apply if everybody breaks it.” It is imperative to acknowledge these differences between Chinese and American values when attempting to curb piracy, as this can finally lead to the root of the problem: attitude (Lu).

Up until now, there have been two popular methods utilized to stop piracy. The first is prevention, which attempts to make piracy so difficult that people do not find the act any more useful than legally purchasing the material. The second is deterrence, which uses the threat of undesirable legal consequences to stop infringers from wanting to pirate material in the first place. Both of these approaches have been employed by the United States and Chinese governments, yet to no avail. If the most recent data on attitude are taken into account, however, then there are new suggestions which could possibly lead to the desired results (Al-Rafee and Cronan). For example, research has shown that if consumer beliefs toward piracy are adjusted to view it more negatively, this would have the most substantial impact on their actions. If piracy were considered a shameful act, and advertised as such, perhaps this would affect consumers more than simply telling them it is illegal. Theories like this show the importance of utilizing all available research, and continuing to search for methods which line up with the facts is what may actually result in change (Husted).

All things considered, it is easy to judge China’s copyright issues from thousands of miles away, but perhaps they have more in common with those of the U.S. than initially expected. China and the U.S. have taken very different paths to reach their current intellectual property rights, and both countries still have a long way to go. Though the countries have differing values, this diversity should be embraced instead of judged. The collectivist and individualist ideals of these countries have vast differences, but both rest on attitude, and this should be the primary focus when attempting to stop digital piracy. This crime is built on complex motives and committed by average citizens, and there is no certain solution to the problem. However, current research shows that the United States and China must look past their differences in order to fix it, as their issues lie in the attitudes of their citizens, not their differences in legislation.

The Prince Book Analysis – control and power

When looking for control, how do you achieve it? And once you have it, how do you maintain your power? There are many ways to gain power, and even more ways to keep it, because so many different things can go wrong. All of these questions can be answered by reading the book “The Prince” by Niccolò Machiavelli. In the book, Machiavelli discusses how to achieve and maintain power: he first explains the different ways to gain power, then turns to the military side of being a ruler, and finally gives the example of Italy and its political troubles.

How do you gain power in the first place? There are four kinds of principalities: hereditary, mixed, new, and ecclesiastical. The ecclesiastical principality is the least common because it is ruled by the Catholic Church. The mixed principality arises when a ruler annexes another state or states into an already existing one. New and hereditary principalities are the most common. A hereditary principality is just what it sounds like: the new ruler has inherited it from his father or another family member.

New principalities can be acquired in a few different ways. First, they can be acquired through a ruler’s own power, mainly through war. Second, through criminal acts or extreme cruelty. Regarding Agathocles, the Sicilian who later became King of Syracuse, Machiavelli writes:

“…having devoted himself to the military profession, he rose through its ranks to be Praetor of Syracuse. Being established in that position, and having deliberately resolved to make himself prince and to seize by violence, without obligation to others, that which had been conceded to him by assent, he came to an understanding for this purpose with Amilcar, the Carthaginian, who, with his army, was fighting in Sicily. One morning he assembled the people and the senate of Syracuse, as if he had to discuss with them things relating to the Republic, and at a given signal the soldiers killed all the senators and the richest of the people; these dead, he seized and held the princedom of that city without any civil commotion. And although he was twice routed by the Carthaginians, and ultimately besieged, yet not only was he able to defend his city, but leaving part of his men for its defence, with the others he attacked Africa, and in a short time raised the siege of Syracuse. The Carthaginians, reduced to extreme necessity, were compelled to come to terms with Agathocles, and, leaving Sicily to him, had to be content with the possession of Africa.” (Pg 18).

Agathocles rose to power through cruelty and criminal acts. Besides rising through criminal acts or one’s own power, a prince may rise through the power of others. Finally, there is the rise to power by the will of the people, which is a civic principality.

After your rise to power you will face many problems, chief among them war and the threat of war. Machiavelli states on page twenty-four, “a prince who has a strong city, and had not made himself odious, will not be attacked.” If you have a strong defense, the enemy will think twice before attacking, and more often than not will not attack at all. On the offensive side of war, Machiavelli never says much about troops for attacking, but he insists that a ruler should be well versed in the art of war: “A prince ought to have no other aim or thought, nor select anything else for his study, than war and its rules and discipline.”

Machiavelli also believes a ruler should never call in reinforcements for a war other than his own men, as stated on page twenty-eight: “experience has shown princes and republics, single-handed, making the greatest progress, and mercenaries doing nothing except damage.” He likewise states his dislike of auxiliary and hired soldiers on page thirty: “Auxiliaries, which are the other useless arm.” He cites Hiero the Syracusan, whose mercenaries were of no use: Hiero had them cut to pieces after deciding he could neither keep them nor let them go. Afterwards he went to war with his own men, not foreigners and aliens.

Now, concerning Italy and its political situation and troubles, Machiavelli mentions Italian rulers and how they lost their power to rule. Machiavelli believes that rulers should always be doing something productive for their country. As a ruler, you and your country should always be self-sufficient and not rely on others.

A ruler should not rely on fortune either, because fortune controls part of human affairs. Free will controls the rest, leaving the prince free to act but never in full control. The prince will never be able to control everything without becoming a communist country. When relying on fortune, it is better to control part of the country than none of it at all.

How will you gain control and maintain your power? There are many different ways to gain power; then you must keep that power through military force, and remember not to repeat the mistakes of Italy. “Everyone sees what you appear to be, few really know what you are” (Pg. 40). How will people see you?

Marxist Feminism and Marxism on state (draft)

Introduction

The state is perceived as an abstraction (Edelman, 1964: 1). There is no exact definition of the state. However, this conceptual abstraction can be interrogated using different theoretical approaches in order to explain the role of the state (Hay, Lister, 2006: 4). Dunleavy and O’Leary (1987) divided the definition of the state into two forms: an organizational definition and a functional definition. The organizational definition explains the state as a set of institutions controlled by the government, where government is regarded as the process of making rules, controlling, guiding, or regulating. The functional definition depicts the state as an institution that possesses goals, purposes, or objectives and that functions to maintain social order (Dunleavy and O’Leary, 1987: 1-4).

This essay attempts to compare Marxist and feminist theories in depicting the definition and role of the state. Both Marxists and some feminists characterize the state as a set of social relations: where Marxists see it as class relations between bourgeoisie and proletariat, feminists identify it as gender-class relations between male and female (Kantola, 2006: 123).

Furthermore, as Colin Hay notes in his essay “(What’s Marxist about) Marxist State Theory”, Marx and Engels never explicitly developed a single, unified conceptual framework of the state (Hay, 2006: 65). However, their ideas about the state can be traced and developed. Referring to Ralph Miliband and Nicos Poulantzas, Hay discusses two different ways of thinking about the nature of the state: instrumentalism and structuralism. Instrumentalist Marxism, advanced by Miliband, perceives the state as a neutral body controlled and manipulated by the dominant class in capitalist society in order to serve its interests. Structuralism, by contrast, presents the state as an objective structure imposed by the mode of production (Hay, 2006: 71-76).

Moreover, in order to investigate the theory of the state comparatively, this essay will focus on instrumentalism to analyse the state in Marxism and compare it with the concept of the state in feminism. The relevance of the comparison will also depend on choosing the pertinent strand of feminist theory. Feminism is typically divided into several main strands: liberal feminism, radical feminism, and Marxist feminism. To produce a compatible comparison, the strand best suited for comparison with instrumentalist Marxism is liberal feminism, which marked the first wave of feminism. As an outcome, this essay is expected to provide a proper argument about which theory offers the more relevant account of the concept of the state.

The History of the State Concept

The term “state” was first used in the Mesopotamian era around 3000 BC to indicate the systems and processes of political governance. Yet the concept of the state did not appear until the transition from hunter-gatherer societies to organized agriculture. Its emergence in the era of organized agriculture was triggered by the absence of systematic control and adjustment of agricultural production. The need for an organized system of control capable of producing a valuable agricultural outcome led the state to exercise coercive power. In this era of organized agriculture, the state was understood to have control over a certain territory and the legitimacy to exercise power over its population (Hay, Lister, 2006: 5).

The concept of the state continued to develop from the ancient Mesopotamian era through to western Europe, where the concept of the modern state appeared. The modern state is defined as an institutional complex that claims sovereignty and political supremacy over a delineated territory, with control exercised by the government in charge. Moreover, the first written account of the state was given by Machiavelli in ‘The Prince’. Machiavelli depicts the state as a body in which a political regime claims and preserves a certain geographical area and is required to perpetuate that authority. In the subsequent development of the Renaissance era, the state was perceived as a distinct apparatus of government whose role was to maintain the rules it made in order to retain its position. In another version, Thomas Hobbes defined the state as a body with absolute and singular authority, to which both rulers and ruled owe their fidelity (Hay, Lister, 2006: 5-7).

To clarify the definition of the state, it is appropriate to examine it through the Marxist and feminist approaches.

Instrumentalism Marxism

In general, the state in Marxism is perceived as the social relations between bourgeoisie and proletariat in the context of the mode of production, with a focus on the economic dimension. Although this definition is derived from a Marxist point of view, Hay argues that the definition of the state offered by Marxist theory tends to be unclear or implicit (Hay, 2006: 60). This leads to uncertainty about the definition of the state rendered by Marxist theorists. Nevertheless, many Marxists have attempted to develop the concept of the state further. One of the prominent works that brings understanding to the Marxist concept of the state is instrumentalism. Instrumentalism, promoted by Ralph Miliband, sees the state as an instrument under ruling-class control. Instrumentalists identify the state as a neutral body that can be manipulated by the dominant class, in this case the ruling class.

The ruling class achieves this control because it is able to manipulate the state apparatus to serve its own interests.

Liberal Feminism

Judith Allen observes that “where feminists have been interested in the state their ideas on its nature and form have often been imported from outside” (Hay, Lister: 13).

Comparison

These theories also focus on class struggle, which explains the relationship between exploited and exploiter. As Marx and Engels wrote in the Communist Manifesto: “The history of all hitherto existing society is the history of class struggles.”

Does the end justify the means?

In any situation that arises, the methods and actions one commits in order to obtain a certain goal will always bring complications. Whether the final result is just or not remains to be decided by the population of that society and the generations that follow it. However, in order to get to that point, one must do whatever one can to attain what one wants. The phrase “the end justifies the means” argues that the process is not relevant and that the only things that matter are the results. Even in today’s fast-moving world, people would rather see something simply done than see how it is done. Niccolò Machiavelli believes in this “end justifies the means” concept, but I believe the issue with it is that others get hurt in the process while one pursues one’s goals without a thought for them.

From this phrase, certain individuals who commit unnecessary evils abuse its meaning to justify their malice and why it is acceptable for them to achieve their ends; this is due to the innate selfishness that all human beings possess. It does not matter how noble they believe their cause is or the results they envision, because no end is just if the means aren’t just as well. You want this. He wants that. She wants him. The child wants a toy. Students want no homework. Society is dominated by people who want what they can’t have. What we don’t have then becomes a mission that drives us to do anything in our power to accomplish it, even if it requires acts that go against what we believe in or were taught.

The previous statement can be proven by the history of the United States. America was built on this idea: the end justifies the means. We left the rule of Britain for a supposedly just reason, freedom, but in doing so we unjustly took the land from other peoples. By colonizing the lands of the Native Americans, people went against the very reason they left Britain. They left for rights but stole the right to land from the native people. Even Machiavelli says this is the worst crime of all, because the property of a man is more important than even a man’s life, as he exemplifies here: “above all, he should avoid the property of others” (Jacobus 92). These cases are still relevant today with the pipeline disputes all across America. Perhaps the most notorious is the Dakota Access Pipeline, where we are simply stealing land from the Native people yet again because of our selfishness. This desire to make oil production more efficient has crossed the line and taken over the sacred sites of the Standing Rock Sioux tribe. America wanted this land and got it. We got what we wanted, but in doing so we hurt the natives, stealing their lives by taking their property and possibly contaminating their resources. We are a society that disregards others when it benefits us.

America is a country dominated by gaming systems. The games we play come from our iPhones, PlayStations, Xboxes, or computers. When we play, our natural competitiveness drives us to do whatever we can to win, and the most common method is to cheat. Cheating can range from looking at another player’s screen in a shooting game to find them, to using third-party software for enhanced abilities, to tampering with another person’s controller. After one cheats and wins, how does the other person feel? We, the winners, feel great because we won, but what about the individual who knows we cheated to win? It hurts their feelings. It is stealing. We steal the win from them by any means necessary and leave them sitting there in sorrow. There was no fairness. Playing these games nurtures our selfishness and solidifies the idea of the end justifying the means.

In the end, we can always get what we want. However, was the process worth the success? Is hurting people on the journey toward your goals justified? One of the worst things a human can possibly do is harm another person. Not only can this belief lead to physical pain, but it can also hurt one emotionally. Therefore, I want to make the statement that the problem with believing the ends justify the means is the suffering of others, a problem we experienced back then and continue to experience today.

Analysing sustainability – Central Park Sydney/Riverside One Middlesbrough

COMPREHENSIVENESS:

A comprehensive sustainability framework covers a myriad of individual facets that in unison have a larger impact. To analyze the comprehensiveness of the two sustainability frameworks, five major aspects are taken into consideration: environmental, social, economic, cultural, and governance.

CENTRAL PARK, SYDNEY:

Environmental aspect:

Billed as “Australia’s greenest urban village”, Central Park chiefly addresses the environmental aspect of sustainability, with a major focus on energy and water (Centralparksydney.com, 2017). The tri-generation energy plant operates on natural gas to produce low-carbon thermal energy as well as electricity, significantly minimizing greenhouse gas emissions and addressing energy efficiency (Centralparksydney.com, 2017). Central Park Water, in turn, functions as a system for water-cycle delivery and management (Centralparksydney.com, 2017). The recycled water utility treats sewage, rain, and storm water to supply non-potable water, whereas potable water comes from the public water main (Centralparksydney.com, 2017; Financial Review, 2017; Network.wsp-pb.com, 2017). Through this natural approach, Central Park emphasizes resource efficiency to address water scarcity and reduce carbon emissions (Network.wsp-pb.com, 2017).

Economic aspect:

Central Park integrates an affordable and economically viable sustainability framework for its occupants. The recycled water network reduces the use of drinking water by 40-50%, minimizing residents’ expenses at the same time (Centralparksydney.com, 2017). Moreover, the tri-generation plant supplies energy for heating and cooling, hot water, and air-conditioning, hence minimizing electricity costs (Financial Review, 2017). The power plant is funded long-term through a $26.5 million Environmental Upgrade Agreement (EUA) for green infrastructure (Centralparksydney.com, 2017).

Social aspect:

The high-density living across 2,200 apartments in Central Park led to the consideration of harmony and community sustainability (Financial Review, 2017). Community consultation sessions were carried out in 2007-2008 to understand the necessities and perspectives of the local community, comprising residents, employees and owners of businesses, and planning and infrastructure stakeholders, so that these could be integrated into the final design of Central Park (Centralparksydney.com, 2017). The sustainability framework thus encompasses the social aspect of sustainability, i.e. communal living.

Cultural Aspect:

The sustainability strategy of Central Park also concentrates on the heritage and cultural aspects of the site and the locality as a whole. Located in the heritage suburb of Chippendale, “a mecca for art, design and culture”, Central Park began as an urban-renewal project on the Carlton United Brewery site, whose history dates back to 1835 (Centralparksydney.com, 2017; Financial Review, 2017). In an attempt to retain the cultural value, urban conservation experts ranging from archaeologists and heritage consultants to architects came together (Tzannes Associates, Sydney; Centralparksydney.com, 2017). Revitalizing buildings such as the flagship Brewery Yard building, pubs, and warehouses into semi-public venues housing historical artifacts is one of the attempts at preserving the cultural value of the site (Centralparksydney.com, 2017).

Governance:

The sustainability framework of Central Park was formulated with the participation of green living expert- the Institute for Sustainable Futures at the University of Technology Sydney and Elton Consulting (Centralparksydney.com, 2017). Moreover, Central Park emerged as a collaborative project involving a close integration between disparate ownership and stakeholders such as the developers (Frasers Property Australia and Sekisui House Australia), NSW Department of Planning and Infrastructure and the residents/ tenants of the mixed-use facility (Network.wsp-pb.com, 2017).

RIVERSIDE ONE, MIDDLESBROUGH:

Riverside One was developed with the concept of enabling sustainable lifestyles for the community’s residents, integrating the One Planet Living principles (Riversideone.info, 2017). Each of these ten principles is closely linked to an aspect of sustainability: the environmental and economic aspects cover zero carbon, zero waste, sustainable materials, sustainable transport, sustainable water, local and sustainable food, land use and wildlife, and equity and local economy, while the social and cultural aspects cover health and happiness, and culture and community (Bioregional, 2017). Hence, the management of facilities, provision of amenities, overall design, materials used, and construction practices adopted all correspond to the principles of One Planet Living (Thomson and El-Haram, 2011).

Environmental aspect:

Also known as Community in a Cube (CIAC), Riverside One was developed as a zero-carbon mixed-use development (Good Homes Alliance, 2014). Extensively considering the environmental aspect of sustainability, Riverside One encompasses a wide range of measures conforming to the highest environmental standards (Rose, 2012). The sustainable building materials include 400mm-thick exterior walls enclosing wood-fiber insulation, roof tiles made of recycled car dashboards, concrete made from recycled hardcore aggregate, and recycled oil pipelines for foundation piles (Parnell, 2012). Along with achieving high thermal performance through the building fabric, a biomass boiler running on wood chip caters entirely to internal heating and the provision of hot water, hence “exceeding an Eco Homes Excellent rating” (Frearson, 2013). Other environmental initiatives include power points for electric cars and planter boxes in the public spaces to address local and sustainable food (Sean Griffiths Output 4: Riverside One, Middlesbrough, 2011, n.d). However, other environmentally conscious principles of One Planet Living, such as locally sourced materials and rainwater harvesting, remain absent from the project (Wainwright, 2012).

Economic aspect:

One of the reasons Riverside One came to life was the area’s high unemployment rate; the concept revolved around stimulating economic activity and providing jobs within the mixed-use facility itself (Thomson and El-Haram, 2011). Similarly, local contractors were appointed for the construction (Sean Griffiths Output 4: Riverside One, Middlesbrough, 2011, n.d). Sustainable measures such as high ceilings to promote natural ventilation, high thermal insulation, and a biomass boiler for heating and hot water are collectively an economic benefit to the residents (Sean Griffiths Output 4: Riverside One, Middlesbrough, 2011, n.d). However, economic sustainability is not extensively covered in the framework.

Social/ Cultural aspect:

“CIAC is clearly a development of the Brutalist idea of housing a whole community in a single building” (Parnell, 2012). Incorporating an 82-unit apartment scheme above restaurant and commercial space, the idea was to deliver “a mix of unit types and occupiers within a volume” (Sean Griffiths Output 4: Riverside One, Middlesbrough, 2011, n.d.; Parnell, 2012). Riverside One addresses both the social and cultural aspects of sustainability under a single topic, culture and community, and accordingly stresses community interaction: the development includes courtyards, a community garden, shared amenities such as cycle storage and parking spaces, and interactive circulation routes that link the public spaces vertically (Frearson, 2013; Good Homes Alliance, 2014).

Governance:

A sustainable housing project, Community in a Cube (CIAC), Riverside One is a joint venture of the developers BioRegional and Quintain (Frearson, 2013). However, the partnership between the two companies ended after the completion of Community in a Cube, primarily due to the recession and a need for each to focus on its own business.

Comparison of Comprehensiveness:

Comparing the two sustainability frameworks in terms of coverage, Central Park is more comprehensive than Riverside One. Recognizing that the factors go hand in hand, Central Park emphasizes all five aspects of sustainability equally to deliver a comprehensive framework. Riverside One, on the other hand, focuses predominantly on the environmental aspect and only slightly on the economic and community (social/cultural) aspects, based on the ten principles of One Planet Living. Moreover, it draws no clear distinction between social and cultural aspects, and governance is not addressed appropriately.

RESILIENCE:

Addressing resilience in a sustainability framework allows the project to remain intact and efficient through changing circumstances over time. Resilience approaches can be categorized into economic and environmental measures that increase the self-sufficiency of the project over its functional life cycle.

Central Park:

The two major sustainable measures in Central Park cumulatively contribute to the resilience of the project. The energy efficiency of the low-carbon tri-generation energy plant is double that of a coal-fired power plant (Centralparksydney.com, 2017). This corresponds to environmental resilience, as the plant could reduce greenhouse gas emissions by 190,000 tonnes over its 25-year design life (Centralparksydney.com, 2017). Along with providing affordable energy, the reduced reliance on the local electrical grid is a form of self-sufficiency and economic resilience (Network.wsp-pb.com, 2017). The recycled water network, likewise, is an efficient system of potable and non-potable water utilization, adding to both economic and environmental resilience. A reliable and sustainable source of water is delivered to the occupants through the combined system of the treatment plant and the Sydney water mains (Network.wsp-pb.com, 2017).

Starting a blog/planting a church in Spruce Pine, NC

So, why am I deciding to start a blog in 2017?  Doesn’t it seem a little late?  Probably.  I’m sure I should have started this a long time ago, but I didn’t.  So, here I am now.  But before I answer why I’m starting a blog, I’ll have to answer why I decided to plant a church in Spruce Pine, NC.

Coming to the conclusion to plant a church was not an easy task.  It was a lot of prayer, a lot of sleepless nights and a lot of discussion over meals with close friends and family.  But, what kept me up at night?  It was passages of Scripture like the beginning of Psalm 42:

“As a deer pants for flowing streams, so pants my soul for you, O God. My soul thirsts for God, for the living God. When shall I come and appear before God?  My tears have been my food day and night, while they say to me all the day long, “Where is your God?” These things I remember, as I pour out my soul: how I would go with the throng and lead them in procession to the house of God with glad shouts and songs of praise, a multitude keeping festival.”

What David is saying here is “I feel like an animal that is dying of thirst.  When can I meet you again, God?”  David is in agony here.  He’s been so close to the Lord but for some reason he can’t seem to get there again, so he’s frustrated, sleepless, weeping, pleading and remembering Scripture that reminds him of God’s faithfulness while he calls out to know Him more.

Again David in Psalm 63 says:

“O God, you are my God; earnestly I seek you; my soul thirsts for you; my flesh faints for you, as in a dry and weary land where there is no water.”

The NIV version of the Bible uses the word “yearns” here instead of “faints”.  Yearn… that’s an intense word, right?  Like it’s this deep wanting, that’s hard to explain.  This “I just have to have it” kind of mentality.  We often like to talk about how “Jesus is our friend” but here’s the thing.  I have some close friends.  But I don’t “yearn” for them.  My soul doesn’t thirst for them.  If we could be honest for a moment, doesn’t this almost sound lustful?  Don’t freak out, I said “almost”.  And I could pick a dozen passages that mirror this deep longing for the Lord.  But it’s not just biblical guys like David, the Prophets or Paul.  Let’s look at a couple other guys.  Martin Luther says:

“Oh I wish to devote my mouth and my heart to you…do not forsake me, for if ever I should be on my own, I would easily wreck it all.”

Spurgeon says:

“I thank Thee that this, which is a necessity of my new life, is also its greatest delight. So, I do at this hour feast on Thee.”

I’ll move on after this last one because it may just gross you out a bit.  Brother Lawrence said:

“I have at times had such delicious thoughts on the Lord I am ashamed to mention them.”

What do you do with that?  That’s extreme right?  Like without the word “delicious” that quote makes sense.

But the point is, passages and quotes like this kept me up at night.  They kept me tossing and turning.  Why?  Because men biblically and historically longed for the Lord with agony, with all their heart, soul and might.  The thing that kept me up at night was this question, “Why don’t we?”  Why don’t we long for Jesus like that?  Why are we so content with the way things are?  Why aren’t we bothered by such a gap in our attitudes towards the Lord?  Like I just don’t see Jesus being enjoyed like these men enjoyed him.

So, after much prayer and discussion, we planted a church to draw people into the kind of enjoyment of the Lord that we read about in the Bible and the history books.  To see lives transformed by the Gospel of Jesus.  We’re not looking to build just another church.  We want to transform Spruce Pine with the glory of Jesus, so that he might be enjoyed and yearned for and thirsted for.

Ultimately, we landed on this vision statement to encapsulate why our church exists:

We exist to see God glorified and enjoyed through Gospel-saturated Worship, Community, Service and Multiplication.

To extend the work of The Grove Church in seeing God glorified and enjoyed, I’ve decided to finally start a blog.  The blog will be the ramblings of a guy just trying to change the world with the message of Jesus.  We’ll talk about things I didn’t have time to dive into during sermons, current events, life, our community, Jesus and anything else that would glorify God.  Feel free to follow along or visit thegrovesp.com to find out more about our church.

Persepolis

Traditionally, graphic novels are thrown into the category of comic books.  This means they are usually not taken seriously and are assumed to be humorous. Persepolis, however, is much different from a traditional comic book. While it does use humor, it carries as much weight as a traditional novel. Marjane Satrapi makes her graphic novel humorous and enjoyable because it is filled with the playful innocence of her childhood memories. Children see the world in a different way than adults do. Satrapi uses real-life humor to make light of the critical situations she was exposed to growing up.  The innocent, childlike humor, along with the graphics, makes Persepolis easy to become absorbed in. Connecting with the characters is made easy by the humor revealed in their reactions to the horrors in their lives. The drawn images, paired with the comments on events, allow for easy visualization of facial expressions, moods, and reactions throughout the story that would otherwise be lost if Persepolis were a traditional non-fiction novel.

According to the Merriam-Webster dictionary, a graphic novel can be defined as “a story that is presented in comic-strip format and published as a book.” Imagery throughout a novel allows readers to create their own individual meanings for parts of the story. Imagination must be used to envision what the writer has put in front of you. However, when actual images are present along with the words of the story, less dependence is placed on the imagination. With graphics, authors have the room to portray most accurately the points they are trying to get across. For example, an author’s words alone may be taken seriously, but when paired with an image of a facial expression, it is revealed that the words are sarcastic.

Will Eisner singlehandedly pioneered the graphic novel. Eisner’s career began successfully in the early forties as he used his images to communicate with military members (Vulture). As the Vulture website states, Eisner is commonly known as the “father of the graphic novel.” This makes sense, considering he even coined the phrase: Eisner is quoted by Vulture as saying, “I had finally settled on the term ‘graphic novel’ as an adequate euphemism for comic book” (Vulture). Eisner also created the image included above. It is fitting that the “father of the graphic novel” would be the person to create such a great piece of art to be used in discussing himself. The image shows a man firmly grasping a boy by the arm, showing how intensely graphic novels would soon be hitting markets. Graphic novels have changed the world of writing and brought a new meaning to it, and Will Eisner is to be greatly thanked for his role in introducing them.

As graphic novels have recently shaken the world of literature, and continue to, one must ask: what makes Persepolis as great as it is? What would be lost if Persepolis were simply a traditional novel? When the fine words written by Marjane Satrapi are paired with magnificent drawings, the story of a revolution is made relatable. A simple history lesson on the Holocaust may begin to get boring after a while; however, while reading The Diary of Anne Frank or visiting a Holocaust museum, that simple history lesson is given new meaning and made real. Readers are able to truly connect to literature once the author gives them the right platform to do so. Had Persepolis been a traditional novel, it would be another boring history lesson. However, Satrapi knew the story well enough to understand that it would be better understood when paired with graphics. Without the images, readers would lose the personal connection felt with Marji. The view of Marji growing up and getting older would be lost, along with facial expressions, mood changes, and reactions. Persepolis is a unique story, and it relies heavily on the images throughout to fully portray the importance of the story and how it affected the lives of those involved. Satrapi’s pairing of literature and graphics allows a white, American girl to feel as though she too has felt the pain of and lived through the Iranian Revolution alongside Marji.

While there is obvious importance to the images in Persepolis revealing key parts of the story, there is a significance that may commonly be missed. Throughout the novel, readers are allowed to observe the progression of Marji’s growth. It is not written that Marji notices herself getting taller or realizes how much older she is getting, because the images clearly show the stages of her growth. Part of what makes Persepolis such a brilliantly executed graphic novel is the humorous innocence of viewing the story from the perspective of a child. While the literature in the novel is written from a child’s perspective, the accompanying images add a childlike effect to the entirety of the book. Children are notorious for drawing: whether scribbles or doodles, on walls or on paper, children begin to express themselves at an early age with what they draw. The drawings in Persepolis accurately tap into this fact to further intensify and exemplify the childlike viewpoint.

Marjane Satrapi accurately guides readers through the events of the Iranian Revolution. However, she does not offer textbook facts or extensive research; instead, she uses her childhood memories, telling the story of the Revolution as she remembers it and as she recalls living through it. The connection between readers and this real-life event would be significantly weakened if Persepolis were written as a traditional novel. Persepolis is meant to be a graphic novel so that the story of the Revolution can be portrayed in a way that would be lost without the accompanying images.

Levitz, Paul. “Will Eisner and the Secret History of the Graphic Novel.” Vulture, 10 Nov. 2015.
“Graphic Novel.” Merriam-Webster Dictionary, Merriam-Webster.
Satrapi, Marjane. Persepolis: The Story of a Childhood. Pantheon, 2003.

Application of mechatronics in medicine and treatments

Paragraph IA

The first and most significant application of mechatronics is in medicine. Medical robots have evolved continuously from the past toward the future, and one valuable part of this evolution is the history leading up to the present. The idea of using robots in medicine to treat soldiers injured on the front line in war began to take shape at the United States Department of Defense. Researchers at the National Aeronautics and Space Administration (NASA) Ames Research Center then began to work extensively to make robotics usable in medicine, and after NASA’s initiative, many research centers contributed to the overall development of medical robots. Today, many devices have been created to improve conditions in medicine, and robots must be reprogrammed, renewed, or rebuilt as technology and requirements change. For example, Soleimani, Moll, Wallace, Bismuth, and Gersak note that a transurethral resection of the prostate was accomplished using a unique PUMA robot in 1988 at Imperial College, London. After further development, the PUMA was transformed into SARP (Surgeon Assistant Robot for Prostatectomy), which was used successfully, perhaps for the first time, in robotic prostate surgery in 1991 at Shaftesbury Hospital, Institute of Urology, London, UK (2011, p. 617). The rapid evolution of robotics offers patients and doctors faster return to normal activities, shorter hospital stays, smaller incisions, quicker resolution of pain, minimal blood loss, and little scarring, and these are only some of the advantages of robotics in medicine; medical robots also remain in constant development. Another topic that needs to be examined is the future of these robots. Projects are under way that could shape the future in many areas, such as overcoming mechanical constraints, long-distance surgery, robotic surgery advancing diagnostics, surgical informatics, and simulation.
Different methods are being tried to bring these robotic processes into practice. Experiments are conducted with doctors and patients to find out what will change in the future as robotics in medicine evolves. For instance, Soleimani et al. report the remarkable effect of a simulation system on performance during a hazardous carotid artery stenting (CAS) procedure, based on an experiment in which 33 endovascular doctors with different degrees of CAS experience (unpracticed, partially practiced, or highly practiced) participated, particularly in the choice of catheters and in guiding them to the common carotid artery, as well as in selecting fluoroscopy angles (2011, p. 628). It is clear that with technological improvements and new learning techniques, the abilities of doctors and the medical care system progress.

Paragraph IB

Besides the rapid evolution of robotics in medicine, robots are useful in treatment. One use of robots in medicine is rehabilitation. Preising, Hsia, and Mittelstadt define it as follows: “Rehabilitation is the restoration of normal form and function after injury or illness, and rehabilitation engineering is dedicated to providing assistive equipment for the disabled” (1991, p. 14). Disabled people need help to satisfy their daily requirements and stay mobile. These individuals confront obstacles such as nursing-home and home-care expenses, and caring for them also takes a great deal of family members’ time. Preising, Hsia, and Mittelstadt explain that assistive devices for disabled persons offer the opportunity to reduce these expenses and to participate more productively in public life. People with these difficulties are helped by robot systems such as a modified HERO 2000 mobile robot (1991, p. 14). According to Preising, Hsia, and Mittelstadt (1991), in 1980 the money spent on the people and equipment that cared for the handicapped was roughly three thousand times more than the money spent on research and development of technology that would enable the handicapped to care for themselves ($210 billion versus $66 million) (p. 14). Considering this, far more was being spent on caring for the disabled than on technology that could help them care for themselves, which shows how much room rehabilitation robotics had to grow and makes us look more hopefully toward the future. Another way to benefit from robots in medicine is surgery. Two classes of robots are defined in the surgical area: surgical assistive robots and robots that actually perform the surgery. Surgical assistive robots are programmed to locate the target coordinates in the robot’s reference frame, and the surgeon can control the robot through its arm.
Before robots are used in real treatment, different kinds of tests are performed. Preising, Hsia, and Mittelstadt state that a 52-year-old man’s brain biopsy was taken using a surgical assistive robot in April 1985, the first verified example of a correct robotic biopsy (1991, p. 17). They also describe Arthrobot, which could be controlled by voice (20 pre-programmed commands) or a control panel and was created and tested at the University of British Columbia Health Sciences hospital in Vancouver, Canada. A patient’s limb must be held at an exact distance and kept in that position for a long time; a robot system used in more than 200 surgical operations prevented the undesirable limb motions that a surgeon cannot be expected to hold off indefinitely (1991, p. 18). In view of all this, it can be deduced that assistive robot systems are useful in diverse tasks during an actual surgical procedure, helping the surgeon by keeping tools in an exact position, clamping, and driving them to the desired location. Robots are also being designed to perform challenging and lengthy surgeries.

Diagonal Earlobe Crease May Predict Early Atherosclerosis

Diagonal Earlobe Crease May Predict Early Atherosclerosis Without Clinical Manifestation of Atherosclerotic Cardiovascular Disease

Introduction

Diagonal earlobe crease (DEL) is defined as a diagonal fold or wrinkle of the earlobe skin, extending from the tragus toward the earlobe, and can be seen in patients with coronary artery disease (CAD) (1). The condition was first reported by Frank in 1973 (2) and was recognized as a simple cutaneous marker to identify patients with CAD. In some studies, the intima-media thickness of the carotid artery was shown to be increased in patients with DEL, and it has been suggested that DEL is closely related to atherosclerosis (3,4).

Carotid artery intima-media thickness (cIMT), a widely-accepted radiological marker for atherosclerosis, reportedly predicts adverse cardiovascular events (5-7) and is closely linked to cardiovascular risk factors (8,9). Intima-media thickness (IMT) is a non-invasive, early marker of atherosclerosis; an increase in this measure may reflect an increase in cardiovascular risk (10). This is an independent predictor of CVD, and may be considered as a marker for the assessment of subclinical atherosclerosis (11).

Studies in adults have shown that the measurement of cIMT represents an excellent marker of subclinical atherosclerosis (12,13). The carotid artery has been the target in these studies because it is located rather superficially in the neck and can be easily visualized by ultrasound. Autopsy studies, however, have shown that the first atherosclerotic lesions actually begin to develop in the abdominal aorta (14). Because it is now possible to visualize the abdominal aorta and accurately assess its wall thickness (aortic intima-media thickness, aIMT), measuring aIMT might provide a better index of preclinical atherosclerosis in high-risk children than cIMT.

Although there have been a few studies on the association between DEL and cIMT (4,20), no research has directly examined the relationship between DEL and abdominal aortic (aIMT) or common femoral intima-media thickness (fIMT); in our literature review, this relationship had not yet been investigated. We therefore aimed to investigate the association of DEL with early atherosclerosis by comparing abdominal aIMT, which reflects preclinical atherosclerosis, between patients with and without DEL.

Method

Study Population

Asymptomatic subjects who were admitted to the University of Health Sciences, Trabzon Ahi Evren Cardiovascular and Thoracic Surgery Research and Application Center Cardiology Clinic for assessment of their cardiovascular risk profile, for screening and primary-prevention purposes, were systematically screened for the presence of cutaneous markers of cardiovascular disease. Among the subjects screened, 52 DEL cases were identified. Fifty-two subjects, propensity-score matched according to age and sex, were selected as a control group from the same population pool. DEL is defined as a diagonal fold or wrinkle on the earlobe skin, extending from the tragus toward the earlobe (figure 1). Patients with DEL in both earlobes were included in the DEL group.
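The control-selection step above can be sketched in code. This is a hypothetical illustration of 1:1 age- and sex-matched selection (greedy nearest-age matching without replacement); the function and field names are assumptions for the sketch, not the study's actual procedure, which used propensity scores.

```python
# Hypothetical sketch: for each DEL case, pick the as-yet-unused same-sex
# subject from the pool whose age is closest. Illustrative only.

def match_controls(cases, pool):
    """Greedy 1:1 matching: exact on sex, nearest on age, without replacement."""
    controls, used = [], set()
    for case in cases:
        candidates = [
            (abs(p["age"] - case["age"]), i)
            for i, p in enumerate(pool)
            if i not in used and p["sex"] == case["sex"]
        ]
        if candidates:
            _, best = min(candidates)   # smallest age difference wins
            used.add(best)
            controls.append(pool[best])
    return controls

cases = [{"age": 61, "sex": "M"}, {"age": 55, "sex": "F"}]
pool = [{"age": 54, "sex": "F"}, {"age": 70, "sex": "M"}, {"age": 60, "sex": "M"}]
print(match_controls(cases, pool))  # → [{'age': 60, 'sex': 'M'}, {'age': 54, 'sex': 'F'}]
```

A full propensity-score match would fit a model of DEL status on covariates and match on the predicted score rather than on raw age alone.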

Patients with moderate to severe valvular disease (including prosthetic valves), congenital heart disease, bacterial endocarditis, a hematological, oncological, or inflammatory disorder, a white blood cell (WBC) count >12,000/mm3, a hemoglobin level <10 g/dL, an ejection fraction <40%, renal insufficiency, liver or thyroid dysfunction, or thrombocytopenia or thrombocytosis, as well as those who had symptomatic vascular disease such as stroke, transient ischemia, coronary heart disease, congestive heart failure, or intermittent claudication, were excluded. After exclusion of patients who met the above criteria, 104 patients (52 with DEL and 52 without DEL) were included in the study. Informed consent was obtained from each participant, and the study was conducted in accordance with the principles of the Declaration of Helsinki. The study was approved by the local Ethics Committee.

History of hyperlipidemia (HL), arterial hypertension (HT), diabetes mellitus (DM), smoking, and family history of coronary artery disease (CAD) were recorded for all patients. Hypertension was defined as a known history of hypertension, use of antihypertensive drugs, or an average systolic blood pressure ≥140 mmHg and/or diastolic blood pressure ≥90 mmHg measured at least twice from both arms. Type II DM was defined as a fasting blood glucose level ≥126 mg/dl, a history of DM, or use of antidiabetic drugs. HL was defined as use of anti-hyperlipidemic therapy, a fasting total cholesterol level ≥200 mg/dl, a fasting low-density lipoprotein level ≥160 mg/dl, or a fasting triglyceride level ≥200 mg/dl. Family history of CAD was defined as CAD or sudden cardiac death in a first-degree relative under the age of 55 years for men or 65 years for women.
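The risk-factor definitions above amount to a set of threshold rules, which can be sketched as follows. This is an illustrative encoding only; the function and field names are assumptions, and reading the hyperlipidemia criteria as alternatives ("or") is one interpretation of the text.

```python
# Illustrative encoding of the risk-factor thresholds from the text.
# Field names (sbp, dbp, fbg, tc, ldl, tg) are assumptions for the sketch.

def classify(p):
    flags = {}
    # HT: known history, antihypertensive use, or SBP >= 140 / DBP >= 90
    flags["HT"] = bool(p.get("known_ht") or p.get("antihypertensive")
                       or p["sbp"] >= 140 or p["dbp"] >= 90)
    # DM: known history, antidiabetic use, or fasting glucose >= 126 mg/dl
    flags["DM"] = bool(p.get("known_dm") or p.get("antidiabetic")
                       or p["fbg"] >= 126)
    # HL: lipid therapy, TC >= 200, LDL >= 160, or TG >= 200 mg/dl
    flags["HL"] = bool(p.get("lipid_therapy") or p["tc"] >= 200
                       or p["ldl"] >= 160 or p["tg"] >= 200)
    return flags

patient = {"sbp": 150, "dbp": 85, "fbg": 110, "tc": 180, "ldl": 120, "tg": 150}
print(classify(patient))  # → {'HT': True, 'DM': False, 'HL': False}
```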

IMT assessment

cIMT, abdominal aIMT, and common fIMT were quantified with an Esaote Mylab 50 (Esaote Biomedica, Genova, Italy) device using a 7.5-MHz linear array imaging probe. The right and left common carotid arteries were selected for the study. The patients were placed in the supine position, with their heads turned away from the testing side and their necks slightly bent. The proximal and distal walls of the common carotid artery were aligned parallel to the transducer’s axis, and the lumen was maximized in the longitudinal plane. The IMT was quantified at a site 1 cm proximal to the carotid bifurcation, by determining the distance from the border between the arterial lumen and intima to the border between the media and adventitia. A mean cIMT was obtained by averaging a total of four cIMT measurements taken from adjacent sites 1 cm apart. An experienced physician (A.K.), who was unaware of the clinical and demographic data of the participants, performed all ultrasonographic examinations.

Two images of the far wall IMT were obtained in the distal 10 mm of the abdominal aorta proximal to the iliac artery. For aorta images, aIMT was calculated as the mean thickness along the 10-mm length, and a mean IMT measure was then computed from the 2 images to obtain the overall aIMT value used for analysis.
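The averaging described above (four adjacent-site measurements for cIMT; the mean of two per-image means for aIMT) can be sketched with illustrative values; the numbers below are made up, not study data.

```python
# Minimal sketch of the IMT averaging: the reported cIMT is the mean of four
# adjacent-site measurements, and the overall aIMT is the mean of two
# per-image mean thicknesses. All values are illustrative (mm).

def mean(values):
    return sum(values) / len(values)

cimt_sites = [0.52, 0.54, 0.51, 0.53]             # four adjacent carotid sites
aimt_images = [[0.88, 0.90, 0.86], [0.87, 0.89]]  # thicknesses along each image

cimt = mean(cimt_sites)                            # mean of the four sites
aimt = mean([mean(img) for img in aimt_images])    # mean of two per-image means
print(round(cimt, 3), round(aimt, 3))              # → 0.525 0.88
```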

Abdominal aIMT and common fIMT were measured with the same Esaote Mylab 50 (Esaote Biomedica, Genova, Italy) device, using the 7.5-MHz linear array imaging probe.

Statistical analyses

Data analysis was performed using SPSS (Statistical Package for the Social Sciences) for Windows 19 (SPSS Inc., Chicago, IL, USA). Continuous variables were described as mean±SD or median (minimum–maximum), and categorical variables were reported as frequency and percentage. The Kolmogorov-Smirnov test was used to evaluate the normal distribution of numerical variables. The independent-samples t test was used to compare normally distributed variables, and the Mann-Whitney U test was used to compare non-normally distributed variables between the two groups. Categorical data were analyzed using the chi-square test. A correlation analysis was performed to assess the relationship between continuous variables and was interpreted using Spearman’s rank correlation coefficient. The confidence interval was 95%, and p values <0.05 were considered statistically significant.
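For the non-normally distributed comparisons above, the Mann-Whitney U statistic can be sketched in a few lines of plain Python. This is a minimal illustration (no tie correction or p-value computation, which SPSS handles internally); the group values are made up, not study data.

```python
# Minimal sketch of the Mann-Whitney U statistic used for non-normally
# distributed group comparisons. Illustrative values, no tie correction.

def mann_whitney_u(a, b):
    # U for group a counts, over all pairs, how often an a-value exceeds
    # a b-value (ties contribute 0.5); the reported U is the smaller one.
    u_a = sum(
        1.0 if x > y else 0.5 if x == y else 0.0
        for x in a for y in b
    )
    u_b = len(a) * len(b) - u_a
    return min(u_a, u_b)

group_present = [1.04, 1.10, 0.98, 1.22, 0.91, 1.15]  # illustrative aIMT, mm
group_absent  = [0.87, 0.80, 0.95, 0.78, 0.90, 0.85]
print(mann_whitney_u(group_present, group_absent))  # → 1.0 (small U: groups differ)
```

A small U relative to its maximum (here 36) indicates little overlap between the two groups, which is what the significance test then formalizes.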

Results

Statistically, there was no significant difference between the DEL-present and DEL-absent groups in age, sex, BMI, hypertension, diabetes mellitus, dyslipidemia, smoking, or family history of CAD. Clinical and demographic characteristics are shown in Table 1. Statistically significant differences were observed between the DEL-present and DEL-absent groups in left and right carotid artery intima-media thickness (cIMT), aortic intima-media thickness (aIMT), and left and right common femoral intima-media thickness (cfIMT) (aIMT: DEL present 1.04±0.22 mm vs. DEL absent 0.87±0.15 mm, p<0.001; cIMT left: DEL present 0.52±0.12 mm vs. DEL absent 0.44±0.10 mm; cIMT right: DEL present 0.53±0.12 mm vs. DEL absent 0.43±0.10 mm, p<0.001; cfIMT left: DEL present 0.64±0.12 mm vs. DEL absent 0.54±0.13 mm; cfIMT right: DEL present 0.63±0.11 mm vs. DEL absent 0.55±0.12 mm, p<0.001) (Figure 2).

Discussion

This is the first study to investigate the relationship between DEL and abdominal aIMT and common fIMT in patients without clinical manifestations of CVD. The main finding of our study is that DEL was significantly associated with cIMT, abdominal aIMT, and common fIMT, independently of cardiovascular risk factors.

DEL is defined as a diagonal fold or wrinkle on the earlobe skin, extending from the tragus toward the earlobe. In 1973, Frank first described that a bilateral or unilateral prominent crease in the lobule of the earlobe was present in a large proportion of his patients who had one or more risk factors for coronary heart disease. Several clinical studies have subsequently examined the association of the diagonal ear crease with coronary atherosclerotic heart disease (15-22). A previous study also reported a correlation between DEL and carotid intima-media thickness and epicardial fat thickness (23). DEL was shown to be associated with vascular inflammation and oxidative stress (24), as well as with the cardio-ankle vascular index (CAVI), a subclinical marker of atherosclerosis (25). In a postmortem autopsy study of 520 patients, DEL was strongly associated with CAD in both men and women (26).

It was initially proposed that both the earlobe and the heart are supplied by end arteries without the possibility of collateral circulation, which could cause the simultaneous development of both DEL and CAD (28). Some researchers found degeneration of elastin, tears in elastic fibers, and pre-arteriolar wall thickening in cases of DEL (29). Although atherosclerotic changes in the arterial wall can include smooth muscle cell proliferation and accumulation of collagen and proteoglycans, degeneration caused by changes in the collagen-to-elastin ratio may be the final common pathophysiological pathway of both atherosclerosis and DEL (30).

Labropoulos et al. measured abdominal aIMT using transcutaneous ultrasound in a group of adults and observed increased aIMT in subjects with atherosclerosis (31). Autopsy studies have shown that the first atherosclerotic lesions actually begin to develop in the abdominal aorta (14).

Study limitations:

Several limitations of this study should be addressed. First, and most important, the number of patients was small. Second, because the study population consisted of patients who presented to our clinic, the results may not reflect the general population.

References

1. Lichstein E, Chadda KD, Naik D, Gupta PK. Diagonal ear-lobe crease: prevalence and implications as a coronary risk factor. N Engl J Med 1974;290:615–6.

2.  Frank ST. Aural sign of coronary-artery disease. N Engl J Med 1973;289:327–8.

3. Shrestha I, Ohtsuki T, Takahashi T, Nomura E, Kohriyama T, Matsumoto M. Diagonal ear-lobe crease is correlated with atherosclerotic changes in carotid  arteries. Circ J 2009;73:1945–9.

4. Celik S, Erdogan T, Gedikli O, Kiriş A, Erem C. Diagonal ear-lobe crease is associated with carotid intima-media thickness in subjects free of clinical cardiovascular disease. Atherosclerosis 2007;192:428–31.

5. Bots ML, Hoes AW, Kondstaal PJ, et al. Common carotid intima-media thickness and risk of stroke and myocardial infarction: the Rotterdam study. Circulation 1997;96:1432–7.

6. Chambless LE, Folsom AR, Clegg LX, et al. Carotid wall thickness is predictive of incident clinical stroke: the Atherosclerosis Risk in Communities (ARIC) study. Am J Epidemiol 2000;151: 478–87.

7. Longstreth Jr WT, Shemanski L, Lefkowitz D, et al. Asymptomatic internal carotid artery stenosis defined by ultrasound and the risk of subsequent stroke in the elderly The Cardiovascular Health Study. Stroke 1998;29:2371–6.

8. Celermajer DS. Noninvasive detection of atherosclerosis. N Engl J Med 1998;339:2014–5.

9. Davis PH, Dawson JD, Riley WA, et al. Carotid intimal-medial thickness is related to cardiovascular risk factors measured from childhood through middle age The Muscatine Study. Circulation 2001;104:2815–20.

10. Polak JF, Pencina MJ, Pencina KM, O’Donnell CJ, Wolf PA, D’Agostino RB. Carotid-wall intima-media thickness and cardiovascular events. N Engl J Med. 2011;365(3):213-21.

11. Polak JF, Pencina MJ, O’Leary DH, D’Agostino RB. Common carotid artery intima-media thickness progression as a predictor of stroke in multi-ethnic study of atherosclerosis. Stroke. 2011;42(11):3017-21.

12. Salonen JT, Salonen R. Ultrasonographically assessed carotid morphology and the risk of coronary heart disease. Arterioscler Thromb. 1991;11: 1245–1249.

13. Bots ML, Hoes AW, Koudstaal PJ, et al. Common carotid intima-media thickness and risk of stroke and myocardial infarction: the Rotterdam Study. Circulation. 1997;96:1432–1437.

14. McGill HC, McMahan CA, Herderick EE, et al. Effects of coronary heart disease risk factors on atherosclerosis of selected regions of the aorta and right coronary artery: PDAY research group: Pathobiological Determinants of Atherosclerosis in Youth. Arterioscler Thromb Vasc Biol. 2000;20:836–845.

15. Frank ST. Aural sign of coronary-artery disease. N Engl J Med 1973;289:327–8.

16. Doering D, Ruhsenberger C, Phillips DS. Ear-lobe creases and heart disease. J Am Geriatr Soc 1977;25:183–5.

17. Kaukola S, Manninen V, Valle M, et al. Ear-lobe crease and coronary atherosclerosis. Lancet 1979;2:1377.

18. Lichstein E, Chadda KD, Naik D, et al. Diagonal ear-lobe crease: prevalence and implications as a coronary risk factor. N Engl J Med 1974;290:615–6.

19. Shoenfeld Y, Mor R, Weinberger A, et al. Diagonal ear lobe crease and coronary risk factors. J Am Geriatr Soc 1980;28:184–7.

20. Elliott WJ. Ear lobe crease and coronary artery disease 1000 patients and review of the literature. Am J Med 1983;75:1024–32.

21. Mehta J, Hamby RI. Diagonal ear-lobe crease as a coronary risk factor. N Engl J Med 1974;291:260.

22. Jorde LB, Williams RR, Hunt SC. Lack of association of diagonal earlobe crease with other cardiovascular risk factors. Western J Med 1984;140:220–3.

23. Ziyrek M, Şahin S, Özdemir E, Acar Z, Kahraman S. Diagonal earlobe crease associated with increased epicardial adipose tissue and carotid intima media thickness in subjects free of clinical cardiovascular disease. Turk Kardiyol Dern Ars. 2016 Sep;44(6):474-80.

24. Koyama T, Watanabe H, Ito H. The association of circulating inflammatory and oxidative stress biomarker levels with diagonal earlobe crease in patients with atherosclerotic diseases. J Cardiol. 2016 Apr;67(4):347-51.

25. Levent Korkmaz, Mustafa Tarık Ağaç, Hakan Erkan, et al: Association between Diagonal Earlobe Crease and Cardio-Ankle Vascular Index in Asymptomatic hypertensive Patients. Med Princ Pract 2013;22:530–534.

26. Edston E. The earlobe crease, coronary artery disease, and sudden cardiac death: an autopsy study of 520 individuals. Am J Forensic Med Pathol. 2006 Jun;27(2):129-33.

27. Korkmaz L, Ağaç MT, Acar Z, Erkan H. Earlobe crease may provide predictive information on asymptomatic peripheral arterial disease in patients clinically free of atherosclerotic vascular disease. Angiology. 2014 Apr;65(4):303-7.

28. Friedlander AH, López-López J, Velasco-Ortega E. Diagonal ear lobe crease and atherosclerosis: a review of the medical literature and dental implications. Med Oral Patol Oral Cir Bucal 2012;17:153–9.

29. Wermut W, Jaszczenko S, Ruszel A. Ear lobe crease as a risk factor in coronary disease. Wiad Lek (in Polish) 1980;33:435– 38.

30. Zureik M, Temmar M, Adamopoulos C, Bureau JM, Courbon D, Thomas F, et al. Carotid plaques, but not common carotid intima-media thickness, are independently associated with aortic stiffness. J Hypertens 2002;20:85–93.

31. Labropoulos N, Zarge J, Mansour MA, et al. Compensatory arterial enlargement is a common pathobiologic response in early atherosclerosis. Am J Surg. 1998;176:140–143.

Rome’s Model and Idea

Introduction

Building and construction seem to have begun at the same time as human existence, since shelter was a requirement. Human beings needed to be shielded from environmental factors such as rain and solar heat. Although the structures back then were constructed of temporary materials such as grass and mud, they still required prior planning and preparation. As time went by, people came up with more complex designs and ideas that called for detailed planning. Building materials also took on a more permanent nature, leading to the construction of permanent buildings. Some of the earliest nations to come up with permanent structures with unique designs included the Egyptians and the Persians; others such as the Greeks and Etruscans were not left behind (Ambler, nd). The structures in the above-named nations were appealing on the exterior but had limited interior space due to the mode of construction used. However, it was the Roman architects who came up with designs that still inspire architects in the modern world and that were an inspiration during the construction and development of medieval cities (Ambler, nd). The structures in Rome had ample interior space and were appealing on the outside, making them outstanding (Ambler, nd). The Rome idea and model also include factors such as culture and Christianity, under which medieval cities developed. The aim of this article is to discuss the idea of Rome and how it inspired the development of other medieval cities.

The idea and model of Rome and how Medieval Cities developed from the same

It is important to look at the idea and model of Rome in order to better understand the inspiration drawn from it in the development of other medieval cities. Rome was characterized by peace and prosperity, sponsorship of high cultural practices while still considering the needs of the vulnerable in society, and a legal system that ensured justice for all; such was the model of Rome that inspired the idea of Europe (Nicols, nd). The peace among the people of Rome ensured that people such as architects worked together to come up with various structures. The characteristics of Rome were witnessed in early Europe as well as in modern Europe. Most medieval cities in Europe were developed based on peace and the cultural practices adopted from the model of Rome. The cities that came up after the fall of the Roman Empire carried on with the rituals that had been practiced in Rome before the fall. When it comes to peace, Europe has enjoyed three decades of peace as a result of the founding fathers of Rome, who ensured peace and tranquility (Nicols, nd).

As seen earlier, the idea and model of Rome revolved around many things such as building and construction, road networks, peace and prosperity, theaters (to ensure preservation of the Roman culture), legal system, national language and public works undertakings. All these led to the development of a collection of cities, giving rise to Rome. These cities developed as a result of urbanization and civilization which led to Romanization (Nicols, nd).  Small cities developed to become the bigger city of Rome. To this day, the urban culture is evident throughout Europe and it hails from the Rome idea and model. The idea and model of Rome has influenced various cultures across the world. The Roman law was adopted by nations such as the United States (Houghton Mifflin Company, nd). The legal system has been identified as an outstanding factor of the Rome model. The Roman literature is also a characteristic that has lived on even after the fall of the Roman Empire as people read it to this day.

From the introduction, Rome was identified as the first to construct structures with ample interior space. The Romans developed a bigger and more prominent arch, which was adopted during the construction of medieval cities (Houghton Mifflin Company, nd). Although the Romans were not the inventors of the arch, they were the first to build one that enclosed extensive interior area. With time, other people adopted the idea of Rome and started constructing longer and stronger arches.

Building and construction were one part of the model of Rome. The Romans constructed monuments that are still mentioned in history and admired to this day (Nicols, nd). Among the most significant items in Roman construction was the arch. As seen earlier, the arch was not invented by the Romans, but they improved it to the point that it was emulated by other cities across Europe. The Romans constructed a larger arch than those previously built by the likes of the ancient Egyptians and Greeks (Houghton Mifflin Company, nd). The Roman arch was constructed in such a way as to bear large amounts of force. The Romans used concrete to put up their buildings and hence were able to build large structures such as palaces and government premises (Houghton Mifflin Company, nd). The concrete was made from a mixture of lime and volcanic sand. From this concrete, the Romans constructed aqueducts and were thus able to supply water to their cities (Houghton Mifflin Company, nd).

The aqueducts ensured that the Romans had access to clean water, maintaining sanitary conditions throughout Rome. The ready supply of water also enabled the construction of public baths, which were enjoyed as a luxury (Ambler, n.d.) and later adopted by other medieval cities across Europe. The Byzantine architects of Eastern Europe and the Romanesque architects of Western Europe were the first to apply the Roman arch in the development of their cities (Houghton Mifflin Company, n.d.). Rome thus successfully provided a platform for the construction and development of medieval cities across Europe.

True urban planning began alongside the first true urban settlements, around 3000 B.C., in places such as Egypt, Mesopotamia, and the Indus Valley (Ellis, n.d.). Urban centers were designed to represent political, military, and religious dominance. The cities took two urban forms: planned and organic (Ellis, n.d.). The planned form belonged to the elite of society, while the organic form comprised residential areas that grew slowly in irregular patterns. While other cities grew slowly and followed set patterns, Rome was different. Roman architects and builders are known to have engaged in vigorous city-building, a characteristic that put Rome ahead of other cities and made it one to be emulated (Ellis, n.d.). Some Roman settlements experienced organic growth, but most were well planned. In fact, some European cities, such as London and Paris, grew out of former Roman urban centers (Ellis, n.d.). The model of Rome thus inspired the development of cities such as London and Paris.

‘Rome was not built in a day’; the saying is apt, as it took years to build the city. The time was well spent, however, as the planning and structures of Rome are still referenced in modern times. Beyond buildings, the Romans specialized in the construction of road networks and other infrastructure on a different level from that of the ancient cities of Egypt and Greece (Bee Breeders, n.d.). As seen earlier, Rome was built with concrete, and it is this aspect that enabled the Romans to construct structures stronger than those of the Egyptians and other ancient peoples (Houghton Mifflin Company, n.d.). According to Bee Breeders (n.d.), the Romans used concrete for several reasons: it was stronger than any other material of the time, it could be decorated with ease, it could be cast into many shapes, and since it was produced locally, it was cheap. Thanks to the versatility of concrete, the buildings of Rome were beautiful and finely designed, and many structures followed.

Roman cities were focused on an open plaza surrounded by prominent buildings (Ambler, n.d.). The plaza was the heart of the city, where the major temples and shrines were situated, along with the law courts. Other important buildings within the plaza included the curia, which was used for council meetings (Ambler, n.d.). Magnificent structures such as porticoes, colonnades, and fountains surrounded the plaza and attracted travelers to the beautiful city of Rome (Ambler, n.d.).

Rome had a vast road network and was the first city with a complex, widely spread system of roads connecting its cities to the capital (Bee Breeders, n.d.). Romans could easily reach different parts of the city of Rome and conduct business. They built their roads using a different method from other cities: three levels of substructure were laid beneath the paving stones, and the center of the road was inclined so that rainwater would drain off (Danxner.com, n.d.). The Romans also introduced road signs, some of which indicated the distance between settlements; road signs were used throughout the medieval period and are still used in modern Europe. Bridges and aqueducts were likewise created, some of which inspired construction in medieval cities across Europe and the world at large. Roman-era road networks survived into the medieval cities, including the roads of Lutetia (the cardo maximus rue Saint-Jacques, the cardo Boulevard Saint-Michel, and the cardo rue Valette) (Ellis, n.d.). The ideas of Roman architects are still used by modern-day architects (Bee Breeders, n.d.).

Medieval cities developed after the fall of the Roman Empire. Roman and medieval cities were linked by, among other things, bishops through churches, monasteries, cathedrals, and cloisters (Gutjahr, 1999). People who had been displaced during the Dark Ages were drawn back to the fallen cities by these spiritual focal points (Gutjahr, 1999), as the bishops had turned the centers of the old Roman cities into places of worship. Christians from various parts of Europe gradually arrived in large numbers, forming settlements that later became cities. It can therefore be said that the medieval cities grew out of the old Roman cities. Without premises such as the churches and cathedrals established in the old centers, Christians might not have moved back to the former settlements. These churches had initially been built within the plaza, the heart of the city.

Medieval cities also developed around the fortresses that the Romans had constructed before the fall (Gutjahr, 1999), including castles, kings’ palaces, and princely courts. The kings and princes who reoccupied these fortresses surrounded themselves with churches, which in turn attracted Christians back to the urban centers (Gutjahr, 1999). Immigrants felt safe close to the fortresses, and settlement areas developed around the castles and palaces.

The historic towns that had stood during the Roman era attracted people back to the urban centers, and these returning populations eventually grew large enough to form medieval cities. After the fall of the Roman Empire, not all residents left the cities, and it was these remaining inhabitants who revived them by attracting immigrants (Gutjahr, 1999). Buildings that had fallen during the wars were restored, and from such efforts medieval cities developed. The Romans had built strong structures, some of which survived the fall of the empire; amphitheaters, courts, and baths were repurposed as residential areas (Gutjahr, 1999). This, too, led to the growth of new cities from the Roman ruins.

Medieval cities also developed as people regrouped for economic purposes. People needed to continue developing economically and socially, and therefore had to come together (Gutjahr, 1999). They had been scattered during the wars, and although some feared a repeat, they soldiered on to better their lives. The peace that had existed under the Roman Empire extended into the medieval period. Though the regrouping was slow, cities eventually began to form, and people from different cities could connect using the road network that had survived the Dark Ages.

The Romans were good city planners, and their plans are still used to this day. They built extensively, and their military and colonial towns were highly planned (Danxner.com, n.d.). Roman architects ensured that their city plans allowed easy movement and paid attention to aesthetics, drawing up plans in which the town would be appealing to the viewer. Walls were built around the city of Rome to keep its occupants safe (Ellis, n.d.), and most medieval cities constructed on the Roman model also had walls around them. Cities as recent as Washington, D.C. have drawn on Roman building plans (Danxner.com, n.d.), and Roman plans were used in the construction of various medieval cities across Europe.

Conclusion

Rome is a city known for its dominance in architectural designs that have stood the test of time. Roman architects rank among the best in the world, as plans drawn up centuries ago are still in use in the modern world. As discussed in this article, Rome’s model and idea were used in the construction and development of medieval cities across Europe. The Romans pioneered the construction of arches using concrete, an achievement that remains outstanding to this day; the Egyptians and other ancient peoples built arches, but none as strong as those of the Romans. The Romans also constructed structures that were pleasing to the eye on the outside and offered ample space in the interior. The road networks, underground piping, and aqueducts built by the Romans were of magnificent design and informed the construction of other medieval cities. Most medieval cities across Europe rose from the ruins of the Roman Empire, as bishops, kings, and princes drew people back to the ruined Roman cities by means of the structures that had survived. This led to the growth and development of medieval cities.

References

Ambler, J. (n.d.). Roman Architecture. Khan Academy. Available at: https://www.khanacademy.org/humanities/ancient-art-civilizations/roman/beginners-guide-rome/a/roman-architecture [Accessed 27 August 2017].
Bee Breeders. (n.d.). How Roman Architecture Influenced Modern Architecture. Available at: https://beebreeders.com/how-roman-architecture-influenced-modern-architecture [Accessed 28 August 2017].
Danxner.com. (n.d.). The Influence of the Roman Empire. Available at: http://www.danxner.com/extramaterials/art003/Final_Project/Influences.htm [Accessed 28 August 2017].
Ellis, C. (n.d.). History of Cities and City Planning. Available at: http://www.art.net/~hopkins/Don/simcity/manual/history.html [Accessed 27 August 2017].
Ellis, C. (n.d.). Paris: The Development of Roman and Medieval Urban Forms. Available at: http://www.arch.ttu.edu/people/faculty/ellis_c/Paris_Lectures/2RomanandMedievalParispdf.pdf [Accessed 28 August 2017].
Gutjahr, C. M. (1999). Culture and History of Urban Planning: Part 4: Medieval Cities. Available at: http://artserve.anu.edu.au/htdocs/bycountry/italy/rome/popolo/melbourne.planning/Part4-Medieval_Cities.pdf [Accessed 28 August 2017].
Houghton Mifflin Company. (n.d.). The Influence of the Roman Arch. Available at: https://www.eduplace.com/kids/socsci/ca/books/bkf3/writing/06_romarch.pdf [Accessed 27 August 2017].
Nicols, J. (n.d.). Idea of Rome, Idea of Europe. Available at: https://scholarsbank.uoregon.edu/xmlui/bitstream/handle/1794/5054/Nicols_IdeaRomeEurope.pdf?sequence=4 [Accessed 27 August 2017].

Mary Prudence Wells Smith – The Great Match

Baseball has long been characterized as America’s national pastime. Its history shows how the game’s cultivation in society served as a catalyst during the Civil War, and how it shaped American culture and society. Baseball’s roots stem from the early 1800s as a variation of the game of cricket, and by the mid-19th century several versions of the game were being played across the country and beginning to inspire literature. One of the earliest publications about baseball was Mary Prudence Wells Smith’s The Great Match (1877). Its vivid setting detail has led the book to be regarded as one of the greatest baseball books ever written, allowing readers to experience a 19th-century baseball game vicariously.

The setting takes place just after the Civil War in a small, hospitable town named Milltown. While baseball was well established in the New York area before the start of the war, it expanded greatly during it. “The civil war had done much for Milltown” (The Great Match, 3). War inevitably lowers morale, and during long periods of encampment low morale reached its peak. It was then that New Yorkers introduced the game of baseball to their comrades from other northern states. It became so popular that generals asked troops to promote baseball activities in their encampments, since the game promoted good health and kept minds off the war. After the war ended, the soldiers brought baseball back to their homes, and by 1869 the game had been adopted by colleges and become a professional sport that paid its players.

In The Great Match, Grandhurst enters Milltown hoping to find someone with directions to the neighboring town of Dornfield. The personalities of the two towns were drastically different: most inhabitants of Milltown were “as good as anybody,” while Dornfield was more exclusive and cliquish. A group of boys in “base ball costumes,” celebrating an out-of-town victory against Milltown, offered Grandhurst a ride to Dornfield in their stage coach, which he begrudgingly refused. The Dornfield players were sons of the aristocracy and unconcerned with professionalism in the sport; their town lay in fine agricultural land and could only be reached by coach. The Milltown players, on the other hand, were working-class folk who sometimes received compensation for playing; their town sat upon a river and was the center of many manufactories.

The satire contains irony: written by a woman, it places a strong emphasis on manliness. The rivalry between the Dornfield Nine and Milltown was driven by who could assert the most masculinity, and the language of the first couple of chapters conveys the social status of baseball players: “…said Ned Black, extending his hand in a manly way to Dick” (5). Ned Black is the captain of the Dornfield Nine, and this description shows the virility and power conferred by captaining the baseball team, a stereotype that persists in modern society.

The novel also illustrates well the logistics of small-town baseball. Players traveled to games by stage-coach, and Smith, through vivid setting details, provides a vicarious experience for the reader. She describes the attire and the social hierarchy of male and female fans in the stands: seating by age and class distinction, and debates about the use of professional players. Fans argued over play versus competition, and over whether a distinction should be drawn between amateur and professional players. Smith included these sideline debates because the first baseball publications appeared right after the establishment of the major leagues, when many people questioned whether professional play that promoted competition would wreck the fun of the game.

Through the perspective of the protagonist, Molly Milton, the reader can easily understand the attitudes of female fans and their support of the Dornfield Nine. Molly’s father describes baseball as a way for diverse interests to come together for a united cause: “This base-ball business…unites all the diverse interests in the village” (96). While gender inequality was highly prevalent in the 19th century, women knew just as much baseball terminology as other fans. Molly’s father shows that even in 1877, baseball served as the birthplace of early social movements; here, the inclusion of women in these discussions is a stepping stone toward more renowned moments, such as the major-league debut of Jackie Robinson, the first African-American baseball player in MLB.

Mary Prudence Wells Smith, in The Great Match, offers great insight into the social context of the 19th century as interwoven with baseball culture. The grandiose descriptions of the inhabitants of Dornfield and Milltown, such as Ned Black, give readers a clear understanding of the masculine role of baseball players, and of how the sport became America’s national pastime. New Yorkers introduced the sport during the Civil War, and its popularity spread across the country. Baseball is a constantly evolving sport, and Smith’s novel is one example of how it has shaped not only social movements but also literature.

Smart Cars (VW, Mercedes, Ford and Toyota)

INTRODUCTION: – Smart cars have been around since the 1980s, when SMH, the company that produces Swatch watches, decided to create a brand of car in the spirit of its smart accessories. After being rejected by brands such as Volkswagen, Fiat, and BMW, the Smart project was finally accepted by Daimler-Benz, which now holds 51% of the total shares. Later, other brands such as Volkswagen, Fiat, and BMW also started working on smart cars. Smart cars offer features such as excellent gas mileage and greater convenience. They are also safer, using forward-facing sensors (radar, camera, and laser based) along with features such as automatic parking, autopilot, and lane-keep assist, which help reduce collisions and accidents. Smart cars are in demand these days: brands like BMW, Volkswagen, Fiat, Ford, and others are using different and more advanced techniques to make their cars safer and more effective for drivers, and to make roads safer.

VOLKSWAGEN:- Sedric was the first smart car designed by Volkswagen and the first vehicle in the group created to operate without a human driver. The car was made for the convenience of the whole family. Volkswagen provides services such as App-Connect, Guide & Inform, and Security & Service. App-Connect is a new feature that includes Android Auto, Apple CarPlay, and MirrorLink, giving easy access to music, navigation, weather, and many more apps. Guide & Inform is a paid subscription designed especially for city driving; it not only supports navigation but also provides information about traffic alerts, fuel prices, and ski reports, and it flags congested and disrupted areas to save time and make it easier to reach a destination. Security & Service offers features such as vehicle tracking, safety functions, and communication with emergency services in case of an accident, protecting the driver and passengers in a collision. Volkswagen’s new autonomous car has no driver; it is a self-driving, robotic car that will take smart cars into a new generation.

MERCEDES-BENZ:- In January 2007, Mercedes-Benz displayed its first smart car project. The plastic body panels were meant to be easily swappable to suit the driver’s mood, much like the plastic trim on Swatch watches. “Anyone who focuses solely on technology has not yet grasped how autonomous driving will change our society” (Dr. D. Zetsche, 2009). The innovative four-seater is a forerunner of a mobility revolution, and this is immediately apparent from its futuristic appearance. Autonomous driving can be taken for granted once it is accepted by society and the technology is perfectly reliable. As the autonomous car takes over from the driver in situations where driving is not much fun, it adds real quality to time spent on the road. The transitions between organic, synthetic, and metal materials have been designed using matrix graphics. One core theme of the innovative interior concept is a continuous exchange of information between the vehicle, its passengers, and the outside world.

FORD:- In 2015, Ford introduced integration with the Amazon Echo smart home device, through which car owners could turn on their home lights or browse their playlists from the comfort of their Fusions, or switch it up and ask to start the car from inside the house. Ford also uses a voice-activation system that lets you use your phone while your hands stay on the wheel and your eyes on the road: you can place calls, play music, and interact with the navigation system without using your hands. If you do not have your keys with you, you can simply unlock the driver’s door with a touch of the handle. To fire up the ignition, just put your foot on the brake, press the START/STOP button, and you are away.

TOYOTA:- Toyota, BMW, and Chrysler designed a beautiful smart concept for the future of cars, featuring gull-wing doors, unique styling, and a pulsing centre console. Toyota’s concept is clear: make the car safer and more comfortable for the driver as well as the passengers. Interior and exterior graphics are used to share trip details, while lighting sets the ambiance and alerts occupants to things happening around the car, on and off the road. The concept includes driverless autonomy but still integrates a manual mode for those not ready to let the car do the driving. For Toyota, keeping the car warm, friendly, and intuitive in design for its occupants was a priority: consumers shopping for a smart car want to feel comfortable sitting in one, which is why Toyota chose this route. The interior is designed to feel serene, with bright whites and transparency throughout the car so you can see the world and feel the beauty of life. Overall, it feels comfortable and cozy.


What did the Aboriginal Embassy actually achieve?

Since the colonisation of Australia, Aboriginal and Torres Strait Islander people have experienced mistreatment and injustice. During the Indigenous rights activist movement of the 20th century, however, there were many turning points that inspired change within Australia. Although many events led to better treatment of Aboriginal people, the establishment of the Tent Embassy in 1972 holds particular significance. This essay will discuss what the Tent Embassy is, what led to its erection, the aims and outcomes of the movement, and why it is such a significant event.

The Tent Embassy, or Aboriginal Embassy, is an ongoing establishment that advocates for Aboriginal political rights. It initially consisted of a beach umbrella and signs, which were soon replaced by several tents (Briscoe, 2014; Iveson, 2017). Over the years it has been a site of political controversy and has been taken down on several occasions (Iveson, 2017), but it has always been re-erected and still stands today from its final re-establishment in 1992. Although it is not recognised as an official embassy by the government, it was listed on the Australian Register of the National Estate in 1995 as a site symbolising the political struggle of Aboriginal and Torres Strait Islander people (The National Museum of Australia, 2007).

In 1966, a movement began pushing for political recognition of Aboriginal land rights. This would mean not only that Aboriginal people would have recognition of their connection to land and water, as later outlined in the Native Title legislation, but also that they would be compensated for the past dispossession of their land (Foley & Anderson, 2006; Curthoys, 2014). In 1972, five years after the referendum that allowed Aboriginal people to be included in the census, Prime Minister McMahon announced the rejection of the proposed land rights for Aboriginal people (Foley & Anderson, 2006; Foley, Schaap & Howell, 2014). Even more devastatingly, he chose the symbolic date of January 26th to do so: the date known as Australia Day, Invasion Day, or Survival Day (Foley & Anderson, 2006). The statement sparked fury among Aboriginal activists, and within a matter of hours they were on the lawns of Parliament House in Canberra to protest (Foley & Anderson, 2006). Four activists began the protest by erecting an umbrella with a sign declaring themselves the “Aboriginal Embassy” (Foley & Anderson, 2006). This act, and its dubbed title, drew attention to the fact that Aboriginal people were treated as foreigners on their own land (Foley & Anderson, 2006; Curthoys, 2014).

On 5 February 1972, the Aboriginal Embassy formalised the demands of its protest in a five-point plan for Aboriginal land rights (Foley, 2001). The plan called for the Northern Territory to be entirely Aboriginal, with legal title and mining rights; legal title to all existing reserve lands and settlements throughout Australia; the preservation of all sacred sites within Australia; legal title and mining rights to areas around the capital cities of Australia; and compensation starting at a minimum of six billion dollars, plus a percentage for lands that could not be reclaimed (Newfong, 1972; Foley, 2001; Pieris, 2012). On several occasions the embassy was led to believe it had seen victory, as politicians made promises of freehold title and ownership of land, but many of these were empty promises (Foley, 2011; Nicoll, 2014). During his 1972 election campaign, Whitlam announced that his government would grant land rights to Aboriginal people, yet once he became prime minister it was discovered that his promise applied only to Aboriginal people within the Northern Territory (Foley, 2011; Nicoll, 2014). This move was backed by the claim that the Northern Territory was the only territory within Commonwealth jurisdiction, and therefore all other Aboriginal people, governed by state jurisdictions, were left behind (Foley, 2011).

This raises the question: what did the Aboriginal Embassy actually achieve? Despite being only partly successful in achieving its stated aims, the embassy is still seen as a success for bringing the issue of land rights to light and keeping it on the agenda of Australian political parties (Robinson, 1993). It also contributed to the removal of the McMahon government, which began the journey towards the Aboriginal Land Rights (Northern Territory) Act 1976; though this did not fulfil the embassy's aims in their entirety, it was still part of the plan. The erection of the protest itself was a great success for Aboriginal people, and for Australia, as a legal loophole allowed the activists to express their constitutional rights through an indefinite stay on the site. Despite years of cruelty and political resistance, enforced by police, the embassy stood strong (Robinson, 2014).

In many ways, the story of the Aboriginal Embassy is one of success. The embassy drew focus to the failings of the government upon the election of Whitlam (Harris & Waterford, 2014) and created a legacy of both political and historical importance for Aboriginal people. Even though the embassy’s central claim for land rights was only partially fulfilled by the Aboriginal Land Rights (Northern Territory) Act 1976, the length of the embassy’s fight for the return of land, and for Aboriginal self-determination, demonstrates its cardinal significance in the history of contemporary Aboriginal politics (Iveson, 2014; Nicoll, 2014; Watson, 2014). The embassy also established a much-needed sense of power by acting beyond the expected bureaucratic system of negotiation and compromise (Muldoon & Schaap, 2014; Watson, 2014). The activists drew attention to the fact that without land rights, they and all Aboriginal people were made to feel like outsiders, and the establishment of a ‘tent’ embassy highlighted the quality of the living conditions of Aboriginal people all over Australia (Iveson, 2014). It is because of all these acts that the Tent Embassy movement was such a landmark event in Australian and Aboriginal history.

The struggle for Aboriginal land rights is ongoing, even today. Although injustice against Aboriginal people still exists, the Tent Embassy set in motion a series of events that began the journey towards fair treatment of Aboriginal and Torres Strait Islander people, as well as much-needed healing. The demands made by the Aboriginal Embassy, which are still being fought for today, are not unreasonable by any means: they advocate for the right to fulfil the needs of Aboriginal people, in the sense of their connection to land and water, whilst still allowing the preservation of many non-Indigenous built areas. There is still a long way to go in righting past wrongs; however, it is events such as this one that ensure the public is kept aware of the issues Aboriginal people face and struggle with. It is this awareness that inspires movement and change within society, towards a better future and existence for all Australians, both Indigenous and non-Indigenous.

Blasphemous or Brilliant? How ancient rebellion shaped the modern era of religion.

Religious practice did not begin with the birth of Jesus Christ, though many people in modernity share this misconception. It began years before, within ancient civilizations; many of the modern era’s religious traditions, beliefs, and practices derive from antiquity, hundreds of years before the birth of Jesus Christ. Some of these similarities come from leaders and figures of antiquity who rebelled against their society’s religious values. Akhenaten’s “Great Hymn to the Aten” and the Apology of Socrates have obvious superficial differences, yet both show how each great leader’s rebellious thinking shaped modern religion.

Akhenaten reigned over Egypt from around 1353 to 1336 BCE. During his time as pharaoh, he radically changed traditional Egyptian religion. Before his rule, Egyptian religion was largely polytheistic; Akhenaten soon banished the old religious traditions and instituted the first recorded monotheistic religion, believing the sun god Aten was the one and only deity. In his “Great Hymn” he writes, “O sole god, whom there is no other” (Akhenaten 5), stating his belief that there is only one ruling God, against the traditional Egyptian belief in many. Akhenaten also believed that Aten ruled over “the countries of Syria, Nubia, (and) the land of Egypt” (Akhenaten 5). In antiquity, each country and society believed it had its own god or set of gods, but Akhenaten believed his sole god ruled over all. This idea of one god reigning over everyone is identical to the beliefs of the modern Abrahamic faiths. After his death, the succeeding pharaoh condemned him as the “heretic king” whose beliefs and memory needed to be eradicated. Historians, however, have commended Akhenaten’s reforms as the first instance of monotheistic religion, and some have linked aspects of Jesus Christ’s relationship with God to Akhenaten’s relationship with Aten. Abrahamic religions in the modern era practice with the same ideas Akhenaten held thousands of years before Jesus Christ was even born; the religion he initiated, though overthrown after his death, was based on worship of the same Holy Father that all Abrahamic believers follow today.

There is no argument that Socrates was one of the greatest contributors of intellectual development to ever live. Without Socrates, all of history and the modern era would be profoundly different, but many of his ideas at the time were widely viewed as defamation. Ancient Greeks who lived among Socrates worshiped a group of twelve gods that they believed lived on Mount Olympus in Greece. These gods strongly affected their daily lives; they held daily worships, gave offerings constantly, and believed the way to live a fulfilling life was by serving and pleasing the gods. Socrates spent his time challenging thought and questioning ideas rather than worshiping the gods. The Oracle of Delphi went to Socrates and told him he was the wisest man of all, which he did not believe. This made him question the idea of wisdom, and what makes a man wise. Socrates was charged impiety, meaning disregarding the pantheon of Athens. The Athenian government believed that Socrates ignored religious practices and did not seek to honor the gods. Plato writes his rebuttal of the charges in the Apology of Socrates; “but necessity was laid upon me- the word of the divine…(to) find out the meaning of the oracle,” (Plato 17). He believed that he was not guilty of impiety because he was following the word of the Oracle of Delphi. He spent time doing what he believed was his mission from the gods; examining people and convincing his fellow citizens that the most important thing as humans is virtue, and doing what is right. This is a major theme in religions like Christianity; the idea that the way to live is by blindly following God. Christians, among other religions, believe strongly that God has a path for everyone, and Socrates claimed he was trying to follow his. In his apology, he also makes a point to say “this occupation quite absorbs me… (and) I am in utter poverty by reason of my devotion to the divine,” (Plato 17). 
Socrates did not center his life on pleasing the gods in exchange for wealth, as was the tradition in ancient Greece. He believed that wealth was not nearly as valuable as having strong values and a healthy soul. Many modern religions focus on the same idea. The Bible states that “a man’s life does not consist in the abundance of his possessions” (Luke 12:15), an idea Socrates was cultivating in ancient Greece close to 400 years before the New Testament was written. When his trial was over, the court sentenced him to death. Socrates did not purposely defy ancient Greek religion, nor was he ever clear about his true religious stance. He did, however, hold an unorthodox view and practice of religion for his time, and he was sentenced to death because of it.

Akhenaten and Socrates were superficially different in many ways, but they shared common values. Their ideas and practices of religion were seen as blasphemous during their lifetimes but are now customary belief in many modern religions. Akhenaten is the father of monotheism: he is the first person on record to believe that one god rules over the whole universe. After Akhenaten’s death, he was branded a heretic; statues and memorials built of him were torn down as the new pharaoh attempted to erase his legacy.

Akhenaten was the pharaoh and made everyone he ruled over adopt his monotheistic view, while Socrates was more of a lone wolf of his time. Socrates set out on a mission he believed his god had called him to; he did not worship the gods to gain gratification or wealth, but followed his god in order to gain virtue and do what is best. His ideas and actions were punishable by death in his era. Socrates and Akhenaten were both chastised for their defiant thought, yet both of their ideas are apparent in the modern era’s religious practices.

Both Akhenaten and Socrates, however, rebelled against the common religious beliefs of their time, and both were condemned for it. Neither worried about being judged or about the consequences he would face; each instead believed in a stronger, higher power. Separately but together, Akhenaten and Socrates shaped major aspects of modern-day religions.

The Secret History of 9/11 (documentary, reflective)

The Secret History of 9/11 is a documentary that chronicles the events leading up to the terrorist attacks of 9/11 and what actually happened on that fateful day. I viewed the comprehensive film and was intrigued by several of the facts it discussed. After viewing and reflecting on the informational film, I found three things especially interesting: the reasons for the terrorist attacks, the mistakes that prevented the attacks from being stopped or lessened in their severity, and the actions of President Clinton and President George W. Bush in the period surrounding 9/11.

One aspect of the documentary The Secret History of 9/11 that I found interesting was the reasons for the terrorist attacks. The messages those responsible wanted conveyed through the attacks could easily have been communicated without any deaths. In the documentary, the narrator states that Ramzi Yousef, the main person behind the first bombing of the World Trade Center, told The New York Times that “ . . . the bombing was in retaliation for American support of Israel, and oppression of the Palestinian people” (The Secret History of 9/11 5:55). If Yousef and his associates wanted America to stop supporting Israel and to end Palestinian oppression, a violent attack that killed six people and injured over a thousand hardly correlates with what they wanted. Though these terrorists did want to spread fear and chaos, the results barely helped the Palestinians. This is not to say that the terrorists had good intentions; rather, had they considered their options and understood why the United States supported Israel, they could have realized that their attacks would only harden America’s position on Palestine, from their point of view. Because of the 9/11 attacks, the U.S. also strengthened airport security and brought a military presence to Afghanistan, a fact I found intriguing because these results led to Bin Laden’s death and the near end of the Taliban.

Another aspect of the informational film that I found interesting was the set of mistakes that prevented the terrorist attacks of 9/11 from being stopped or lessened in their severity. One surprising example was the fact that the CIA withheld information that, if shared, could have helped prevent the 9/11 attacks. The Central Intelligence Agency did not share the identities of two of the hijackers, Khalid al-Mihdhar and Nawaf al-Hazmi, with the FBI or any other branch of government that could have ordered a warrant for them. In the documentary, it is stated: “When the president was made aware that night that there had been a mistake between the FBI and the CIA involving their sharing of information, he had the same attitude that I [Richard Clarke, Chief of Counter-Terrorism] did, which was outrage” (1:21:49). If the CIA had had the presence of mind to inform the FBI about the two terrorist suspects, who were let into the U.S. with incomplete identification, the attacks on 9/11 might have been prevented. Though these agencies are known for their efficiency, the information was revealed too late. In addition, the fact that the phone lines were down and the president could not reach the White House on 9/11 intrigued me. It is even noted in the documentary that those lines had always been secure and open before that point, but even with a borrowed cell phone, Bush could not reach Washington to relay orders concerning the terrorist attacks.

The final aspect of The Secret History of 9/11 that I thought was interesting was the conduct of President Clinton and President George W. Bush in the period surrounding 9/11. In both presidencies, the commander in chief acted too late, or not at all, on the growing threat of al-Qaeda, the capture or killing of Bin Laden, and the numerous warnings and advice from the Chief of Counterterrorism, Richard Clarke. In the documentary, it is stated that, after George W. Bush was sworn in as president, former president Bill Clinton told him that “ . . . not catching or killing Bin Laden was one of the greatest regrets of his presidency” (33:45). Though the former president warned Bush about al-Qaeda and its leader, “ . . . for the Bush administration, al-Qaeda was a low priority” (35:02). Richard Clarke then advised the president to take further action against the terrorists, but no great action was taken until after September 11th. This was of special interest to me because it is relatable to today’s threat of ISIS: former president Barack Obama failed to end that threat, so it must be dealt with by President Trump. It remains to be seen whether, as with the Bush administration, action will be delayed until it is too late. It was also interesting that Bush stayed at the Florida school he was visiting on 9/11 even after he received word of the attacks. Had he not delayed, it is conceivable that more immediate action could have been taken to stop the third hijacked plane from crashing into the Pentagon. Though he later said that he was trying to project an image of calm, that image only used up time that could have been spent taking measures to stop the planes.

The three details I found thought-provoking in the documentary film The Secret History of 9/11 were the intent behind the bombings, the miscalculations that led up to 9/11, and the measures taken by the two successive presidents during the period the film covers. From the first bombing of the World Trade Center to the end of the Bush administration after 9/11, the entire film is detailed and moving. Indeed, this documentary was informative, interesting, and provided the viewer with a near-complete view of the secret history of 9/11.

Works Cited

cjnewson88. The Secret History of 9/11. YouTube, 12 Mar. 2013, www.youtube.com/watch?v=MVh9WgGxuIY. Accessed 19 Sept. 2017.

Is race biological or social?

Race is so seemingly simple, yet so intricate and personal too. Humans of seemingly every time period and genetic descent have fought over race, but is there any reason to? This question has been discussed by anthropologists for ages, and they have quite a few different answers. Some subscribe to the idea that “Racial experience is real, and human biological diversity is real” (Torres Colón, G. A., 2015). Others disagree, saying that “race is real, but the authority science has been assigned as it pertains to race has been misappropriated” (Simon). And all the while, some admit that race exists but has limits and is “only skin deep” (Cassata). There are many more ideas on race as an anthropological subject, but the central argument lies in the question of its construction: biological or social?

Jada Torres and Gabriel Colon of Wayne State University authored a peer-reviewed article on racial experience and biological diversity, concluding that race is not a biological construct but a social one, and that it nonetheless has real social consequences. They went even further, arguing that the more pressing issue is that race is a “cultural concept that contains implicit and explicit understandings of how collective bodies differ.” I agree with their theory: if you think of race as an experience rather than a box you are put into at birth, many problems relating to racial injustice can be avoided.

The author of the second article, Simon Mashuan, is a reporter for NBC News. It is important to remember that NBC News is a for-profit organization, meaning that every story it publishes carries a particular set of values, motives, and agenda. This article is not a peer-reviewed journal piece like the first, which means its contents cannot be taken as scientifically settled; however, Mashuan based his reporting on an interview with University of North Carolina anthropologist John Marks. The main point Marks conveys is that he views race as a bio-cultural construct: there is much more to race than simply the color of your skin. Marks goes on to compare the relationship a person in East Africa may have to a person in West Africa, saying, “An East African is more closely related to a West Asian than to a West African; geographic distance is the main determinant of similarity in the human species” (Marks, 2017). Simply because two people may have ancestors from Africa does not mean that they descended from the same population. Marks describes anthropologists’ work as “an unusual burden as the custodians of the scientific narratives of who we are and where we come from”; he adds that those narratives have cultural power and people feel they own them, which is why he studies race.

The final article comes from an author who specializes in health and wellness at a magazine called Healthline. Cathy Cassata, the author, claims that race truly is only skin deep, making it a social construct. Cassata states that racial categories account for only six percent of human variation and diversity. She goes further to explain that race in humans is not organized into discrete boxes but is instead continuous; clines provide the evidence for that claim. A cline is a gradient that can best be visualized by picturing the temperature gradient of the earth: at the top and bottom are the cold poles, while toward the equator the temperatures are much hotter and more tropical. Human variation can be viewed on clines too. Cassata finishes her point by pointing to the fact that

“Even in our evolutionary past, our earliest modern human ancestors in Europe and parts of Asia were exchanging genes with related human populations that existed at the same time. Mixing of genes and gene flow and spread of genes and population expansion is something that is literally as old as human history itself” (William R. Leonard, PhD Northwestern University)

What Leonard is saying here is that humans have been interbreeding across populations for as long as human history itself; there is no such thing as a pure genetic makeup. Leonard concludes that race is, in fact, only skin deep, making it a social construct.

There are many other arguments about how we can categorize race, but many anthropologists subscribe to the belief that race is strictly a social construct, one that nonetheless bears social consequences.

Product quality and sales service quality – PNJ stores

The aim of this study was to identify the relationship between customer satisfaction and the service quality and product quality of PNJ stores in Ho Chi Minh City (HCMC). Beyond that, it examines the impact of four factors (tangibles, assurance, empathy, and price), as well as the factors of service quality and product quality, on customer satisfaction. The empirical findings of this thesis provide a general view of customers’ assessments of the product quality and service quality of PNJ stores in HCMC, so that appropriate adjustments and effective improvements can be made to run a good business.

Based on the results of the path analysis, which explored the direct and indirect effects of four independent and two dependent variables on customer satisfaction, this study argues that in order to achieve high customer satisfaction, the PNJ company should maintain a high level of service quality and product quality, better tangibles in stores, better assurance and empathy toward customers, and better prices. Moreover, the findings indicate that the factor of empathy plays a crucial role in customer satisfaction.
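To make the idea of direct and indirect effects concrete, here is a minimal sketch of a path (mediation) analysis on simulated data. The variable names, the single-mediator structure, and the effect sizes are all assumptions for illustration only; the thesis's actual model involves more factors than this toy example.

```python
import numpy as np

# Toy path model (names and effect sizes are illustrative assumptions):
#   empathy -> service_quality -> satisfaction   (indirect path a*b)
#   empathy -> satisfaction                      (direct path c')
rng = np.random.default_rng(0)
n = 200
empathy = rng.normal(size=n)
service_quality = 0.6 * empathy + rng.normal(scale=0.5, size=n)
satisfaction = 0.5 * service_quality + 0.3 * empathy + rng.normal(scale=0.5, size=n)

def ols(predictors, y):
    """Least-squares slopes of y on the given predictors (intercept included, then dropped)."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[1:]

a = ols([empathy], service_quality)[0]                     # path a
b, direct = ols([service_quality, empathy], satisfaction)  # paths b and c'
indirect = a * b
total = direct + indirect
print(f"direct={direct:.2f}, indirect={indirect:.2f}, total={total:.2f}")
```

Because ordinary least squares satisfies the mediation decomposition exactly, `total` equals the simple slope of satisfaction on empathy alone; this is how a path analysis splits an overall association into direct and mediated parts.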

In conclusion, drawing on the experience of conducting this research, the thesis points out the limitations of the study and provides helpful recommendations for further work. Based on the research findings, it also offers recommendations to the PNJ company for making adjustments and improving its level of service quality, product quality, and customer satisfaction.

CHAPTER I

Introduction

1.1 Background

Vietnam's jewelry industry is one of the developing sectors of the Vietnamese economy, yet most enterprises in the field lack an organized system. There are more than 12,000 business enterprises in the industry; most are small jewelry stores, and the rest are a number of organized retail businesses. The domestic jewelry market is developing in all segments, including gold, platinum, silver, and precious stones, with gold the preferred choice of most customers. According to the Sacombank-SBS updating report for the first quarter of 2013, from 2005 to 2011 the value of gold jewelry in Vietnam grew at a compound annual rate of 6-8%, increasing from 399 million USD to 634 million USD. In 2011, the demand for gold jewelry in Vietnam increased by 14% in value, to 634 million USD, accounting for 13% of total gold demand, even as the jewelry market decreased 10% in volume to 13 tonnes. The increase in gold prices lifted value by 14% even though production fell by 10% in the same period.
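The growth figures above can be sanity-checked with a quick compound-annual-growth-rate calculation (treating 2005-2011 as a six-year span is an assumption of this check):

```python
# Values from the Sacombank-SBS report cited above (million USD)
start_value = 399.0   # 2005
end_value = 634.0     # 2011
years = 6             # assumed span, 2005-2011

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"CAGR = {cagr:.1%}")  # prints CAGR = 8.0%, at the top of the cited 6-8% range
```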

In the first quarter of 2012, consumption of gold jewelry fell by 9% to 5 tonnes; the domestic gold price rose from the previous year, which reduced demand because customers were also facing high inflation. Still, gold jewelry demand in Vietnam in Q1 2012 showed positive changes in value, at 269 million USD (up 9.8%), against mass consumption of 5 tonnes (down 9%).

In the future, the growth rate of the Vietnamese jewelry market is expected to be positive, driven by increases in domestic demand and income. As a result of this trend, customers with higher incomes will have higher requirements. For the jewelry products industry, this is an opportunity as well as a challenge. Now that more and more enterprises are entering this business with products of competitive quality and price, how can established enterprises attract new customers and retain old ones? They should not only concentrate on improving product quality and design; they should also attach special importance to enhancing the service quality in their stores.

A commonly implemented method for achieving this is to research customer satisfaction with the product quality and sales service quality of the distribution system. Through such studies, companies gain more knowledge about customers' desires and evaluations, so they can find ways to improve the quality of products as well as service, satisfy their customers in the best ways, and thereby build customer loyalty.

1.2 Problem statement

Many researchers have looked into the importance of customer satisfaction. Kotler (2000) defined satisfaction as: “a person’s feelings of pleasure or disappointment resulting from comparing a product’s perceived performance (or outcome) in relation to his or her expectations”. Hoyer and MacInnis (2001) said that satisfaction can be associated with feelings of acceptance, happiness, relief, excitement, and delight.

In addition, as a strong brand with high reputation and credibility, Phu Nhuan Jewelry Joint Stock Company (PNJ) is considered one of the enterprises contributing most significantly to the growth of the jewelry industry in Vietnam. It can be seen as the market leader among domestic jewelry companies, with a 20% share of the gold jewelry market and up to a 70% share of the silver jewelry market in Vietnam. Hence, to keep its leading position in the Vietnamese jewelry industry, research on customer satisfaction with product quality and sales service quality is a necessary step toward attracting new customers and retaining existing ones.

Thus, I would like to write my thesis on 'Evaluating customer satisfaction with the product quality and sales service quality of PNJ stores in Ho Chi Minh City.'

1.3  Introduction of Phu Nhuan Jewelry Joint Stock Company

Phu Nhuan Jewelry Joint Stock Company is a firm operating in the fields of diamond and gemstone verification services; the manufacture and trading of gold, silver, and gemstone jewelry and gold bullion; and house renting in accordance with real estate trading law.

1.3.1 Foundation and development history

On April 28th, 1988, the Phu Nhuan Jewelry Trading Store was founded with an investment of only VND 14 million (equivalent to about 9.0 ounces of gold at the time) and its first 20 employees.

In 1990, this founding store became the Phu Nhuan Jewelry, Fine Arts and Currency Exchange Company, under the direct control of the Financial Administration of the Ho Chi Minh City Committee. The Phuong Hoang gold bar was also launched then.

In 1992, the company was renamed the Phu Nhuan Jewelry Joint Stock Company. This stage witnessed great changes, with bold investment in an Italian production line. In the same year, the company also co-founded Dong A Bank and formed a joint venture with the Phu Nhuan House Trading and Development Company.

Mutually antagonistic nationalism between China and Japan

Rising mutually antagonistic nationalism causes China and Japan to hold contrasting perspectives and thus to view the current issues between them through different lenses. This will continue to undermine bilateral relations and the possibility of future collaboration, owing to the nationalistic pressure exerted by each country's people. Historical events like the Second Sino-Japanese War have caused animosity and mistrust between the two countries, which was exacerbated in 2001 by the politically driven visits of Japanese Prime Minister Junichiro Koizumi to the Yasukuni Shrine despite China's constant disapproval, causing bilateral relations to deteriorate greatly. Events like this reopen historical war wounds and thus deter the development of Sino-Japanese relations. Other factors, such as territorial disputes and assertive actions, are also significant obstacles to improving Sino-Japanese relations. Yet territorial disputes, historical differences, and assertive actions all stem from the same source: the mutually antagonistic nationalism each country harbors toward the other. Furthermore, the effects of mutually antagonistic nationalism greatly magnify the problems posed by these current issues. Thus, mutually antagonistic nationalism has been the key obstacle to improving Sino-Japanese relations since 2001.

Overcoming historical differences holds great significance, as they are a latent problem for the development of Sino-Japanese relations. Japan's stance is that for the relationship to improve, China should not dwell on the past in which Japan was the aggressor, as seen in Japan's protest of China's application to include the Nanking Massacre and the "comfort women" in the UNESCO programme. China, on the contrary, wants a sincere apology and for Japan to take full responsibility for past war crimes against China, as seen in its repeated reminders to Japan that without these the relationship will remain stagnant. This has limited the development of their ties, since neither country wants to be the first to reconcile; each holds a different viewpoint on who is in the wrong. This is further illustrated by the Japanese government's authorization of the Atarashii Rekishi Kyokasho, a revisionist history textbook, which led to numerous anti-Japanese mass incidents in 2005, showing how differing perceptions of their shared history can worsen relations between the Chinese and Japanese. The peoples' displeasure over their difficult past takes the form of these mass incidents, which greatly erode opportunities to build a better relationship between the two countries. Hence, overcoming historical differences will aid the relationship between China and Japan.

Furthermore, historical issues have also eroded relations between political leaders, who use differing perceptions of history as a tool to garner support for their own political expediency. Koizumi's visit to the Yasukuni Shrine on 13 August 2001 damaged his relationship with China because the shrine contains the remains of fourteen Class A Japanese war criminals. On top of that, the Japanese premier went on to visit the shrine every year of his incumbency. Yet his visits benefited Koizumi politically, as seen in his landslide victory in the 2001 elections. Political leaders are thus able to capitalize on differences over shared history to gain political support from people with strong nationalistic sentiments. Premier Li Keqiang's firm warning on 4 March 2014 that China would not allow anybody to "reverse the course of history" shows the severity of the issues raised by differing interpretations of shared history and the erosion of relations not only between the countries' peoples but also between their political leaders. Thus, resolving the issue of historical differences would benefit Sino-Japanese relations.

However, the evidence and scholars positing that historical difference is the primary obstacle to Sino-Japanese relations have failed to look deeply into the problems surrounding historical difference, owing to its complexity. The governments of both countries must weigh nationalistic pressure from their citizens to preserve their political expediency, which has deterred further improvement of relations. According to Gries, relations between China and Japan have deteriorated further because political leaders use differing perceptions of shared history to brew nationalistic animosity and gain political support. Thus, the problems caused by differing viewpoints arise chiefly because political leaders wield the strong mutually antagonistic nationalism as a tool for political support. It is also the instability, such as the mass incidents caused by each country's people out of mutually antagonistic nationalism, that has further dampened efforts to develop Sino-Japanese relations. The effects of historical difference have therefore been amplified by the citizens' mutually antagonistic nationalism. According to He, the differing interpretations of their common history have acted to amplify the friction manifested through nationalistic sentiments; the problem between the peoples' differing viewpoints has grown only because of their mutually antagonistic nationalism. Furthermore, the Chinese people's impression that Japan lacks remorse for past aggression, such as the Nanking Massacre, has fueled anti-Japanese sentiment that pressures the Chinese government to dwell on historical scars, deterring the improvement of Sino-Japanese relations. Similarly, the Japanese government experiences pressure from its anti-Chinese citizens not to engage in any conciliatory measures over past incidents. This shows the significance of mutually antagonistic nationalism in limiting each government's ability to reconcile and build new ties with the other. The root cause of the problems arising from differing views of shared history is thus the nationalistic sentiment of both countries, which acts as a catalyst that greatly amplifies the problems of historical difference. Therefore, nationalistic pressure from the citizens is the foremost consideration in improving Sino-Japanese relations.

Meanwhile, territorial disputes are another paramount issue that will undermine the further development of bilateral ties between China and Japan, as tensions are rising mostly over the East China Sea. In November 2004, a Chinese submarine was detected in Japan's Exclusive Economic Zone, an incident that came amid the dispatch of Chinese drilling teams searching for oil and gas deposits in Japan's Exclusive Economic Zone. In response to these incidents, China was formally declared a security concern for Japan in its December 2004 National Defense Program Outline. According to Yuan, neither China nor Japan is willing to give way on the competing claims, given their geo-economic importance, leading to a build-up of tensions. Smith supports this view by emphasizing both governments' attempts to establish sovereignty over the islands through diplomatic countermeasures, militarized threats, and economic sanctions, none of which supports the development of relations. The Senkaku Islands lie close to important shipping lanes, offer rich fishing grounds, and sit near potential oil and gas reserves; they hold great importance for both countries because these resources and shipping routes would bring great economic benefits to whichever country controls them. The increased military presence, such as China's Air Defence Identification Zone (ADIZ), which challenges Japan's sovereignty and control over the islands, has raised the possibility of armed conflict and naval battles, threatening relations between the countries. Economic ties are also at stake, as economic sanctions are imposed over breaches of sovereignty rights; this is evident in China's trade embargo on rare earth materials to Japan after the 2010 confrontations. The disputes have affected not only imports and exports but also other sectors such as tourism: after Japan's nationalization of the islands in 2012, 50,000 Chinese tourists cancelled their trips to Japan. Thus, the fight over the Senkaku Islands has damaged economic ties between the two countries, eroding the possibility of further collaboration.

Besides economic problems, political relations have also been eroded by the persistent territorial disputes, as seen in China's harsh response to Japan's arrest of a fishing boat's crew, which included the detainment of four Japanese citizens for entering a restricted military area and the disruption of bilateral negotiations. Furthermore, Chinese Premier Wen Jiabao delivered a speech emphasizing that China "will never budge, even half an inch, over sovereignty and territorial issues." These events are signs of the limited cooperation and poor communication in resolving territorial disputes, further eroding relations between China and Japan. Territorial disputes therefore seem like the main cause of the stagnation of ties: there is limited space for negotiation on either side, so reaching a mutually agreeable resolution is almost impossible.

On closer inspection, however, we should not dismiss the role that opposing nationalisms play, for each government's persistence on territorial disputes is built on nationalism and national pride; in the case of the Senkaku Islands, cooperation would be a better approach to reaping economic benefits than fighting for full sovereignty rights. As Zhang states, the governments of both countries take a harsher stance because of nationalistic public sentiment, since soft measures would cost the ruling party support and threaten its legitimacy. This is evident in both countries' hard-line stances on the territorial dispute, such as the heightened emphasis on sovereignty over the Senkaku Islands under nationalistic pressure. It shows how nationalistic pressure has amplified and escalated the fight over sovereignty of the islands. Provoked by an angry Chinese public openly showing its disapproval of Japan by destroying Japanese factories and rioting, the Chinese government had to introduce a naval presence in the area, further escalating tensions. Both governments fear anti-government sentiment over any failure to protect the country's territorial sovereignty, and would rather create tension and disrupt the development of ties than erode their own political stability. With each government's main priority being the protection of its legitimacy, which is the foundation of most of the Chinese government's policies, nationalistic pressure greatly influences decisions on territorial disputes and is the main obstacle to the development of Sino-Japanese relations.

Assertive actions taken to prove and strengthen political power create hostility between the two countries and thus erode bilateral relations. Both China and Japan are known as the great powers of Asia, so there has always been an underlying dispute over which is the stronger of the two. As China's power capabilities have risen, it has naturally asserted more security-based interests in East Asia; as Beijing pressed its position, the Japanese government tried to resist it. This is evident in Chinese policies toward Japan between 2001 and 2007, which reflect China's growing military capabilities and its expanding power and influence over Asia and the world, at the expense of the strength and influence of its rival, Japan. This is further supported by the change in Chinese policy at the beginning of 2008, when China became more assertive, leading to the deterioration of bilateral ties; assertive actions embodied in China's policies toward Japan have thus been a major deterrent to Sino-Japanese ties. Further evidence of China reducing Japan's influence and power can be seen in Premier Wen Jiabao's implication that China opposed Japan's candidacy for a permanent seat on the UN Security Council. Japan, in turn, expressed its worries about China's assertive actions in the decision to mention the Taiwan issue in the joint statement concluding the U.S.-Japan consultations in February 2005. The rise in assertive actions has made the two countries increasingly wary of each other, which provides no good foundation for building Sino-Japanese relations. Therefore, to aid the further development of their relations, both governments must lessen their assertive actions toward each other, for such actions create animosity and hostility between the two countries.

However, mutually antagonistic nationalism is the main source of both the increased assertiveness and the hostility between the two countries, driven by each people's desire to be better off than the other. As stated before, the Chinese and Japanese have engaged in competition with each other, which has significantly eroded the possibility of good Sino-Japanese relations because of the hostility between the two peoples. This can be seen in August 2004, when angry Chinese soccer fans rioted on a full scale following the Japanese team's victory over China in the finals. This shows the strong mutually antagonistic nationalism that the Chinese hold against the Japanese, fuelling tension and hostility between the countries. Furthermore, each government's need to assert its dominance and strength is also fuelled by the nationalistic sentiments of its citizens. The development of ties weighs heavily on the mutual perception of the people, as nationalistic sentiment leads both states to adopt harsher approaches and attitudes in their foreign policies. As mutually antagonistic nationalism grows between the two peoples, each government is forced to take a harsher stance and assert its dominance over the other country. Thus, assertive actions between the countries serve to satisfy the people's mutually antagonistic sentiments, increasing hostility and eroding the further development of their ties.

While a huge array of problems may undermine long-term relations between Japan and China, they all stem from the same root cause: mutually antagonistic nationalism. Even though, on the surface, historical differences, assertive actions, and territorial disputes appear to be the most prominent problems, the underlying problem of mutually antagonistic nationalism has a greater impact on the ability to further develop Sino-Japanese relations; resolving it would address the root cause and thereby eradicate the effects of the other problems. Furthermore, opposing nationalism also amplifies the negative impacts of the current problems and has thus been a major contributing force to the worsening relations of Japan and China. All in all, the primary issue that will affect and shape bilateral relations is mutually antagonistic nationalism, which has been the key obstacle to improving Sino-Japanese relations.

Day of the dead: college application essay help

Day of the Dead is a holiday closely related to Halloween and All Saints'/All Souls' Day. It is celebrated from October 31st to November 2nd in Mexico and in some places in the United States. Día de los Muertos is celebrated most intensely in the states from Mexico City south, including Michoacán, Mexico City, Puebla, Oaxaca, Veracruz, Guerrero, Guanajuato, Chiapas, and the Yucatán. Northern Mexico is not as celebratory, at least not the way the South is. The people of the northern part of Mexico can be seen going to mass and visiting grave sites, while the people of the South can be seen building ofrendas, throwing wild parties, and leaving out offerings for their ancestors and family members who have passed on. Interestingly enough, Latin America and the Latino parts of Los Angeles, California also take part in the festivities of the Day of the Dead.

What is an ofrenda, you may ask? An ofrenda is a huge part of the Día de los Muertos celebration. Ofrenda means offering in Spanish, and they are also called altares, or altars in English. The ofrendas are not to be worshipped, though; most of the Mexicans celebrating this holiday are of the Catholic faith. The ofrendas are created to honor the memory of their dead relatives. Ofrendas are complex and time consuming to set up; however, the effect of a finished one is wonderful. Ofrendas can consist of many layers: there is usually a crucifix on the top level, and a lit candle is set out for each deceased relative. Flowers, salt and water, incense (or copal), sugar skulls, and tons of food are also set out on the ofrenda.

On November 2nd, when the adult spirits are said to come down to earth, people bring their celebrations to the cemeteries and grave sites. People clean tombs, leave flowers, play cards, listen to music, and remember their loved ones. Some also drink tequila and sing along with the mariachi bands.

Food plays a huge part in every culture; we Americans have our apple pie and hamburgers, but what does Mexico have? For Día de los Muertos, there are several specialty foods. According to ocweekly.com, pan de muerto is probably the most recognizable food of the celebration: “The most common culinary representation of the Day of the Dead is an eggy, brioche-like bread, often topped with sugar.” Pan de muerto is often accented with skulls, crossbones, and other shapes. Mole is also a big food of the Día de los Muertos celebrations; making mole is a huge undertaking, as it can have anywhere from 20 to 50 ingredients. Foods such as tamales, atole, and candied pumpkin can also be seen in the Day of the Dead festivities.

Music is self-expression, but you may be surprised to hear what music is played on the Day of the Dead, and you may be surprised to learn that music in different cultures isn’t all that different after all. For example, in 2015 one of the most played radio songs was the salsa version of “Thriller,” as a tribute to Michael Jackson. Another is “La Llorona” by Chavela Vargas. And according to Billboard.com, “No es serio este cementerio” by Mecano is a popular ’80s Spanish pop song that can turn a graveyard visit into a dance party. Shakira’s “She Wolf” is a favorite in the US, Mexico, and other Hispanic countries.

Marigolds are a key element of Día de los Muertos. Marigolds have a long history in Mexico, where they grow natively; the Aztecs used these hardy flowers for herbal medicine and decoration long before the Spanish arrived. Now, marigolds are placed all around ofrendas to guide the visiting spirits with their bright colors and strong scent. Marigolds also represent how fragile life is. They are known as cempasúchiles, or flowers of the dead, which is definitely appropriate for the Day of the Dead.

José Guadalupe Posada has heavily influenced today’s Day of the Dead artists. Posada was born in 1852 in the Mexican town of Aguascalientes and started studying art at the age of 18; skip a few years ahead and we find him doing print after print and painting after painting. In his lifetime he created over 20,000 images; however, he died an impoverished man in 1913. His most famous works include Calavera: Guerra Mundial, or Skeleton: World War in English, and La Calavera Catrina, or the Dapper Skeleton.

Day of the Dead and Halloween share many characteristics. Both are celebrated in the US and in Mexico: Day of the Dead is celebrated in Los Angeles, and Halloween is celebrated pretty much everywhere in the US, while Halloween has picked up popularity in Mexico in the past 40 years. They are also similar in their lavish celebrations and decorations, and both have roots in European Catholic observances (though Day of the Dead draws on indigenous Mexican traditions as well). They are different because they have different mentalities and roles. Halloween is a holiday for children to dress up, have fun, and get free candy, while adults dress up, go to parties, and hand out candy. Day of the Dead, as mentioned earlier, is for honoring dead ancestors with ofrendas, food, and grave visits. Although they are similar in some ways, they are also very different. Every culture has its own celebrations, and it’s clear that Día de los Muertos is a very lavish and unique holiday that would be very cool to see in person.

Sigmund Freud – career, theories, legacy

Sigmund Freud was born in 1856 in what is now the Czech Republic. He was an Austrian psychologist known for developing the techniques and theories of psychoanalysis, a method through which analysts can unpack unconscious conflicts based on the fantasies, dreams, and free associations of their patients (Freud 23). Some of his most influential academic contributions include the topics of the libido and child sexuality, among others that he developed in the 20th century.

Freud’s father, Jakob, was a wool trader who had been married to another wife before he wed Freud’s mother (Jay 1). The father, who was 40 years old when Freud was born, can be described as remote and authoritarian, while the mother was emotionally present.

Although Freud had other brothers, he was not closely attached to them; he was more attached to his nephew John, who provided the intimate friend and hated rival that reproduced themselves in later stages of Freud’s life (Jay 1). For example, the sensitivity to paternal authority that he later explored in his theories and work was mainly stimulated by the decline in power that his father suffered in his generation, a decline shared by the liberal rationalists who lived in the Habsburg Empire. It is also believed that his interest in the theme of the seduction of daughters was rooted in the complicated attitudes that the Viennese had towards female sexuality.

When Freud was four years old, his family relocated to Vienna, where he lived and worked for most of his life. He studied medicine at the University of Vienna and received his medical degree in 1881. After graduation, he worked as a doctor at the Vienna General Hospital. He got engaged, and his marriage produced six children, of whom his last born, Anna, became a distinguished psychoanalyst.

In his career, Freud viewed himself first as a scientist and not a doctor. Many people thought of him as a doctor, but he spent most of his time on science and research, endeavoring to advance the understanding of human knowledge and the experience humans go through in their development. The family's move from Freiberg was mainly for economic reasons. Despite the dislike that Freud had for the imperial city, he was forced to become part of it. It is also from the city that most of the thoughts and arguments behind the theories he developed later in life emerged, encouraged mainly by the political and social situation there.

His career and development of theories

Early in his career, he was mainly influenced by the discoveries and work of his friend Josef Breuer. Breuer had discovered that by encouraging a hysterical patient to talk about the earliest occurrences of her symptoms, the symptoms sometimes abated. This discovery encouraged Freud, and he posited that the neuroses of patients in these situations had their origins in traumatic events and experiences the patients had gone through. As part of the treatment, he argued that the patient could be empowered to recall the experience and bring it to awareness. In doing so, the patient could confront it both emotionally and intellectually, release it, and be rid of the neurotic symptoms. The findings and theories the two friends developed were first published in the book Studies on Hysteria in 1895.

The relationship between Breuer and Freud ended when Breuer felt that Freud placed too much emphasis on the sexual origins of patients’ neuroses and was not willing to look at other possible factors. Freud was not willing at the time to welcome other viewpoints and suggestions from Breuer; he decided to focus on this point of argument and went ahead to refine his own position. Many of his contemporaries felt that his emphasis on sexuality was either overplayed or scandalous, just as Breuer had. He received an invitation in 1909 to give several lectures in the United States. After the visit, he analyzed his theories further and in 1916 published another book, ‘Five Lectures on Psycho-Analysis’. His fame grew exponentially from the arguments he made in this book.

In 1885, Freud went to Paris as a student to study neurology, and it was there that he met the neurologist Jean Charcot. The 19 weeks he spent in the French capital greatly contributed to the development of his career and opened up ways through which he explained his theories. The time he spent with Charcot gave him a lead into his works and some of his theories: he realized during this period that the psychological disorders patients were undergoing might have their origin in the mind, and he decided to pursue this further. He was also inspired by the practices and knowledge of the neurologist, and when he returned to Vienna the following year, he set up his private practice, in which he focused and specialized on brain and nervous disorders. In his practice, he developed a theory that humans have an unconscious in which aggressive and sexual instincts are in continuous conflict with the defenses against them. He began an analysis of himself in 1897 and produced a major work in 1900, “The Interpretation of Dreams,” in which he examined dreams based on experiences and unconscious needs (Freud 41).

He was appointed Professor of Neuropathology at the University of Vienna in 1902 and held this post until 1938. Although the medical establishment disagreed with many of the theories he developed during this time and later, students and followers began to gather and look keenly at some of his arguments, comparing them to the medical knowledge they had studied or researched. He worked with some of these students until the establishment of the International Psychoanalytical Association in 1910, whose first president was his close friend Carl Jung. Jung later broke with him to develop his own philosophies.

After the First World War, Freud did not spend much of his time on clinical observation and instead devoted most of his time to applying the theories he had developed to history, literature, art, and anthropology. In 1923, he did more research on his theories and published the book “The Ego and the Id” (Freud 78). The main idea of the book was the suggestion of a new structural model of the mind, dividing it into the “id,” the “ego,” and the “superego.”

Lasting legacy

Freud has remained an icon in the world of psychology and medicine. Many of the theories he developed, including those on ‘psychic energy’, were no doubt influenced by other scientific discoveries of the time. One of the works that influenced his thinking was that of Charles Darwin, whose understanding of humankind as a progressive element of the animal kingdom informed Freud’s investigation of human behavior. Additionally, the principle formulated by Helmholtz, which states that the energy in any given system is constant, was used by Freud in studying the human mind. At first, however, he was uncertain about the exact status of the sexual element in his conception of the mind. Freud’s work has been criticized, but there is still no person in history who has influenced the science of psychology as intensely as he did.

Although Freud contributed extensively to the understanding of human psychology, there have been many controversies over some of his publications. For example, the Nazis openly burned a number of his books in 1933, and the Nazi rise marked the beginning of his end. In 1938, shortly after the Nazis occupied Austria, he left for London together with his wife and his daughter Anna. He had been diagnosed with cancer of the jaw and went through thirty operations. He died of cancer in 1939.

Juliet Eilperin – Demon Fish: Travels Through the Hidden World of Sharks: college admission essay help

In the introduction, the book gives a description of sharks and what their environment is like. The author, Juliet Eilperin, talks about how amazing sharks are and states that they operate in another world. She also talks about how sharks move and how they interact with one another, and she compares and contrasts the wide variation of shark species and their interactions. Being in the water for the first time is scary enough, let alone with sharks. Eilperin was in an environment filled with sharks fifty miles off the coast of Florida, on an island called Bimini. Everybody was very scared for her and told her to be extra careful and aware of every little thing; compared to a car going seventy-five miles per hour, this situation was far worse. All the adrenaline flowing through her blood came from the fear of possibly being eaten by a shark. She was not alone in this risky situation, though: she took this journey as a journalist alongside other brave scientists. As the others took a chance and hopped into the danger, she thought about things that would make her less frightened, convincing herself that since she was very slim, the sharks would not want to eat just bones and would devour the others first, giving her time to get away. When she actually went into the water, it was not as frightening as she had thought it would be, and she got to spot a plethora of ocean species. Eilperin also mentions how sharks hunt. Sharks were once respected and known for their power and destruction, which caused them to be seen as gods. In the past, sharks seemed to be a threat to the human population, but in the present we seem to be more of a threat to them. Because the human race has explored roughly seventy-five percent of the ocean, sharks have been forced to migrate to other coasts, for example the coasts of Hawaii and California.
Technology in tracking devices has made it tremendously easy to track any living being. Scientists tag sharks with trackers as well as microphones and cameras; by doing this, we can receive the most accurate information about shark species. Sharks are praised for their uniqueness because of their adaptive skills as well as their buoyancy, and we have access to details down to their denticles, which give them their smooth speed. One of the top three places to see sharks is Tembin. Eilperin and an acquaintance learn about Tembin culture: how much the shark impacts the culture, the meaning of the shark, and how it is essential to both physical and spiritual survival. To get to Tembin, the scientists have to cross a swampy terrain because the bridge to Tembin was essentially destroyed. During their trip, the team encounters a shark caller named Karasimbe, a greatly respected man and a leader who continues to pursue this centuries-old culture. Another shark caller, called Kiput, who lived to about 93 years old and died in 2003, was also greatly respected by the local people of Tembin and provided much guidance, purpose, and hope. Sharks have a great history and great meaning in the villages that are brave enough to attempt to catch them with their bare hands. Although this culture seems dangerous, young men of the Tembin villages sell the sharks they catch to big companies and big cities to make more money. In Tembin today, they seem to care more about gaining a profit than about the fishing itself. Elders have started to worry about this problem a lot more because the young are not preserving their own culture, which has caused much conflict between making money and preserving culture. With that being said, Karasimbe is convinced that he will save Tembin and its precious culture because he has the ability to save everything.

Chapter Two Summary:

Chapter two starts off by informing me about the age of sharks. Everybody assumes that dinosaurs must be the oldest creatures ever to live on this earth, but little do they know that sharks are about two hundred million years older. Montana is one place where shark fossils have been discovered. Aristotle, the ancient Greek philosopher, conducted research on sharks long ago and discovered knowledge and names that we still use to this day. Sharks were originally called dogfish by the ancient Roman people. Islamic people were truly amazed when they found out about the dangers in the Tigris river. The human race is very separated from the species of sharks and fears them a lot more than the elders did in the past. Toward the Middle Ages, sharks got a bad reputation that led to ignorance about them; people did not care for them and hated them because they were seen as evil or wicked animals. Sailors were also scared of sharks because of how they could sink their sailing boats. Sharks’ reputation has changed tremendously throughout history: people used to see sharks as gods and praised them, until the human race became selfish and took them for granted, which caused sharks to act up and begin to harm us. Now we do not care about ocean creatures, for we catch them for profit and for food.

Chapter Three Summary:

Chapter three hands me two main perspectives: that of the shark hunters and shark guides, and that of the environmentalists. A man named Quariano, who gets paid to guide tourists and visitors offshore, protects his paying customers and takes them to parts of the ocean where many sharks prowl so that the visitors have a chance to fish for sharks. His target market is young men who live life on the edge and are willing to take risks like that. Being a professional causes Quariano to have very strong, very biased opinions: he states that since there is a plethora of sharks in the ocean, killing sharks is not wrong and is a very efficient way to make money. The environmentalist perspective is, of course, the complete opposite. Environmentalists believe that people like Quariano are not good enough to find real jobs in the world, and that they lead people down the wrong path by allowing them to harm natural habitats and their niches. If this hunting continues, sharks will become endangered and go extinct because of human selfishness.

Chapter Four Summary:

In chapter four, the book talks about how in a lot of markets and cultures, shark fins hold great value. There are auctions for shark fins where men offer a lot of money for them. In certain cultures, people believe shark fins have magic or contain supernatural powers, for example the power to cure AIDS or cancer, diseases so serious that there is no cure. In some cultures, many people believe that owning a shark fin will benefit one’s health. The significance of a shark fin in these cultures is very great: it bears magic that can cure diseases that may have no cure. Shark fins can also be food. In China and India, shark fins are very rare to come by; they are very expensive to buy, and if you are lucky enough to have some as a meal, you would be considered very rich.

David Livingstone Smith’s Less Than Human

“Less Than Human” is chapter 1 of David Livingstone Smith’s Less Than Human; it includes various stories of dehumanization from history to the present day, along with elaboration and further comments on those pieces of history. Some of these examples include stories from war and stories of dehumanization in the media. Smith uses these tales from history and his reflections on them to illustrate his purpose in writing; in fact, from these stories we can easily work out his argument and analyze it. In “Less Than Human,” Smith clearly provides his purpose, audience, arrangement, evidence for his argument, implications, and word choice.

Smith doesn’t get around to stating his purpose until the end, and even then the reader may have to infer what it may be. In the last paragraph, he says that “dehumanization is… widespread… it is found… through the full span of human history, and… the problem of dehumanization is everyone’s problem” (Smith 25). From this, one can find Smith’s purpose in writing: to show that dehumanization continues today, that it is not only a part of history, and that it affects everyone across many cultures. He makes this argument because dehumanization continues today and needs to be stopped before we reach another large war or major incident. One example Smith uses to illustrate this is Rush Limbaugh’s radio show on the “Abu Ghraib prison scandal [saying] ‘[The prisoners] are the ones who are sick’ … They are the ones who are subhuman” (Smith 22). Smith provides multiple other examples to show his purpose and why he made this his argument; at the same time, he also uses these to show who his intended audience is.

The question of who the audience is and whom Smith is directing the argument at is a different story. He uses multiple stories of dehumanization, whether it is the Israelis versus the Palestinians or the 1946 Nuremberg doctors’ trial, and whoever is reading the book can relate to at least one of them. Therefore, the audience Smith intends to reach with his argument is universal; despite this, one can say that his secondary audience is an academic crowd. The reason one could say this is his organization of the paper and his evidence supporting his purpose in writing “Less Than Human.”

Smith arranges this chapter in three sections: stories of dehumanization in war, dehumanization in the media, and a conclusion. He further divides the first two sections into a pattern that is basically story, supporting information on the story and introduction to the next story, story, supporting information, and so on until he concludes the section. For example, the excerpt begins with an example of dehumanization occurring between Israelis and Palestinians: “Degrading taunts rang out from behind the fence that divided the Palestinian side of the Khan Younis refugee camp from the Israeli side” (Smith 11). Afterwards, Smith reflects and elaborates on the story by telling the reader that Khan Younis was a “stronghold of Hamas,” then introduces another example and repeats the pattern (Smith 12).

Smith demonstrates his purpose and caters to his audience through the use of evidence of dehumanization in war and in the media. This evidence ranges from the Holocaust to Rush Limbaugh’s view on the Abu Ghraib prison scandal. Smith illustrates his purpose by providing examples from various points in history, like the “1946 Nuremberg doctors’ trial [that] was the first of twelve military tribunals held in Germany [in which] twenty doctors and three administrators … stood accused of war crimes and crimes against humanity” (Smith 14). Using examples like this, Smith gets the reader’s attention and shows his purpose before actually stating it on the last page. Smith also uses this evidence to cater to whoever reads this excerpt, choosing examples in which a person could relate to at least one instance or imagine a similar example in their own life. One such piece of evidence is that “on September 4, 2007, the Columbus Dispatch published a cartoon portraying Iran as a sewer” (Smith 22); a reader could relate to this by remembering something they read that made them feel uncomfortable. Smith also uses such evidence to hint at an implication or suggestion to the reader.

Smith provides a specific recommendation to the audience; however, part of it is stated and the other part is implied. He never deliberately states the whole suggestion, but he states part of it in the last paragraph, saying that “We are all potential dehumanizers, just as we are all potential objects of dehumanization. The problem of dehumanization is everyone’s problem” (Smith 25). By saying this, Smith states that dehumanization is everyone’s problem and that we are all affected by it. While this is deliberately stated, he alludes to the other part of his purpose, that dehumanization continues today and is not just a part of history, through, once again, his use of evidence from different points in time.

Smith does repeat specific words or phrases like dehumanization, but he also uses specific types of words or phrases. The most repeated word is dehumanization, along with other versions of that word, since it is the topic of this excerpt. More interesting is the special type of word or phrase Smith uses: the derogatory names or phrases that one side calls its enemy. Specific examples would be what the Nazis called their victims, “Untermenschen – subhumans,” and what the Japanese called the Chinese, “Chancorro [meaning] below human, like bugs or animals” (Smith 15-18). He uses these to support his purpose that dehumanization affects everyone across multiple cultures and is still a part of everyday life.

Throughout “Less Than Human” from Less Than Human, Smith uses tales of dehumanization from history to the modern day to present his argument. Through these, the argument can be studied and interpreted by his audience, and Smith’s purpose, audience, arrangement of the excerpt, evidence, suggestions, and word choice are all shown.

The Brief Wondrous Life of Oscar Wao by Junot Diaz

In the novel The Brief Wondrous Life of Oscar Wao by Junot Diaz, the main character Oscar Wao struggles with obesity and with finding love throughout his life. His misfortunes are blamed on a curse that haunts him and his family. Like Oscar and his family, real-life people struggle with the same issues as they do. The author addresses real-life issues while incorporating a cultural history and background. Although the readers of The Brief Wondrous Life of Oscar Wao may not all relate to having a Dominican family, the author addresses issues readers can associate with, such as struggling with love and family.

The Fukú curse is a historical curse that dates back hundreds of years in Oscar’s family. Throughout the novel, the narrator spends a great deal of time trying to convince the readers that every bad thing that happens in Oscar’s family’s life is due to the Fukú curse. All through the novel, Oscar struggles with finding love. As a child, he was a playboy and flirted with all the girls, but as he grew up, he became a nerd and had trouble finding a girlfriend. Just like Oscar, many people struggle with finding love in their lives. His first love is named Ana Obregon; she leads him on and then returns to a relationship with her abusive ex-boyfriend. In college he falls for another girl, who yet again rejects him. Oscar responds by trying to kill himself by jumping off a train bridge, but he does not succeed. After college, he goes on a trip with his sister and mother to the Dominican Republic, where he falls in love with a prostitute named Ybon. Unfortunately, Ybon has a boyfriend who is a captain in the police force. Everyone warned Oscar not to see Ybon, but Oscar believed he had finally found true love. Eventually Ybon’s boyfriend finds Oscar and badly beats him, resulting in Oscar’s mom sending him back to the States. But Oscar cannot seem to get Ybon out of his head and asks his best friend for money to fly back to the Dominican Republic to see her. Ybon’s boyfriend then finds him and kills him.

The misfortune with love does not affect only Oscar, but other people in his family too. His mother, for example, had trouble with men her whole life, starting in grade school. His mother, Beli, finally got the boy she liked, and they were having constant sex in the broom closet at school. When they got caught, he blamed it on Beli, even though he had promised to marry her and she had done nothing wrong. The boy was then shipped off to military school. Beli later meets another man, referred to as "The Gangster." No man had ever appreciated her the way he did, and she ended up falling in love with him and pregnant with his child. As it turned out, The Gangster was married, and his wife sent people to beat her nearly to death; she ended up losing her baby.

Like all families, there is conflict and tension between the members of the Wao family. Beli, the mother of Oscar and Lola, had been diagnosed with breast cancer. Lola responds by becoming goth and shaving her head, all while taking care of her sick mother. The cancer eventually goes away, but at dinner one night Beli announces it has come back, and Lola just asks her to pass the salt in response, which causes Beli to hit Lola. Lola sees this as a chance to be free from her controlling mother once and for all and runs away to live with her boyfriend. Things were not going well at her boyfriend's house, so Lola calls Oscar to meet her at a coffee shop, where she sees her mother and uncle waiting for her. Lola runs, and her mother chases her and falls; Lola rushes to help her, but it turns out her mother had faked the fall. The members of the Wao family all went through many unfortunate events in their lives, but they do not share them with each other. No one likes to talk about their past, which can lead to tension in a family.

Even though the novel The Brief Wondrous Life of Oscar Wao is most relatable to Dominican readers, there are still aspects of the book that non-Dominican readers can relate to, such as the struggles with love and family.

Guns, Germs, and Steel – Jared Diamond

When reading Guns, Germs, and Steel, one may assume that the author has an extensive background in history, mainly focusing on prehistoric times. This assumption would be incorrect. Jared Diamond, the author of Guns, Germs, and Steel (as well as The Third Chimpanzee, Collapse, and The World Until Yesterday), began his studies in physiology, which then led him to also study biology, geography, and many other disciplines, as stated on his website (http://www.jareddiamond.org). Although his main interests are in the sciences, he continues to write books about the history of the world. This is because he does something that many authors cannot do: he uses his biology background to make sense of the world and how we got here. For example, on page 53, Diamond is talking about environmental effects on the Moriori and Maori people. To show how the environment could affect these people, he brings up an example of placing lab rats in different environments to see what happened. Most historians would not think of an example like that; only someone with a biology background (such as Diamond) would. By using two disciplines to explain a topic, he forces the reader to think not only about the historical side of something but also the biological side, thus expanding the reader's knowledge of the topic.

One of Diamond's central questions asks why history unfolded differently on different continents. He is trying to see and explain why some areas developed the way that they did. In part one, he uses each chapter to explain how a culture grew. For example, Diamond focuses on the Moriori and Maori in chapter two and on Cajamarca in chapter three. This question and its answer are intriguing because they challenge the common perception that everything happened at the same time. It also makes people realize that humans did not exist everywhere at the beginning of time. They evolved from the great apes of Africa and migrated from there, creating families everywhere they stopped (36). Some people may disagree with Diamond's ideas and say that humans may have evolved differently or that they did not travel to certain places in a certain order. He addresses this by stating what other historians think, explaining why they think that way, acknowledging that either theory could be correct, and continuing on with what he believes about the situation.

One of the ways Jared Diamond answers his question is to look at the fossils that were left behind. From these, he can tell who or what was in an area, what they did to survive, what they ate, how advanced they were, and during what time period they were there. By piecing together different bones, he can determine what types of animals existed in a particular area. For example, if he found Homo erectus bones, then he would know that Homo erectus existed there. This can also apply to fossils not found in an area: if there are no fossils present, then we cannot be sure a species lived there. Arrowheads, writing utensils, and spears that are found can also tell us whether a people had the technology to build and use these things for a purpose. Diamond also determines when these species were living; however, his method differs from that of other historians. He uses calibrated radiocarbon dates instead of the usual uncalibrated radiocarbon dates because calibration provides dates that are closer to calendar dates. This can confuse readers: if they do not know the difference between the two, they may think they are receiving false information when they are actually seeing two correct dates that differ based on calibration. I find this useful because it gives the date a meaning that is relative to the calendar we use today. It makes it easier to create and connect a sequence of events.

One thing that Diamond does well in Guns, Germs, and Steel is taking other historians' viewpoints into consideration. He recognizes that his thoughts are not the only way to think, and that other historians can disagree with him and still be correct. Diamond explains this on page 37 when he is talking about the earliest "X." Here, he states that when someone finds the earliest existence of something (X), it challenges all other beliefs about when X first existed. He also acknowledges that it can take an extensive amount of research to confirm when X actually happened. This allows the reader to stay open-minded, and to not be completely set on a fact, because it can change when new information is found.

A weakness of part one is that it can get pretty dull. For someone who does not gravitate toward history, this book can become very boring very fast. The reader then starts to only read the words on the pages rather than comprehend and analyze them. When this happens, the reader can miss a lot of information, which leads to rereading the same passage over and over again, adding to the frustration. To fix this, I would remove the parts where Diamond seems to drone on about the same thing, as well as try to engage readers more by forcing them to think critically about the topics they are reading.

The Immortal Life of Henrietta Lacks by Rebecca Skloot

Introduction

For the book club assignment, I chose to read The Immortal Life of Henrietta Lacks by Rebecca Skloot. The book was originally published in 2010 by Crown Publishers. However, the copy of the book I read was published in 2011 by Broadway Books, a partner of Crown Publishers. The book is about an African-American woman named Henrietta Lacks. She was diagnosed with cervical cancer, and when she went to Johns Hopkins to be diagnosed and receive treatment, her tumor was biopsied and cultured. Her cells were grown and led to an immortal cell line. George Gey, the scientist who grew these cells, was the director of the lab at Hopkins, and it was his work that helped to create this immortal cell line. The book explains how this ability to grow cells led to many medical breakthroughs, including the testing of the polio vaccine and research on cancer cells. Although these breakthroughs have saved numerous lives and advanced modern medicine, the ethics of it all are called into question. Henrietta did not know her cells were taken from her and used for research. Neither she nor her family were ever compensated for their contribution to advancing medicine. Finally, there was no informed consent, and therefore her cells, known as HeLa cells, have become a giant for-profit business that is of no benefit to her children, husband, or other family members. I chose to read this book because I had heard of HeLa cells during my undergraduate coursework. I took a cell biology course, and we discussed the benefits of the cells. However, as the book points out, the medical advances were celebrated in my class, but how the cells were obtained was completely left out. I was curious to learn more about the famous HeLa cells, and so I chose to read this book.

Summary of Contents

In part 1, titled Life, the first unethical situation arises. Henrietta had just been diagnosed with cervical cancer. She had discovered the tumor herself shortly after giving birth to her daughter Deborah. The tumor had grown so fast that it was not in her medical charts. The doctor who diagnosed the cancer noted that after she delivered the baby 6 weeks earlier, there was no note "made in the history at the time, or at the six weeks' return visit" that would indicate cancer.1

When she traveled to Johns Hopkins, the nearest hospital that would treat people of color, she went to the doctor, told him where to look for the tumor, and sure enough, a mass was found on her cervix. It is not known for sure in her case, but many black patients were treated poorly by doctors at this time.2 What she wasn't told, however, was that the doctor had biopsied her cancerous tumor and was going to attempt to grow the cells outside the human body. She left the doctor with her diagnosis and her treatment of radium inserted into the cervix and went home happily and peacefully.

Meanwhile, at the Gey lab at Johns Hopkins, Dr. Gey began to grow and cultivate the cells. It became an incredible breakthrough that would eventually lead to other immortal cell lines being grown. The new cells, called HeLa cells in this case, were going to become essential in discovering advances to treat diseases. However, the question remains: Did the doctor have any right to remove the tumor and experiment with the cells, all without telling the patient? It would appear that from a public health standpoint, the greater statistical number, or the population, benefitted from the doctors taking the cells. However, on an individual level, it set a terrible precedent, and it was very unethical to take the cells without asking her and without compensating the family. I believe this is a crucial philosophical argument: What is the price of benefiting the greater good? Part of the issue here is also that not only did they fail to inform her of what they were planning to do, they did not acknowledge her real name until an article appeared in 1973 mentioning that her name could be Henrietta Lacks and not Helen Lane.3

Another example of the unethical behavior in the book comes from one of the researchers who benefitted from the HeLa cells. Dr. Southam, a physician studying cancer, wanted to know if the cells could grow inside another person's body. Using his terminal patients as guinea pigs, Dr. Southam injected the HeLa cells into the patients under the cover story that the injections were testing the patients' immune systems. As a result of his experiment, he saw that cancer did grow in the patients. The cost, however, was that 4 patients could not have the cancer removed completely, and one of the patients had the cancer metastasize through their body.4

He did not stop at these patients. Once he had proven the cells could grow in terminal patients, he wanted to see their effects on healthy patients. So he found a population that could be coerced into doing things against their will: a prison population in Ohio. Instead of educating and promoting good health to this population, Dr. Southam decided to inject the inmates with the cells and observe their reactions.4 He did learn a lot about resistance to cancer from these healthy inmates. However, as a public health official, he was not actively promoting good health. Rather, he was endangering the health of the population he was studying. The fact that this endangerment was occurring shows the lack of ethics in this time period. Although good did come of it, one has to wonder if there could have been a better way for the research advances to be made.

Conclusion

In conclusion, The Immortal Life of Henrietta Lacks was a very difficult read. Not difficult as in hard to understand the words and meaning; rather, difficult as in thought-provoking and leaving a general feeling of uneasiness. The underlying issue of the ethics of medical research, as well as how Henrietta Lacks was treated, is put side by side with the advances in medicine that came because of the ability to culture her cells. This leaves an uneasy feeling in the stomach as one tries to wrestle with what is more important, the individual or the "greater good." Understanding that statistics can be shaped to justify one's actions, I feel that the action taken has benefited society as a whole. However, as someone who aspires to become a full-time physician, I find the practice completely unethical. In a perfect world, I would like to see the family of Henrietta compensated today for the enormous breakthrough courtesy of her cells, especially considering the cell industry is now a multimillion-dollar industry.5 It is terrible to read of her family and learn that after the death of their mother, her children were abused, molested, and suffered into adulthood because of these traumas. In my opinion, you cannot put a price on a life. However, financial compensation is the least that could be done to support her family and possibly improve their socioeconomic status. Supposing that they could be compensated fairly, I would also like them to continue supporting research using the advances that have occurred so that more diseases and issues can potentially be solved. The cells have led to numerous breakthroughs, and who knows how many more will come because of them. Yes, it is a cop-out answer of staying right on the fence, yet I believe it is the correct answer.

This research has benefited the world. Public health is about the population. The population is the patient, and it is the job of public health professionals to do all they can to implement practices and policies that promote and sustain healthy populations. Because of the HeLa cells, population health was improved. While the book illustrates the clearly unethical decisions made with regard to the HeLa cells, the Lacks family, and the other experiments mentioned, the advances made from studying her cells have led to medical breakthroughs. Vaccines, treatments, knowledge about infectious diseases: all have been influenced by the culture of immortal cell lines. Were it not for the cell line, perhaps these advances would not have occurred.6 Therefore, I believe that while it was unethical, Henrietta Lacks unknowingly advanced the field of public health and has contributed to making our society a healthier environment for all people.

American Democracy in Peril by William E. Hudson (the fifth challenge)

Reading Response #2

The Fifth Challenge: Elections Without the People’s Voice

In the book "American Democracy in Peril," the author William E. Hudson discusses eight challenges that America would face at some point in its history. In the fifth challenge, Hudson argues that, just like the separation of powers (discussed in the first chapter), elections are not an indicator of democracy but a tool that has become a major challenge to it. Hudson also argues that in order for elections to be democratic, all citizens must have equal representation, elections must enforce deliberation about public policy issues, and elections must control the government's actions.

Equal representation in elections comes with the equal right to vote, where each individual has the same amount of power (one vote). However, what Hudson wants us to take away from this chapter is that equal representation seems to be violated in many ways. One of them is the way the Senate is organized. Since the Senate is composed of two senators from each state, voters in the least populous states have more control of the Senate than voters in larger states. As a result, twelve states containing less than 5 percent of the US population control a quarter of all the votes in the Senate. Similarly, the House of Representatives also fails to represent a large number of people through the single-member plurality electoral system. This system gives "the victory in an election to the candidate who wins the plurality of votes in a district," the result being that the individuals who didn't vote for the winning candidate don't get represented, which also violates equal representation. Hudson also accuses the Electoral College system of violating equal representation, since it fails to represent all the voters, just like the Senate, and uses the single-member plurality system's tactics.

Still analyzing equal representation, or the lack of it, Hudson talks about the money election, which I found very interesting. I was unaware that candidates in political campaigns depended on funding to keep their campaigns alive. After reading the passage "The Money Election," my opinion is that campaign funding does in fact prevent equal representation, and that all candidates' political campaigns should be funded at roughly the same level. This would allow all the candidates to have the same opportunities and, therefore, a fairer campaign, not only for the candidates but also for the voters, who would have the chance to vote without a money election being held first.

As with equal representation, individuals in our society participate in public deliberation by voting. During political campaigns, candidates express their ideas and views so that the voters can make a choice. This vote is a way for individuals to express what they expect their society to be and what changes they want to see, by voting for a candidate who has the same expectations and beliefs and wants to achieve the same results. The goal is for elections to enforce deliberation about public policy issues; however, it gets tricky when the sources that voters use to get the information necessary to make a decision or deliberate are ineffective. According to Hudson, there are two main sources of information that voters use for democratic deliberation, the news media and the campaigns, and both have been failing to provide voters with useful content for democratic deliberation.

I completely agree with Hudson that the news media nowadays is not a reliable source for the candidates' "serious proposals for addressing the country's problems" (CITATION). I believe this is mainly because the news industry's main focus is not to deliver important and serious informative content; rather, its main focus is making news attractive and controversial to hold the attention of viewers so that it can make more money. It cannot be forgotten that viewers are also guilty, since they are the ones who feed this kind of news. As a result, when it comes to presidential elections, the news media has become a reliable source of drama between the candidates, political scandals, and all the less important issues that cannot be used for deliberating about public issues.

Campaigns are another source of information that voters use for deliberation on public issues. However, these campaigns are being used as a tool to transmit messages that will "stimulate a positive or negative reaction" in voters, where the ultimate aim is to win votes. Just like the news media, many campaigns don't focus on promoting serious discussion of policy issues, which makes it harder for individuals to deliberate over these policy issues and decide whom they are going to vote for.

After analyzing how elections are connected to equal representation and public deliberation, there is still a need to understand how they control the government's actions. Hudson argues that since elections are decided "on the basis of sound bites, debate gaffes, and campaign image manipulation," they fail to really give us an idea of what the elected officials' specific agenda is, and once these officials are in power, they decide for themselves without the "democratic electorate's control" (CITATION). Political parties tend to be the ones that try to enforce the voters' control over the government's actions by making policies that reflect their voters' preferences. These parties also help the voters hold someone responsible if they don't agree with what happens after the elections. One thing that makes this possible is that political parties now have different sets of "principles, ideas, and policies" that allow voters to differentiate them and allow the parties to compete in elections. In conclusion, Hudson believes that if elections fail to control the government's actions, it's not because the parties stopped being in favor of the voters, but because there was no equal representation or significant deliberation during the election.

The Great Depression – biggest causes

The Great Depression was one of the worst time periods in American history. It started in 1929 and ended in 1939. It began in America with the crash of the stock market and later had a big impact globally. As shown in Document 1, the Great Depression was the worst economic downfall in American history. Millions of people were left unemployed and searching for nonexistent jobs. It was a common sight to see children begging on the roads. Furthermore, banks started to fail, and people started to lose any savings that they had. Overall, the main causes of the Great Depression were the stock market crash of 1929, the reduction in purchasing, and the abuse of the major economic ideas.

The stock market crash of 1929 was the biggest cause of the Great Depression. The stock market crash impacted millions of American people. Before the crash, many Americans were getting greedy. They were continuously buying more and more. As described in Document 10, even after Americans "bought all they can afford they go on buying, a little down and the rest in easy payments" (Document 10). This method of buying with installments was bad for the economy. Elmer Davis foreshadows the Great Depression when he states, "the bill will be all the larger when it finally has to be faced" (Document 10). Another reason that Americans got greedy was the speculative boom in the stock market. As described in Document 5, there was a "speculative boom that developed with increasing intensity in the years after 1927" (Document 5). The speculative boom made Americans greedy, as they were hoping to make quick profits from the speculative rise. However, as more Americans began to invest in stocks, the prices were forced upwards. These forced-up prices were a result of "competitive bidding rather than by any fundamental improvement in American (business)" (Document 5). As a result of the speculative boom, investors bought more stocks. However, when the stock market crashed, the investors with the most stock tried to get rid of it as fast as possible. This led to stock prices dropping drastically, as shown in the newspaper headline in Document 3: "Stock prices slump $14,000,000,000 in nation-wide stampede to unload" (Document 3). The drop in prices could have been a good way to jumpstart the economy, but Americans were no longer buying anything. Most Americans stopped buying stocks, which was worse for the economy, since it cannot grow without consumers. Overall, the stock market crash of 1929 was one of the greatest causes of the Great Depression because it completely dropped the prices of all stocks and put millions of Americans into poverty.

After the stock market crash of 1929, many Americans were reluctant to buy anything. Also, many Americans were too poor to be able to buy anything besides the absolute necessities. After the stock market crash, many Americans lost their jobs. As shown in the table in Document 4, unemployment rates rose drastically after the stock market crash. Without jobs, Americans could not purchase anything, and this pushed the country further downwards. Maintaining a family became extremely hard, since many adults were losing their jobs. The hardships of family life are further explained in Document 7, where an average mill worker describes her daily lifestyle. In her story, she explains that her income combined with her husband's is just barely enough to support her entire family. This means that the average family did not have much money left over to spend on other items and luxuries. The table in Document 9 further supports this argument because it shows the average US family income distribution. After the stock market crash, nearly 60% of American families' annual incomes were under the poverty line (Document 9). This showed that the families under the poverty line could not afford anything other than the absolute necessities, which meant that they could not purchase other luxuries. Overall, the reduction in purchasing was one of the causes of the Great Depression, and it happened because of the unemployment, which led to a lack of money.

The abuse of economic ideas was one of the smaller causes of the Great Depression. As described in the background essay, the 4 major economic ideas are the law of supply and demand, Say's law, the business cycle, and the stock market (Background essay, 437-439). Before the Great Depression started, American people were breaking some of these economic ideas. As described in Document 6, "consumers bought goods on installment at a rate faster than their income was expanding" (Document 6). Buying goods on installment meant that people would pay over time. This purchasing style was okay in the beginning, but after a while it had serious consequences, since many people were accumulating debt and their income wasn't capable of covering the installments. Also, this type of purchasing broke the law of supply and demand, since the supply of and demand for goods remained the same, but people didn't have money to buy goods and had to use installments. This meant that there would come a time when people would stop buying, which would lead to a sag in the economy (Document 6). Furthermore, Document 10 describes how people continued buying even after they couldn't afford it. This shows that people broke the business cycle, because usually, if people stopped buying once they couldn't afford to, production slowed and workmen were fired. However, in the years before the Great Depression, people used installments and continued buying, which broke the business cycle. Additionally, the farming economy also started to abuse the law of supply and demand. Farmers started to overproduce items in hopes of being able to sell more. However, this overproduction backfired, and as shown in Document 11, the prices of goods completely dropped. The farm industry fell as farmers were forced to sell their goods at very low prices. Overall, the abuse of economic ideas contributed to the Great Depression, since people started paying with installments and breaking the business cycle and the law of supply and demand.

In conclusion, the Great Depression was caused by many different factors. The greatest cause of the Great Depression was the stock market crash of 1929, which put millions of Americans into poverty and cost many their jobs. Additionally, the Great Depression was also caused by the reduction in purchasing, since many were unemployed and couldn't afford to purchase anything besides the absolute necessities. The reduction in purchasing also kept the economy down, since it can't grow without consumers. Furthermore, the abuse of the major economic ideas also had an impact on the Great Depression. Overall, the stock market crash of 1929, the reduction in purchasing, and the abuse of the major economic ideas were the three major causes of the Great Depression.

An Abundance of Katherines by John Green

Synopsis.

The novel I studied is "An Abundance of Katherines," written by John Green. This book was published in September 2006 by Dutton and Speak. The genre of this book is fiction. The main characters are Colin Singleton, an anagram-loving seventeen-year-old boy who is depressed; Hassan Harbish, Colin's lazy, funny, and slightly overweight best, and only, friend; Lindsey Lee Wells, whom Colin and Hassan meet on their road trip in Gutshot, Tennessee; and Hollis Wells, Lindsey's mother, an extreme workaholic. The conflict of this story occurs between Colin and the other Colin when Colin Singleton finds the other Colin cheating on Lindsey. The other Colin threatened Colin not to tell Lindsey, and Colin was forced to decide whether or not to tell her. In the end, Colin made his decision to tell Lindsey what happened. This led to her breaking up with the other Colin and to a very brutal beating for Colin Singleton and his best friend Hassan.

Colin Singleton is a child prodigy who fears he will not grow up to become an adult genius. After being dumped by his girlfriend, Katherine XIX, Colin is looking for his "missing piece," longing to feel whole and longing to matter. He hopes to accomplish his goal of becoming a genius by having a "eureka" moment. Over the span of his life, Colin has dated nineteen girls named Katherine, all spelled in that manner. In each of these relationships, as Colin remembers it, the Katherine dumped him.

After graduating from high school, and before college, Colin’s best and only friend, Hassan Harbish, convinces him to go on a road trip with him to take his mind off the breakup. Colin goes along with the idea, hoping to find his “eureka” moment on the way. After driving all the way from Chicago to Tennessee, they come across the alleged resting place of the body of Archduke Franz Ferdinand. There, they meet Lindsey Lee Wells. After a short time, Colin and Hassan find themselves employed by Hollis, Lindsey’s mother who runs a local factory that is currently producing tampon strings. They live with their employer and her daughter in a rural town called Gutshot, Tennessee. The employment she sends them on is to interview all current adult residents of Gutshot and assemble an oral history of the town. As time passes, Colin finds himself becoming attracted to Lindsey, though matters are somewhat complicated by her on-again, off-again boyfriend Colin. He and Hassan call him TOC which means “the other Colin”. Our Colin, the prodigy, is still chasing his eureka moment, finally finding it in his theorem he created called the Theorem of Underlying Katherine Predictability. It is meant to determine the curve of any relationship based on several factors of the personalities of the two people in a relationship. It would predict the future of any two people. His theorem eventually works for all but one of his past relationships with a Katherine. It is later discovered by Colin that he had dumped this Katherine (Katherine III), rather than the other way around. The graphs all make perfect sense at this juncture. As Colin’s story is revealed to the reader, we find that K-19 was also the first of the Katherines, “Katherine the Great.” While the back stories of Colin’s life play out, Hassan gets a girlfriend, Katrina, a friend of Lindsey’s. 
The relationship is cut short when Colin and Hassan catch Katrina having sex with TOC during a feral hog hunt with Lindsey, her friends, and TOC’s father. A fight between TOC and the surrounding acquaintances breaks out when Lindsey finds out that he has been cheating on her. While recovering from a knee to the groin, Colin anagrams the Archduke’s name in the graveyard to dull the pain, and realizes that it is actually Lindsey’s great-grandfather, Fred N. Dinzanfar, who is buried in the tomb.

Colin finds Lindsey at her secret hideout in a cave she had shown him previously, where he tells her the story of every Katherine he has ever loved. Lindsey tells him that she feels self-centered, claiming that she does not feel sad but instead slightly relieved by TOC’s affair. They discuss what it means to them to “matter” and eventually confess their love for each other. As their relationship continues, Colin decides to use his dating formula to determine whether or not he and Lindsey will last. The graph reveals that they will last only four more days. Four days later, Lindsey slips a note under his door stating that she cannot be his girlfriend because she is in love with Hassan, but she adds a P.S. saying that she is joking. Colin realizes that his theorem cannot predict the future of a relationship; it can only shed light on why a relationship failed. Despite this, Colin is content with not “mattering.” Hassan also announces that he is signing up for two college classes, something Colin has been urging him to do throughout the book. The story ends with the trio driving to a nearby Wendy’s. Lindsey states her desire to just “keep going and not stop.” Colin takes her advice, as a transcendental and ecstatic feeling of “connection” with Lindsey, Hassan, and everyone not in the car surges through him. He has finally found peace and happiness through connection with other people, rather than through the pursuit of distinguishing himself from everyone, feeling “non-unique in the very best way possible.”

Tones, Themes of the Story, and Issues Presented by the Author.

There are many tones in this novel, including happy, insecure, and hopeless. The first is happy: Mrs. Harbish shook her head and pursed her lips. “Don’t I tell you,” she said in accented English, “not to mess with girls? Hassan is a good boy, doesn’t do this ‘dating.’ And look how happy he is. You should learn from him.” (chapter 3, paragraph 15). In a lot of ways, Hassan’s mom is right: Colin would be much happier if he didn’t mess around with the Katherines; he couldn’t whine about them dumping him then. On the other hand, we’re not sure Hassan really qualifies as the best example of happiness; he even admits later on that he’s lazy and should do something else with his life.

The next tone is insecure. With all the nasty back-and-forth, Colin fought the urge to ask Katherine whether she still loved him, because the only thing she hated more than his saying she didn’t understand was his asking whether she still loved him. He fought the urge and fought it and fought it. For seven seconds. (chapter 5, paragraph 85). That’s a really long time to wait. Oh wait, it took longer than seven seconds to read that sentence. That’s the whole point: Colin is impatient and needy when it comes to love. He can’t leave Katherine alone for one minute without asking her if she loves him, which sounds both pretty insecure and pretty annoying. The last tone is hopeless. “Technically,” Colin answered, “I think I might have already wasted it.” Maybe it was because Colin had never once in his life disappointed his parents: he did not drink or do drugs or smoke cigarettes or wear black eyeliner or stay out late or get bad grades or pierce his tongue or have the words “KATHERINE LUVA 4 LIFE” tattooed across his back. Or maybe they felt guilty, like somehow they’d failed him and brought him to this place. (chapter 3, paragraph 7). After he tells his parents about the road trip, he lets them in on a secret: his potential is already wasted. We’re not so sure about that. You can still have hopes and dreams and be an all-star even if you don’t have a huge eureka moment. Too bad Colin doesn’t believe that.

The themes of the story are life, consciousness, and existence. Not to go all parental on you, but it’s time to ask some heavy-hitting questions: what do you want to do with your life? What’s the purpose of life? If you’re in high school, chances are your parents are always bugging you about which college you want to go to or what major you want to pursue. It’s the norm for us to think about these things in our teenage years, and Colin and Hassan are plagued by these questions too in An Abundance of Katherines. And in true young adult novel form, they come up with different answers: Colin wants to study, study, study, while Hassan is happy watching TV and doing nothing. The thing is, though, both of them start to reconsider their life goals and paths toward the end of the story.

The first issue the author presents in “An Abundance Of Katherines” is the repeated rejection of Colin Singleton, the boy who has been dumped several times. He feels a desperate need for people to remember and appreciate him. In the beginning of the story, Katherine the 19th (as he calls her) dumps him because she feels that he is more interested in being the only smart person around, and cares more about being told how much she cared about him, than about the relationship itself. Colin is left broken, especially since she was his first “actual” love and he had dated her for a year and eight months. Hassan, Colin’s loyal and dearest best friend, wants to do anything to cheer him up, so he takes him on a road trip. Colin thinks that no one really appreciates him as a person and that no one cares about him after Katherine dumped him, so Hassan sets out to prove that a road trip will take his mind off the breakup with Katherine the 19th. Little does Colin know that his whole perspective on himself is about to change for the better. Hassan and Colin drive to Gutshot, Tennessee, and meet an extremely attractive tour guide named Lindsey, with whom Colin immediately feels a connection. While Lindsey gives Hassan and Colin the tour of the Archduke’s grave, Colin learns that she has only ever dated one person, who is also named Colin. But her Colin, whom Hassan nicknames “TOC” for “The Other Colin,” is the complete opposite of Colin Singleton. TOC is a jerk, to put it plainly. As Colin and Hassan stay in Gutshot, they get to know Lindsey better, and Colin, let’s say, is falling in love again, which is going to be tough for him since he knows she has a boyfriend. TOC shows his true colors as the story progresses: Lindsey finds out that TOC has been cheating on her with the hottie with a body, Katrina, and she is beyond upset as soon as she finds out.
Once Colin found out, he helped relieve Lindsey of those negative feelings. A breakup isn’t always an easy thing to get over, and Colin knew exactly how she felt. One quote that really stuck out to me in this main issue is when Lindsey tells Colin: “If people could see me the way I see myself – if they could live in my memories – would anyone love me?” That quote stuck out because it shows that it’s not just Colin who feels like no one appreciates or cares about him; Lindsey does too. And it’s good for Colin to know that he has someone who also understands how it feels to want to be loved. Both Colin and Lindsey show that we can lose ourselves after a tough breakup. It took Colin a while to be himself again, and he is willing to help Lindsey get over her breakup and be herself again. The next issue is the journey of getting to know ourselves. Are you unique? What makes you, you? That’s one of the big questions An Abundance of Katherines asks us to think about. We’ve got a washed-up child prodigy who wants to matter but just isn’t sure if he’s unique anymore. Then we’ve got Lindsey, who has faked it so much that she’s one big phony most of the time; she wants to fit in, so she pretends to be nerdy, ditzy, or southern just to do so. It’s easy to lose sight of who we really are deep down in our cores, and this book is all about questing to get in touch with our true selves. The last issue is Person vs. Self, because Colin is a child prodigy trying to become a genius. Colin wants to do something big with his life, like become a genius, but first he needs to discover himself and what he is meant for. He has been dumped many times throughout his life, always by girls named Katherine, all spelled in that manner. The conflict is resolved when he comes up with an equation to calculate how long a relationship will last, and why he gets dumped.

Critical Analysis.

The novel I studied is “An Abundance of Katherines,” written by John Green and published in September 2006 by Dutton, with a later paperback from Speak. The genre of this book is fiction. The main characters are Colin Singleton, an anagram-loving, depressed seventeen-year-old boy; Hassan Harbish, Colin’s lazy, funny, and slightly overweight best, and only, friend; Lindsey Lee Wells, whom Colin and Hassan meet on their road trip in Gutshot, Tennessee; and Hollis Wells, Lindsey’s mother, an extreme workaholic. The conflict of this story occurs between Colin and the other Colin when Colin Singleton finds the other Colin cheating on Lindsey. The other Colin threatens Colin not to tell Lindsey, and Colin is forced to decide whether or not to tell her. Colin decides to tell Lindsey what happened, which leads to her breaking up with the other Colin and to a very brutal beating for Colin Singleton and his best friend Hassan.

The main idea of the work is the story of Colin Singleton, the boy who has been dumped several times. An Abundance of Katherines follows Colin Singleton, a prodigy with an obsession for anagramming. Colin has a very specific type when it comes to the opposite sex: he only dates girls called Katherine, and so far he’s been dumped by 19 of them. We follow Colin as he ventures into the unknown on a road trip with his best friend, Hassan. He encounters all sorts of things on his travels, from feral hogs to Scrabble. The structure of this novel jumps around and is not in chronological order: it moves into flashbacks of Colin’s past and then returns to the present, and it does this repeatedly throughout the novel. The novel is written in the third-person omniscient, as in the quote “As Hassan screamed, Colin thought, oh right, should have flushed.” This point of view is significant throughout the book because the reader is not stuck reading about the same person the whole time. There is also no bias in this narration.

I loved the plot of this book. Normally, road trips just annoy me because they are far too cliché, but in this book it really works. A road trip is perfect for Colin, as the ever-changing, exciting, and foreign atmosphere is just like him; as the scenery changes, Colin changes as a person. I couldn’t help but see a deeper meaning in this story. On the surface, it is the tale of a prodigy on a road trip, but there is so much more than that. The novel carries some very important messages about fitting in and about trying to see logic in everything. In the hands of some authors, this would become a cheesy parable; luckily, Green is skilled enough to make it sincere. He understands teenagers, particularly those who are nerdy and socially awkward, which gives the book a friendlier tone. What I don’t really like about this book is how Colin has to go through so many heartbreaks, all at the hands of those girls named Katherine. Those Katherines should not have left him in the first place; they should have appreciated Colin for loving them so much. In the end, he was hurt nineteen times, for he had fallen for nineteen girls named Katherine.

Dating nineteen girls all coincidentally named Katherine seems a ridiculous phenomenon for a teenager who is only 17. This might not happen in reality; such a phenomenon can be considered fanciful. The author here employs magical realism, translating his experiences into something that seems fictional in his literary work. Through writing An Abundance of Katherines, he was able to incorporate fantastical elements drawn from reality. The possibility of dating nineteen Katherines in a span of 17 years is quite remote, but the author managed to turn it into something at once fictional and realistic. A major part of this book is the Theorem of Underlying Katherine Predictability. This is a complicated idea that Colin comes up with: basically a graph that can supposedly predict when and how two people will break up. Personally, I found the idea that love can be graphed really interesting, but it might bore some readers. Luckily, you don’t need to understand the math to enjoy the plot. The theorem is really just a vehicle to show that Colin is a prodigy and to help him reach his final conclusion that “The future is unpredictable.” I also think the formula is biased: it only represents and summarizes what happened in the past and is not a viable representation of what happens in the future. This can be applied to real life. Sometimes we cling so tightly to something objective that we fail to realize there are missing pieces we have not considered; we become close-minded and miss opportunities. In life, we sometimes have to take risks and modify our own formulas. To conclude, An Abundance of Katherines is a fantastically nerdy coming-of-age road trip that I would recommend to John Green fans and self-proclaimed nerds everywhere, as well as anyone who needs some good life advice.

Recommendations.

Based on the novel I studied, the issue chosen for recommendations is the depression of Colin Singleton, the boy who has been dumped several times and who feels a desperate need for people to remember and appreciate him. My first recommendation is that, as human beings, we need to know how to appreciate others, especially those close to us such as family and friends, by treating them right. Their existence matters, and they should never reach the point of desperately needing people to remember and appreciate them. As the people who know them well, we need to understand them more, because people who go through depression need support. Next, talk to them more often. Don’t ignore them, or you will make them feel alone until they feel they deserve no one in life. People with depression need company, and good company at that. Talk to them about anything; as long as they feel they have someone, that is okay. They might need someone to have a conversation with but be too afraid to talk to anyone, since they expect to be ignored. Last but not least, as a close friend of someone going through depression, we need to always cheer them up and never let them down, just as Hassan Harbish did. He is the only best friend Colin has, so he took Colin out on a road trip so that Colin could calm himself a little.

Awareness and treatment of breast cancer

According to the Centers for Disease Control and Prevention, “About 40,000 women and 400 men in the U.S. die each year from breast cancer” (CDC, 2016). For ages now, breast cancer awareness campaigns have reached out to communities all over the country, yet most of us do not concern ourselves with this particular cause. We tend not to care about these sorts of issues unless they are inflicted upon those closest to us, such as our friends and family, and we tend to ignore the fact that we are not immune to a disease just because it does not show up in our family’s history. Every woman and man carries some risk of developing breast cancer; this issue can be properly addressed only when you are fully aware of the disease.

To start off, it is still unclear to researchers why breast cancer appears, but they have come up with some theories that may explain it, genetic mutation being one of them. The Mayo Clinic Health Letter states that “Although only 5 to 10 percent of breast cancers are attributed to inherited genetic mutations, the presence of these mutations can significantly influence the likelihood of developing the disease” (“Mayo Clinic”, 2016). I believe that our genes play a huge role in the presence of all types of diseases and disorders. With an incredibly strong family history of cancer, it has been determined that certain inherited mutated genes, in this case BRCA1 and BRCA2, increase the risk of breast cancer. The BRCA genes normally act as tumor suppressors, keeping our cells replicating at a steady pace, but they can do the exact opposite when altered. Cells then develop at an abnormally rapid speed in the lobules, ducts, or tissues, forming lumps in the breast. These abnormal cells form malignant tumors that start in the breast and can spread to the lymph nodes and beyond. Another contributing factor involves menstruation and age: the longer we are exposed to the hormone estrogen, the greater the risk of breast cancer.

Breast cancer is not limited to people in the U.S.; it has occurred worldwide for centuries. Third World countries are less likely to see breast cancer, but the same cannot be said for more economically developed countries. Because of changes in reproductive factors and lifestyles, and a rise in life expectancy, incidence rates in developing countries have greatly escalated. For example, North America has the highest breast cancer rate in the world, while the lowest rate is in East Asia; correspondingly, white and African American women have a higher chance than Hispanic and Asian women. The European Journal of Cancer states that “It is generally accepted that breast cancer risk factors, which have mainly been studied in Western populations are similar worldwide. However, the presence of gene–environment or gene–gene interactions may alter their importance as causal factors across populations” (“European”, 2013). This statement rings true, because many countries share similar risk factors, such as late childbearing, obesity, old age, avoiding breastfeeding, alcohol, hormone levels, and diet. But at the same time, our environment, including what we consume and our traditions, makes a difference. On another note, in 2015 it was estimated that a little more than two hundred thousand women would be diagnosed with invasive breast cancer and sixty thousand with non-invasive breast cancer, with about forty thousand deaths, in the United States alone.

Early detection of breast cancer is crucial in saving a life, so it is important to know how the disease presents itself. Checking your body regularly is highly recommended for all women. Some symptoms of breast cancer include a lump in the breast, discharge from the nipples, a swollen breast, skin irritation, and any physical changes in the breast and nipples. Then there is the subject of diagnosing breast cancer, which is a whole other matter. Those suspected of having breast cancer may go through a breast ultrasound, a mammogram, and MRI testing, all performed by a radiological technician, whose job is to use machines to capture images of structures deep inside the breast. In a breast ultrasound, sound waves produce sonograms to verify whether a lump is a solid mass or a fluid-filled cyst. A mammogram is simply a breast screening. MRI testing uses magnetic fields and radio waves to capture a model of the interior of the body. Patients can also receive a biopsy, in which tissue or fluid is removed from the breast and sent to the lab for examination. A biopsy offers a conclusive result: it determines whether the cells are indeed cancerous, the types of cells involved, whether the cells are aggressive, and so on. Once the diagnosis is completed and the patient is positive for breast cancer, the patient next undergoes a process called staging. Staging helps determine whether the cancer cells have spread and what stage the patient is in, allowing the doctor to decide what kind of treatment to recommend in consideration of the patient’s health.

Finally, there are various ways to treat breast cancer, depending on factors such as the stage you are in, the type of breast cancer you have, your general health, and even your preferences. Surgery is typically suggested for patients with small tumors; the procedures are called lumpectomy and mastectomy. These procedures attempt to surgically remove the entire tumor, but treatments such as chemotherapy, radiation therapy, and hormonal therapy can also be given after surgery to kill any remaining cancer cells. Chemotherapy interrupts the cancer cells’ reproductive cycle using drugs such as methotrexate and vinorelbine. Hormonal therapy, on the other hand, stops hormones from reaching the cancer cells through the drug tamoxifen. The article Systemic therapy: Hormonal therapy for cancer even states that “5 years of tamoxifen after surgery reduces the annual recurrence rate by 41% and annual mortality rate by 34%” (Jacinta & John, 2016), and it can be used for more than five years for better results.

In conclusion, being aware of breast cancer will help us be prepared for its surprise appearance. Understanding its causes, detecting it, and then treating it is something every woman and man should know about. This is not a matter to be taken lightly: with so many lives already lost, who is to say they could not have been saved with the right amount of knowledge?

Hua-gu-deng Dance

Dance is a universal language understood by all, and the varieties of dance forms that exist in the world are nearly infinite. Two interesting and comparable styles are the Hua-gu-deng dance, a traditional Chinese form, and ballet, a classical style originating in Europe. Both genres have distinct features that set them apart from each other and from other branches of dance.

Ballet originated as a court dance and later transformed into a performing art. Ballet has its own terminology in the French language, so the vocabulary of ballet can be used in any country with the same meaning. According to the Atlanta Ballet’s A Brief History of Ballet, “The official terminology and vocabulary of ballet was gradually codified in French over the next 100 years, and during the reign of Louis XIV.” At the time, the King of France himself performed many of the beloved dances. Ballet became a staple art form in countries like Russia, Italy, and France, which fostered its importance. In France, King Louis XIV created the Académie Royale de Danse, established requirements, and began certifying instructors. Ballet’s popularity began declining in France after 1830, but today it is still very popular and can be found all around the world. Ballet has held on to its traditional roots with very few changes to the style: the French language is still used to define movements, and the historic techniques have remained the same. The only aspect that slightly differs from historic ballet is the method used to practice it; for instance, Italy practices the Cecchetti method. Beyond the different methods, there are sub-categories of ballet, distinct styles with slight variations that still stay true to their roots. One variation is neo-classical ballet, popularized in the 20th century by talented individuals such as George Balanchine. This style is fast-paced, has more energy, can be asymmetrical, does not tell a story, and focuses on aesthetics. Classical ballet, on the other hand, is graceful and fluid, balanced and symmetrical; it is always a narrative dance, and elaborate costumes and sets are preferred. Another, more modern style is contemporary ballet, which is greatly influenced by modern dance.
It includes floor work, more body movement, and a greater range of the bodyline, and it can be danced in pointe shoes or barefoot. During the 19th century, the Romantic Movement was underway; most ballets created in this era had endearing, loving themes and often portrayed women as passive and fragile. In today’s world, ballet has moved away from the constraints of classical ballet and has begun including “plot-less” ballets with darker, deeper meanings.

Classical Chinese dance, and more specifically the Hua-gu-deng dance, has been around for thousands of years. Hua-gu-deng has played a major role in Chinese cultural development; it originated in the Huai River region of eastern China. Classical Chinese dance has been around for nearly 5,000 years. With every changing era and dynasty in China, the tradition has adapted, combining aesthetics with its distinct dynamic content, rhythms, and narrative. The tradition traces back to the Qin Dynasty, and each dynasty that followed created different and specific dance elements. Classical Chinese dance training focuses on three main factors: technical skill, form, and bearing. Technical skill encompasses acrobatic movements such as flips, jumps, leaps, turns, and aerial tricks. Form, the second aspect, refers to the way dancers move their bodies from one movement to another. The movement is usually very circular and full, similar to modern dance, which tends to have round, flowing movements that are loose and asymmetrical. Every movement in the form of classical Chinese dance is choreographed, and breathing is also crucial: dancers are taught how and when to breathe, and all movements must be round and full. Finally, the vital element called “bearing” is the inner spirit of the dancer. By emphasizing bearing, the dancer is able to accentuate the deeper meanings of the dance and create a further understanding of the narrative. It is in this bearing that classical Chinese dance carries the ancient characteristics of its culture.

How did the Nazi party garner support? – Conformity and obedience

In 1933, Adolf Hitler became Chancellor of Germany. His Nazi party had grown from a small party into the rulers of Germany. The Nazis were fascists who used their racist ideas as an excuse to commit atrocious crimes, and yet, despite all the crimes they committed, they were very popular. Hitler came to power despite holding ideas that people should not have tolerated or supported. We know that Nazi ideals were racist and bigoted, so how did the party receive such support from a society of people who were so democratic?

Conformity and obedience play a big role in this ordeal.

Conformity is behavior that follows the standards a group expects.

Conformity can be both good and bad. Every culture has its own practices that other cultures might find a bit “awkward”; although awkward to one culture, some practices are completely normal in another. Slavery is an example of this. Even at the height of slavery, some cultures detested the idea of keeping a human being in bondage and withholding their freedom, and these cultures took great strides to outlaw slavery in their lands. Different cultures value different beliefs, but even within the same culture, some people hold different views. During the height of slavery in America, there were those who believed that slavery was wrong but who themselves owned slaves. This included Thomas Jefferson, who was known to have as many as 175 slaves despite referring to slavery as an “assemblage of horrors.” Despite being fully aware that slavery was wrong, many people participated in it because that was the way of life for the culture. The same could be said of Nazi Germany. Many Germans believed that the Nazi party’s treatment of Jews was unfair and wrong, yet few people questioned it. In that period and in that society, it was normal to think that anyone who wasn’t Aryan was subhuman, and anyone who held different views was thought to be odd. No one dared question this belief, because they did not want to be considered a Jewish sympathizer; anyone who tried to help the Jews during their persecution was subject to severe repercussions.

The Nazi party that took control of Germany blamed the Jews for the depression Germany faced and for losing the war. It began implementing laws that limited the rights of Jews, even though Jews were German citizens. Properties belonging to Jewish households were confiscated, and Jews were ordered to concentration camps, where six million would go on to lose their lives. How did a country known for its democratic idealism succumb to such a fascist state? Obedience. Germans believed that Hitler would be the one to bring Germany out of its economic depression. They were outraged because they felt their leaders had betrayed them after the First World War, and Hitler promised to bring Germany political and economic stability, which he did. He was very popular among the German people, so few questioned him when he became the Führer of Germany. As Führer, he ordered the Jews to be isolated and sent to prison camps, and many Germans not only failed to question his decisions but supported them. They also supported his decision to invade Poland and eventually France, sparking the Second World War. After the war, German soldiers were tried in court and justified their actions by saying they were just following orders. How could people commit such horrid crimes while fully knowing that what they were doing was wrong? These may be symptoms of obedience.

The German Jews were a minority group, so it was easy for other Germans to isolate them from the rest of German society. Because they were a minority, Hitler was able to capitalize on that fact and create an “us vs. them” mentality against them. The German Jews were easy to isolate because they had a different religion and culture from the average German.

Aryan Germans were able to distance themselves from this group, which had a different religion and culture, and this gave them an excuse to place the Jews in a class viewed as subhuman. This was the same tactic used by Europeans to colonize and conquer much of the rest of the world: they believed that the people residing in the places they conquered were subhuman and thus had little to no right to govern themselves, so they could not possibly be trusted to do so.

Another reason Germans allowed the Nazi party to commit crimes against humanity was that they felt they were just following orders from their Führer. Hitler was, according to the German people, a man who kept his promises. He promised to bring Germany back from the recession, and he did, something the German government had struggled with until then. He also promised to restore Germany to the great nation it had once been and to unite all Germans in the world under one flag. The Germans placed such high hopes in Hitler that they gave him the highest authority in the country. After Hitler was declared Führer, he was known as the most powerful man in Germany, and he was very popular with his countrymen. To defy the orders of the hero of Germany would have been seen as an act of treason; Germans believed that anyone who did not obey Hitler must not have Germany’s best interests at heart. To defy Hitler was to defy Germany itself, and no patriot would want to go against the best interests of his country. Even when they knew their actions were evil, they carried them out regardless, because Hitler ordered it; Germany ordered it. Even if it meant killing innocent people, Germans were willing to follow the orders of their Führer, because he represented the collective mind of the whole country. A country is nothing if its citizens cannot follow the orders of its leaders.

The Nazi party was a great example of how conformity and obedience can lead us to do things that we may feel are wrong. It was easier for the Nazis to commit these crimes because they convinced the majority that it was okay to do these things. They also used the people's trust in their government to their advantage. People are more willing to follow commands if there is a higher authority directing them. Hitler utilized obedience and conformity to rule a country of intellectuals and to lead that country into a war that took so many lives. It is easy to say that we won't do bad things even if someone forces us, but history says otherwise.

Sometimes we don’t even have to be forced, we just have to believe in authority and isolate groups of people.

Works Cited

Andrews, Evan. “How Many U.S. Presidents Owned Slaves?” History.com, A&E Television Networks, 19 July 2017, www.history.com/news/ask-history/how-many-u-s-presidents-owned-slaves.
crashcourse. “Social Influence: Crash Course Psychology #38.” YouTube, YouTube, 11 Nov. 2014, www.youtube.com/watch?v=UGxGDdQnC1Y&t=416s.

Influence of the Strange Case of Dr. Jekyll and Mr. Hyde on popular culture

The Strange Case of Dr. Jekyll and Mr. Hyde: a title you may not have heard before, but a story you definitely know. In order to understand the topics discussed in this article, you need to understand the plot of the novel, so here is a quick summary.

Basically, there is a well-known doctor named Henry Jekyll who has a lawyer friend named Mr. Utterson. Mr. Utterson admires his friend very much, but is concerned when Dr. Jekyll has him write up a very strange will leaving his entire estate to a man named Edward Hyde, whom Utterson has never heard of before. The will is odd because it states that

“in case of the decease of Henry Jekyll, M.D., D.C.L., L.L.D., F.R.S., etc, all his possessions were to pass into the hands of his “friend and benefactor Edward Hyde,” but that in case of Dr. Jekyll’s “disappearance or unexplained absence for any period of time exceeding three calendar months,” the said Edward Hyde should step into the said Henry Jekyll’s shoes without further delay and free from any burthen or obligation, beyond the payment of a few small sums to the members of the doctor’s household (Stevenson, 39).”

Utterson begins to investigate Mr. Hyde and is told a story about a brute of a man who knocked down a little girl in the street near where Dr. Jekyll lives; everyone on the street yelled at the rude man, and the man offered to pay a large sum of money to the family of the girl. He then disappeared through the door of Dr. Jekyll’s home and office, only to return with a large check drawn on Dr. Jekyll’s bank account. Utterson is appalled by this story and goes to talk to Mr. Hyde himself. He hunts down Mr. Hyde and describes him as a man with evil oozing out of his pores. He then asks Dr. Jekyll about these odd arrangements. Dr. Jekyll refuses to comment, and nothing happens for about a year.

Skip ahead to one year later, when the brutal murder of a popular politician occurs and Mr. Hyde is the one and only suspect. Everyone tries to hunt down this evil man, but no one succeeds and the matter is forgotten. During this whole situation with Mr. Hyde, Dr. Jekyll is in excellent health and is throwing dinner parties for his friends, including a certain Dr. Lanyon. Once again, skip to two months later, when Dr. Lanyon and Dr. Jekyll fall terribly ill after admittedly fighting with one another. Dr. Lanyon dies, leaving mysterious documents with Mr. Utterson that are ONLY to be opened if Dr. Jekyll dies or disappears. Dr. Jekyll remains in seclusion, even though Mr. Utterson visits him often. Finally, one evening, Dr. Jekyll’s butler visits Mr. Utterson at home, tells Utterson he is worried about his employer’s mental state and health, and says he is convinced there has been some sort of foul play. The butler persuades Mr. Utterson to return to Dr. Jekyll’s house, where they break into Dr. Jekyll’s laboratory. There they find Edward Hyde dead on the floor and Jekyll nowhere to be found. Utterson finds several documents written to him in the laboratory, and goes back home to read what he later finds out are Dr. Lanyon’s narrative and Dr. Jekyll’s narrative, which turn out to be two parts of the same story about Mr. Hyde. These documents tell us that Dr. Jekyll was able to transform into Mr. Hyde by means of a potion that he created, and that as Mr. Hyde, he discovered a world of pleasure and crime. In his story, Dr. Jekyll writes that Mr. Hyde became more and more powerful and harder to control, until in the end the dominant personality beat out the weaker one.

“I guess we’re all two people. One daylight, and the one we keep in shadow.”

— Bruce Wayne/Batman, Batman Forever

That is a very basic summary of all the important plot points in the story, but it is the two people inside one body that you most likely recognize. In today’s popular culture, this story makes itself known very frequently, and all these examples stem from the original “split personality story,” The Strange Case of Dr. Jekyll and Mr. Hyde! A few current examples of this story in today’s popular culture are:

The Hulk, also referred to as The Incredible Hulk, is a character from the Marvel Comics universe created in comic book form in 1962. The nuclear physicist Dr. Robert Bruce Banner is caught in the blast of a gamma bomb that he created. This nuclear blast creates an alternate personality/physical distortion within him named the Hulk: a giant, green, angry monster. The character, both as Banner and as the Hulk, is often pursued by police or armed forces, usually because of the destruction the Hulk causes. The powerful and monstrous emotional alter ego of an emotionally repressed scientist, who comes forward whenever Banner experiences emotional stress, is an example of the Jekyll and Hyde motif. While the Hulk usually saves the day, seeking mostly to protect, his terrifying nature drives Bruce Banner into isolation, much like Jekyll, fearing discovery.

Stevenson’s book was also the inspiration behind Two-Face, a villain created in 1941 for the Batman comic book series. Harvey Dent, an upstanding citizen and DA, was horribly scarred on one side of his body and traumatized in a warehouse fire set by The Joker. This caused his formerly repressed “Hyde” personality to emerge. The two personalities come into direct conflict often and make decisions they are split on using the outside moderator of a flipped coin.

Bane is another character from the DC Comics universe and another villain from the Batman comic series. Shrouded in mystery, Bane appeared in Gotham City with the one goal of eliminating Batman once and for all. Besides being a man of great physical size and power, Bane’s strength is augmented by “Venom,” a super-steroid that increases his strength, physical size, and durability for limited periods of time. Much like Dr. Jekyll turns himself into Hyde using a potion, the Venom potion injected into Bane’s body is also his weakness: when the supply of the chemical is cut, he goes back to normal and loses his powers.
I also see a huge parallel between Jekyll and Hyde and the most iconic movie villain of all time, Darth Vader. Just like Dr. Jekyll, Anakin Skywalker has his alter ego. In Episode V, Yoda tells Luke Skywalker, “Anger, fear, aggression; the dark side of the Force are they. Easily they flow, quick to join you in a fight. If once you start down the dark path, forever will it dominate your destiny, consume you it will,” just as when Jekyll first transformed into Hyde he felt the urge to do it again and again, until finally he lost control over the transformation and ended up as Hyde permanently. Similarly, Anakin Skywalker first tastes the power of the dark side when he kills an entire camp of Sand People to protect his mother; this starts his fall to the dark side and his eventual transformation into Darth Vader.

Another Marvel Comics supervillain was named after and based on Mr. Hyde. Calvin Zabo, born in Trenton, New Jersey, was a morally abject yet brilliant medical researcher who was interested in the effect of hormones on human physiology. One of his favorite books was The Strange Case of Dr. Jekyll and Mr. Hyde. He was convinced that the experiment in the book could actually be performed and became obsessed with the idea of letting loose his full beast-like nature in a superhuman form. He was eventually successful in creating the formula, and turned into a huge, Hulk-like creature he named “Mister Hyde.”

The character of Jekyll and Hyde can also be seen in Alan Moore’s comic book The League of Extraordinary Gentlemen. In the comic, an interesting team of crimefighters, made up of famous characters from classic literature, fight crime in Victorian London. In the issues, Hyde is very strong and has a Jekyll persona, whereas in the novel, Jekyll has a Hyde persona.
Sometimes in film, television, literature, or theater, a character and his evil twin, evil counterpart, or shadow archetype (all different titles for the same type of character) are really the same person in the end, or a completely different character is sharing body space with another. The point is, the villain sometimes lives inside the hero’s body, therefore hiding in plain sight. For the entire story, the hero is trying to catch himself, which has inspired many of the detective stories you read today. You can also see this idea in many different pop culture examples. If the two personalities are aware of each other, it becomes a case of “Gollum Made Me Do It.”

A character has another personality to keep him company, but the other personality isn’t exactly a model citizen. However, he is… persuasive. The character often finds himself being bullied or forced into following his darker side’s advice, even if it’s advice he wouldn’t have followed normally.

The Hyde personality’s crimes are outside of Jekyll’s control, and often the character is unable to stop himself from becoming “evil”; this is often a case of being Driven to Villainy.

Sometimes the villain is just a normal person who is brought into villainy against his own will. Don’t confuse this with mind control or possession; it’s because he has been warped by events happening around him and forced into villainy by forces outside his control. A broken shell of a human being, the only thing left is insanity.

Sometimes they’re not really evil, and occasionally this can be resolved with a Split-Personality Merge that reconciles both sides into a healthy whole.

There are many possible reasons for the existence of these split personalities, but this cohabitation is rarely peaceful or long-lasting. It usually results in a battle within the central mind to find out which personality will take over. Sometimes the winning personality does not reduce the loser to a small, powerless voice but instead offers to become one again; they merge into a single, whole person that is greater than the sum of its minds.

Also, the Jekyll side isn’t necessarily “good” either. This trope comes, of course, from The Strange Case of Dr. Jekyll and Mr. Hyde, by Robert Louis Stevenson. It used to be a twist ending, but it no longer surprises anyone, and most adaptations of the work focus on said twist. The real-life example of Deacon Brodie is said to have inspired Stevenson: William “Deacon” Brodie was a Scottish cabinet-maker, deacon of a trades guild, and Edinburgh city councillor who maintained a secret life as a burglar. So did the story of Horace Wells, a pioneer of medical anaesthetics. While researching chloroform, Wells tested various dosages on himself. Because of this, Wells unknowingly built up a dangerous level of the drug in his system and ended up attacking two prostitutes with sulfuric acid during a drug-induced episode. Once he sobered up and learned what he had done, he committed suicide.

Along with many comic book characters, there are examples of Jekyll and Hyde’s story in one of the most popular shows of the past few years, American Horror Story. American Horror Story (AHS) is a show that uses many of the important details that make up The Strange Case of Dr. Jekyll and Mr. Hyde across its many seasons. In season one, titled Murder House, there is a character named Dr. Charles Montgomery who is a “surgeon to the stars” and the original builder of the “murder house.” In the series, his character is technically a ghost, but we do get flashbacks to when he was alive. The Jekyll and Hyde connection is that the doctor becomes addicted to the drug ether and starts to lose his mind and kill his patients without realizing it. He is later shot and killed by his wife after he tries to stitch their dead and dismembered son back together Frankenstein-style.

In season five of American Horror Story, titled Hotel, there is another Jekyll-and-Hyde-like character and storyline: the Ten Commandments Killer. Season five basically revolves around an LAPD detective named John Lowe, played by actor Wes Bentley, trying to hunt down the Ten Commandments Killer.

Now, before I continue with the storyline and the connection to Stevenson’s novel, let me explain the story of the Ten Commandments Killer and his MO. The original Ten Commandments Killer was a man named James March, designer and owner of the Hotel Cortez (the main setting for the entire season), which opened on August 23, 1926. James Patrick March was born in 1895 and started killing people in 1920. He was described as a man of new money, and he decided to build and open a grand hotel to make it easier to kill people without getting caught. He built many secret rooms and hallways into the hotel to allow for more killing, and he used the hotel’s infrastructure to hide all the evidence of his crimes. His wife Elizabeth knew all about his murdering and actually enjoyed the sounds of his victims’ screams, so she encouraged his dark habit. There are many gruesome details to the murders he committed, but most of his early murders in the hotel were carried out in playful, thespian-esque ways. The actual Ten Commandments killing started when March explained to one of his victims that he despised religion and that it was the worst thing in the world. March said he was going to have to kill God, because as long as there was a God, men like himself would never find peace. His hatred of religion is what gave him the motivation to collect all the Bibles from the hotel nightstands and arrange them with a pile of his victims to leave behind for the police; this is where the Ten Commandments murders started. But on February 25, 1930, an anonymous phone call tipped off the police, and they came to the Hotel Cortez to arrest March. Before the police could arrest him, however, he killed his servant and slit his own throat, leaving the Ten Commandments murders unfinished. March, along with all of his victims and numerous other victims of the hotel, is trapped in the hotel as a ghost; these ghosts appear to guests and interact as characters in the show.

This is where the character John Lowe comes into play in the show. As previously stated, John is an LAPD officer trying to solve the case of the Ten Commandments Killer, but in 2010 John visited the Hotel Cortez on a drunken night, and the ghost of James March saw potential in him to finish his work as the Ten Commandments Killer. It wasn’t until 2015 that John finally agreed to complete the murders, and this is where the season begins. Each murder symbolizes one of the Ten Commandments; for example, the first murder is “Thou Shalt Not Steal” and the victim is an infamous thief. For each murder, something is taken from the victim and placed in a glass jar in Room 64 of the Hotel Cortez, so for the first murder the thief’s hand is cut off. James March was able to complete two of the ten murders in 1926, and John Lowe finished off the other eight in 2015. The connection I see to Jekyll and Hyde in this whole story is the fact that John has no recollection of committing any of these murders, or even of his first time at the Hotel Cortez in 2010. It isn’t until the second-to-last episode that John finally remembers that he has been doing all this; he has a psychotic break and is eventually killed by the SWAT team in the last episode. When watching the season, you can actually see a physical change in John as more and more of the Ten Commandments murders happen: his eyes sink in, he becomes pale and loses weight, his clothes are wrinkled, and he looks more and more physically exhausted with each episode. By the final episode his appearance has deteriorated this far because his good personality is losing strength as his evil, murderous personality slowly takes over and kills more people.

There is a scene where Detective John Lowe suddenly remembers all the murders he has committed as the “Ten Commandments Killer” that he has been so desperately searching for at his day job in the police force. Along with the story of Jekyll and Hyde inspiring so many different movie and television characters and plot schemes, the 1931 film version of The Strange Case of Dr. Jekyll and Mr. Hyde made movie history with its incredible, never-before-seen on-screen transformation. Fredric March, the actor who played Jekyll and Hyde in the movie, actually won an Academy Award for his performance in the film. Film directors and makeup artists everywhere wanted to know the secret behind the scene, but it wasn’t until 1970 that director Rouben Mamoulian described how it was done: with colored makeup and matching colored filters, which were removed or added to the scene to change March’s appearance. Since the film was in black-and-white, the color changes didn’t show.

All in all, The Strange Case of Dr. Jekyll and Mr. Hyde by Robert Louis Stevenson has had a HUGE influence on popular culture since its first publication in 1886. You can see its influence in television, movies, horror makeup, comic books, theater, and so much more. This storyline is here to stay and will probably be influencing popular culture for generations to come.

Franklin D. Roosevelt’s heroism

Villainification is the process of singling out individual actors as the faces of systemic harm, with those hyper-individualized villains losing their shared characteristics. Like heroification, it is a simplified portrayal of historical actors, but villainification has particularly harmful consequences: it obscures the way evil operates through everyday actions and unquestioned structures by focusing on the whim of one person. Although it is unfortunate that we do not often see how we can inadvertently help others and make systemic change, it is more disconcerting when we fail to look at our own part in the suffering of others. In this paper, I will try to unravel the heroism of Franklin D. Roosevelt, who as President of the United States served through the Great Depression and the Second World War and received the “hero” treatment.

Franklin D. Roosevelt was elected during the height of the Great Depression in 1932 and remained President until his death in 1945. During his presidency, he oversaw an expansion of the federal government and helped America shed its isolationist stance as it joined the Second World War and helped formulate the United Nations. He was an influential figure in both American and world politics.

Roosevelt came from a privileged background but was influenced by his headmaster at Groton School in Massachusetts, who taught the importance of Christian duty in helping less fortunate people.

Franklin married a distant cousin, Eleanor, in 1905. They had six children in quick succession, two of whom went on to be elected to the House of Representatives. FDR had several affairs outside of his marriage, including with Lucy Mercer, Eleanor’s social secretary. Eleanor offered a divorce at one point, but for a variety of reasons it was not taken up. She later became a dedicated wife and nurse during Franklin’s disability brought on by polio.

When FDR was elected president in 1932, America was facing an unprecedented economic crisis: unemployment was approaching 25%, and government unemployment relief was insufficient at the time. There was real financial desperation, and many classical economists were at a loss as to how to respond.

To some extent, FDR pursued an expansionary fiscal policy as advocated by John Maynard Keynes. The government borrowed, levied a national income tax, and spent money on public works (a program known as the New Deal). This period also marked a shift in power from local governments, which could not cope, to the national government. Roosevelt also helped introduce legislation protecting workers’ rights. The New Deal in no way solved the economic crisis, but it did mitigate some of the worst effects, creating employment and eventually kick-starting the economy. By the end of the 1930s, some sectors of the economy, such as construction, were booming.

FDR was keen for America to become a good citizen of the world and fight for individual freedoms. However, in the early 1940s America still retained a powerful isolationist streak, and he campaigned for re-election promising to stay out of the Second World War, despite his dislike of Nazi Germany. The bombing of Pearl Harbor in December 1941 completely changed the outlook of America. FDR wasted no time in declaring war on Japan, and then on Germany as well.

“In these days of difficulty, we Americans everywhere must and shall choose the path of social justice…, the path of faith, the path of hope, and the path of love toward our fellow man.” ~ Franklin D. Roosevelt

Once America had entered the war, it entered whole-heartedly into both arenas: the Pacific and Europe. In the D-Day landings of 1944, America supplied roughly two-thirds of the troops. Roosevelt was an astute Commander-in-Chief. In particular, he was able to identify generals with genuine talent and promote them to key roles. As Roosevelt said himself:

“I’m not the smartest fellow in the world, but I can sure pick smart colleagues.”

In particular, FDR promoted Dwight Eisenhower and George Marshall – both to play critical roles during the Second World War.

Roosevelt’s real political skill lay in his powers of communication and identification with ordinary people. His radio fireside chats were instrumental in building confidence with the American people, both during the Great Depression and during the Second World War.

“This great Nation will endure as it has endured, will revive and will prosper. So, first of all, let me assert my firm belief that the only thing we have to fear is fear itself — nameless, unreasoning, unjustified terror which paralyzes needed efforts to convert retreat into advance.” – 1933

Roosevelt had a close relationship with Winston Churchill, and there was great mutual admiration between the two. At one point Roosevelt said, ‘It is fun being in the same decade as you.’

Roosevelt, together with Churchill and Stalin, made up the Big Three, who helped lay the foundations for the post-war period, including the setting up of the United Nations, a successor to the League of Nations.

Roosevelt died unexpectedly from a massive brain haemorrhage in April 1945, just before the first meeting of the United Nations. His death stunned the world, and he was remembered as a champion of freedom and a man of humanity and optimism.

I’ve never understood the reverence for Franklin Delano Roosevelt. He gets points for picking great generals and leading this country to victory in the Second World War. But he badly mismanaged the economy: during the recession of 1937 unemployment reached 19% (the Great Depression high was 25%), his freedom-sapping policies never did get this country out of the Great Depression, and don’t forget that he tried to circumvent the constitutional separation of powers (now who does that remind me of?). And then there is the issue never discussed: he was a bigot. His hatred of Jews caused thousands to be added to the ranks of Hitler’s victims, and his hatred of Asians convinced him to put Japanese Americans into internment camps.

Some point to the fact that he didn’t bomb and destroy the train tracks that were shipping Jews to the concentration camps, but my opinion sides with the people who say that wouldn’t have worked. The real questions to be explored are why he didn’t allow more Jews into the country, and why he didn’t pressure Britain to let Jews move from Nazi-controlled areas into what was then called Palestine.

In the book “FDR and the Holocaust: A Breach of Faith,” historian Rafael Medoff suggests that Roosevelt failed to take relatively simple measures that would have saved significant numbers of Jews during the Holocaust, because his vision for America was one with only a small number of Jews in it. In other words, FDR doomed many Jews to suffer not because he wanted them to die, but because he didn’t want a lot of them living in his neighborhood.

Loewen argues that this heroification enables readers and teachers to overlook the conflicts that a full reading of historical narratives would reveal and that would bring in other points of view. The heroification process is done to make textbooks more appealing to school districts and to present an artificially exceptionalist view of American history. At the same time, heroification encourages students to assume a passive role in constructing the next wave of American social and historical dynamics. If all that students read about is heroes, it creates the mentality that there is nothing left to do, and this enables those in positions of power to continue doing what they do without any questioning or in-depth analysis.

Joseph Paul Franklin

White supremacy is a form of vile racism in which white people are perceived as superior to all other races in every physical, mental, social, economic, and political aspect. This repugnant mindset dates back centuries in United States history, but unfortunately still exists in the minds of people today. White supremacy is clearly very wrong, but it is important to be aware that it can be especially dangerous when taken up by the mentally ill. Joseph Paul Franklin used white supremacy as a stimulus for unethical, malicious, and remorseless actions that led to the deaths of at least 15 people in 11 different states. (FBI, 2014) Franklin’s three-year killing rampage was motivated by his “pathological hatred of African Americans and Jews.” (Montaldo, n.d.) Joseph Paul Franklin was a perfect example of how abusive households can lead to serious psychological issues such as mental illness, which in turn can lead to extreme violence.

James Clayton Vaughan was his birth name. Born into a poor family in Alabama, Franklin was physically abused by both of his parents throughout his entire childhood. He once told investigators, “My momma didn’t care about us,” and stated that he and his three siblings were not fed properly or “allowed to play with other children.” (Nye, 2013) While in high school in the 1960s, Franklin became interested in southern white supremacist groups and went on to become an active member of the Ku Klux Klan (KKK), the American Nazi Party, the National States Rights Party, and the National Socialist White People’s Party. His interest in these groups started when his obsession with evangelical Christianity and Nazism took off in his early high school years. Franklin changed his name in 1976 when he wanted to join the Rhodesian Army but couldn’t due to his criminal record. He chose “Joseph Paul” in honor of Paul Joseph Goebbels, Adolf Hitler’s minister of propaganda, and “Franklin” in honor of the US founding father Benjamin Franklin. He never ended up joining the army, and instead started a war against every minority that he could get his hands on. (Montaldo, n.d.)

Franklin became more and more aggressive toward minorities as he got older, to the point where he “rejected the most radical hate groups because he didn’t think they took their hatred far enough.” (FBI, 2014) He felt that sitting around and complaining about the supposedly “inferior” races wouldn’t do any good; he thought it was more effective to actually go out and kill them. He was constantly looking for opportunities to “cleanse the world” of races that he felt were inferior. Blacks and Jews were the primary groups that Franklin went after, and he considered interracial couples to be even worse. (FBI, 2014)

Franklin was born on April 13, 1950. He was a high school dropout, and had a daughter after getting married in 1969 at the age of 19. (FamPeople, 2012) He became an abusive husband and got a divorce not long after. (FBI, 2014) The abuse he inflicted on his family is a direct result of the physical abuse he faced as a child. Child abuse has a direct relationship with mental health, and can be the cause of many kinds of mental illness. (Szalavitz, 2012) Franklin’s actions were inexcusable but can definitely be linked to the abuse he endured as a child. Franklin was treated as inferior throughout his entire upbringing, and he transferred this energy from pain into hate, using white supremacy as an outlet for his hatred. His obsession with hate allowed him to feel superior to other races; it was probably the only thing that ever allowed Franklin to feel superior to anyone.

Since Franklin was a high school dropout, he couldn’t hold a stable job. To keep himself afloat, Franklin robbed multiple banks up and down the East Coast. In between robberies, Franklin sold his blood and sold or traded guns. (FamPeople, 2012) He spent most of his time plotting to kill minorities as well as interracial couples. His killing rampage began in 1977 at the age of 27, and ended in 1980 when he was arrested at the age of 30. (FBI, 2014) He has been linked to or associated with many murders, some of which he was never arrested for or convicted of. He confessed to the murders of 20 people, some of which confessions are believed to be untrue. (Montaldo, n.d.) This is one of the many reasons that defense lawyers claimed Franklin was a “paranoid schizophrenic” who was not fit to stand trial. (BBC News, 2013)

Franklin was officially convicted of nine murders, and was a suspect in another twelve. Eight of these convictions resulted in a life sentence; however, in 1997 Franklin was sentenced to death by lethal injection by the state of Missouri for the 1977 murder of Gerald Gordon. (Vitello, 2013)

Gordon’s murder was just one of Franklin’s attacks on a synagogue. He chose synagogues as a primary target for the single purpose of killing Jews. Gordon’s death occurred on October 8, 1977 in Missouri. Franklin fired five shots at Gerald Gordon and a man named William Ash while they were walking through the synagogue parking lot, killing Gordon and injuring Ash with a Remington 700 hunting rifle. He was sentenced to death the following February. (Montaldo, n.d.) Franklin told investigators that he selected this synagogue at random. (Vitello, 2013) He also said that his primary goal that day was to “find a Jew and kill him.” (Nye, 2013) Franklin bombed another synagogue in July 1977, located in Chattanooga, Tennessee. Unlike the Missouri attack, nobody was injured that day. (BBC News, 2013)

Franklin did not confess to Gordon’s murder until 17 years after the incident, while in a prison cell talking to an investigator. (Vitello, 2013) This is just one of many instances where Franklin’s story changed, and that is the primary reason the courts were unable to convict him of some of the other crimes he supposedly committed. Some of the 22 murders that he confessed to have never even been brought to court because of a lack of evidence. (Montaldo, n.d.) Franklin also robbed about 16 banks in order to “fund his activities.” (BBC News, 2013)

Franklin was a drifter who often “floated up and down the east coast” planning his next attack. He carried a sniper rifle, and his main targets were “MRCs,” or mixed-race couples (FamPeople, 2012).

His most well-known crime against an interracial couple was the attempted murder of Larry Flynt, the publisher of Hustler magazine (Vitello, 2013). Franklin went after Flynt because the cover of the December 1975 issue of Hustler showed an interracial couple having sex. Franklin told CNN, “I saw that interracial couple he had, photographed there, having sex,” and went on, “It just made me sick. I think whites marry with whites, blacks with blacks, Indians with Indians. Orientals with Orientals. I threw the magazine down and thought, ‘I’m going to kill that guy’” (Nye, 2013). This quote shows Franklin’s extreme, obsessive hatred of interracial couples and how it correlates with his mental instability. Anyone who feels the need to murder people because of their skin color, or the skin color of their partners, clearly is not stable or safe enough to move freely through the world. Franklin’s freedom was a threat to the life of every nonwhite person in the country.

Franklin’s psychiatrist, Dorothy Otnow Lewis, was one of a few people who testified that he was unfit to stand trial. Lewis stated that he was a delusional thinker because of the abusive childhood he endured; one example of this irrational thinking was his claim that God wanted him to “start a race war” (FamPeople, 2012). The court nonetheless convicted him of his crimes, sentenced him to death, and held him on death row in Missouri. Clearly, Franklin was thinking straight enough to plan his attacks and his escapes ahead of time, and he was able to evade law enforcement for years. His escape methods included dyeing his hair, changing clothes, and changing vehicles; he would plan his escape routes in advance and make sure he left no evidence (FamPeople, 2012). By 1980, however, the FBI was closing in on Franklin. In September of that year, a Kentucky police officer noticed Franklin’s car, and a records check turned up an outstanding warrant; he was brought in for questioning and detained. He escaped detainment but was recaptured not long after. Franklin was finally caught for good in 1980 when a nurse who was drawing his blood recognized an eagle tattoo on his arm and called the police (FamPeople, 2012).

Another of Franklin’s attacks on an interracial couple occurred in Madison, Wisconsin. As Alphonse Manning and Toni Schwenn pulled out of a shopping mall parking lot, Franklin crashed into their car from behind, got out, and shot both 23-year-olds to death (Montaldo, n.d.). Another instance occurred in Cincinnati, Ohio, on June 6th, 1980, when Franklin stood on an overpass waiting for an interracial couple he had planned for and knew should eventually pass by. While waiting, Franklin grew impatient and instead shot cousins Darren Lane, age 14, and Dante Brown, age 13, as they walked into a convenience store. Both children died, and Franklin was given two life sentences (Montaldo, n.d.). This instance shows Franklin’s short temper and yearning for violence: he shot two innocent children simply because he was getting impatient. That alone suggests true mental illness, because no one in their right mind would be waiting on an overpass to commit murder in the first place. His impatience and reliance on violence show mental instability by themselves, and his extreme racism and obsession with white supremacy multiplied the danger he posed to everyone around him.

Larry Flynt was paralyzed from the waist down after Franklin attacked him. Flynt, however, did not believe in the death penalty and actually fought against Franklin being put to death. “The government has no business at all being in the business of killing people,” Flynt stated, adding, “It’s much more punishment to put somebody in prison for the rest of their lives than it is to snip their life out in a few seconds with a lethal injection” (Nye, 2013). Oblivious to the fact that Flynt was not trying to help him personally, Franklin referred to Flynt as an “old pal” because of his opposition to the death sentence. Franklin’s mental instability is evident here: he seems to have believed that Flynt opposed the death sentence not as a matter of principle, but because Flynt was somehow now on his side.

On May 29th, 1980, Franklin was charged with the attempted murder of the African American civil rights leader Vernon Jordan (BBC News, 2013). He committed this crime after seeing Jordan, who was black, with a white woman in Fort Wayne, Indiana (FamPeople, 2012). He had previously threatened to kill President Jimmy Carter and Jesse Jackson for their pro-civil-rights views, but when he realized that the security protecting those two men was too tight, he went after Vernon Jordan instead (FamPeople, 2012). Franklin was clearly an impatient, impulsive character who acted on arbitrary, unethical reasoning. Franklin’s sister informed investigators that he was the target of the majority of the abuse in their dysfunctional household, and she added that he used to read fairy tales to escape the domestic abuse he endured daily (Montaldo, n.d.). This was surely one of the main roots of Franklin’s evident mental illness; he used white supremacy as an outlet for his prolonged childhood anger and frustration.

On July 29th, 1978, Franklin shot Bryant Tatum and his girlfriend, Nancy Hilton, with a 12-gauge shotgun simply because they were an interracial couple. The attack happened at a Pizza Hut in Chattanooga, Tennessee; Tatum was killed, Hilton survived with injuries, and Franklin was given a life sentence (FamPeople, 2012). On July 12th, 1979, Franklin shot Harold McIver, a 27-year-old black man, through a window, killing him. McIver was a manager at a Taco Bell in Doraville, Georgia, and, according to Franklin, came in close contact with white women; Franklin therefore felt it was his responsibility to murder the innocent man (FamPeople, 2012).

One of the most outrageous parts of Franklin’s criminal history is that he committed these horrible crimes because he thought he was doing his job. He once told CNN investigators, “I consider it my mission, my three-year mission. Same length of time Jesus was on his mission, from the time he was 30 to 33” (Lah, 2013). When asked to clarify what his mission was exactly, he replied, “To get a race war started” (Lah, 2013). Franklin thought it was his responsibility to brutally murder every person who was black, Jewish, or in an interracial relationship. On June 25th, 1980, Franklin killed Nancy Santomero, age 19, and Vicki Durian, age 26, with a .44 Ruger pistol. Both women were hitchhiking in Pocahontas County, West Virginia, at the time. Franklin confessed to the crime in 1997 but felt he had done what was necessary (FamPeople, 2012). Both women were white, but he decided to murder them once he heard one of them say that she had a black boyfriend. Jacob Beard, a Florida resident, had been wrongly convicted and imprisoned for these murders; in 1999 Beard was freed and a new trial was held. Franklin was then correctly convicted of the crime and given a life sentence (FamPeople, 2012).

Franklin confessed to almost all of the murders he committed because he felt he was doing right by his people. After abandoning even the most extreme white supremacist groups because he felt they were not radical enough, he went on to commit these crimes in the belief that other white supremacists would follow him. He told reporters, “I figured once I started doing it and showed them how, other white supremacists would do the same thing” (Nye, 2013). He claimed that after his attacks he had members who loved him, telling investigators, “When you commit a crime against a certain group of people, a bonding takes place. It seems like you belong to them” (Nye, 2013). This sick sense of family that Franklin got from white supremacist groups was probably more closely knit than anything his blood relatives offered him at home, and it is most likely what drew him so deep into the racist movement.

Franklin shot and killed 15-year-old prostitute Mercedes Lynn Masters on December 5th, 1979. He had been living with her in DeKalb County, Georgia, but decided to kill her when she told him that she had previously had black customers. Franklin committed two more murders on August 20th, 1980, killing two black men, Ted Fields and David Martin, near Liberty Park in Salt Lake City, Utah. He was charged with first-degree murder, convicted, and given two life sentences; he was also tried on federal civil rights charges. These instances, along with many others, are examples of the sick, twisted things that went on in Franklin’s head. His mental illness was evident, and his merciless actions are what made him so dangerous.

It was also evident that Franklin was completely self-centered and delusional. His reference to Flynt as an “old pal” and his comparison of himself to Jesus are just two examples of how deranged the high school dropout was. Franklin even said that he hoped his killings would act as an example (Nye, 2013). The three-year mission he referred to ran from age 27 until his arrest at age 30. He told authorities that his only regret was that killing Jews was not legal, and he later told investigators that his only regret was that some of his victims had managed to survive (Montaldo, n.d.). Franklin spent over 30 years in prison before he was finally executed. Not long before his execution, he claimed that he was no longer a white supremacist and had “renounced his racist views” (BBC News, 2013). He claimed he had “interacted with black people in prison” and stated, “I saw they were people just like us,” adding that he knew his actions had been illogical and were the result of “an abusive upbringing” (BBC News, 2013). Joseph Paul Franklin was sentenced to death on February 27th, 1997, and remained on Missouri’s death row until August 20th, 2013, when the State of Missouri set the date for his execution. Franklin was executed by lethal injection on November 20th, 2013, at 6:08 AM (Missouri Death Row, 2008); it took 10 minutes for him to be officially pronounced dead (BBC News, 2013). According to the jury, Franklin’s actions were the result of “depravity of mind,” better known as mental illness (Missouri Death Row, 2008). Mental illness can be a direct result of child abuse (Szalavitz, 2012), and the life, the actions, and the attitude of Joseph Paul Franklin are a perfect example of that.

Works Cited

BBC News. (2013, November 20). Joseph Franklin, white supremacist serial killer, executed. Retrieved from BBC News: http://www.bbc.co.uk/news/world-us-canada-25016217
FamPeople. (2012). Joseph Paul Franklin: Biography. Retrieved from FamPeople: http://www.fampeople.com/cat-joseph-paul-franklin
FBI. (2014, January 14). Serial Killers Part 4: White Supremacist Joseph Franklin. Retrieved from fbi.gov: https://www.fbi.gov/news/stories/2014/january/serial-killers-part-4-joseph-paul-franklin/
Lah, K. (2013, November 19). Serial Killer Joseph Paul Franklin Prepares to Die. Retrieved from CNN News: http://www.cnn.com/2013/11/18/justice/death-row-interview-joseph-paul-franklin/index.html
Missouri Death Row. (2008, December 9). State of Missouri vs. Joseph P. Franklin. Retrieved from Missouri Death Row: http://missourideathrow.com/2008/12/Franklin-Joseph/
Montaldo, C. (n.d.). Profile of Serial Killer Joseph Paul Franklin. Retrieved from About News: http://crime.about.com/od/hatecrimecriminalcases/a/josephfranklin.htm
Nye, J. (2013, November 19). Racist Serial Killer Shows No Remorse In Final Interview On Eve Of His Execution - Even Joking Larry Flynt, Who He Paralyzed, Is “Old Pal” For Campaign Against Death Penalty. Retrieved from Daily Mail: http://www.dailymail.co.uk/news/article-2509759/Joseph-Paul-Franklin-shows-remorse-ahead-death-penalty.html
Szalavitz, M. (2012, February 15). How Child Abuse Primes The Brain For Future Mental Illness. Retrieved from Time: http://healthland.time.com/2012/02/15/how-child-abuse-primes-the-brain-for-future-mental-illness/
Vitello, P. (2013, November 13). White Supremacist Convicted of Several Murders Is Put To Death In Missouri. Retrieved from New York Times: http://www.nytimes.com/2013/11/21/us/joseph-paul-franklin-executed-in-missouri.html?_r=0

Why did Hitler target Jews?

One man in control of 65 million people during the 1930s is incredible in itself. But how admirable is that power, really, when it is used for what most people today consider evil? Adolf Hitler was a dictator in Germany who would become known for how fervently he believed in creating a perfect race.

Hitler was born in Austria and would eventually move to Germany, where he took office and began his extermination campaign in pursuit of the perfect race. Throughout World War II, and starting a few years earlier in 1933, Hitler was able to capture and kill millions of people. The group Hitler mainly targeted were the Jews, because he did not consider them part of the superior race, which in his opinion was the Aryan race. Jews were not the only victims of this massive genocide: anyone who was disabled, homosexual, or Romani was also in danger of being captured and taken to a concentration camp.

The Night of Broken Glass (Kristallnacht) can be seen as the day the genocide in Germany truly began, because people were being taken from their homes in mass numbers. In November 1938, the German diplomat Ernst Eduard vom Rath was murdered by a Jewish teenager, prompting police in Germany to begin entering houses and searching for any Jew who had weapons in their possession. Hitler saw the killing of this diplomat as a threat against the Nazis by the Jews, and so began the Holocaust.

For over a decade, millions of people were taken to concentration camps all over Europe. There can be no exact count of how many were captured and killed, since others may have been killed outside the camps or used for experimentation; the current estimate is 11 million people killed over a period of 12 years, 6 million of whom were Jews.

The goal of this research is not to focus on how Hitler governed Germany or on his political views, but rather to look at how he grew up, how he was able to capture and kill millions of people in pursuit of his perfect race, and why. The main question is: why did he mainly target Jews? For one person to control how roughly 65 million people lived their day-to-day lives is remarkable, but the way Hitler made those people live is hard to square with the fact that he was a very intelligent person.

Anti-Semitic views have existed since the time of Ancient Rome, which is striking to look back on, because after all these years there still seems to be prejudice against Jewish people. While Jews are not the only group that has faced prejudice or discrimination, their treatment during the Holocaust had a tremendous impact on the history of the world. Though not directly comparable to the slave trade of the sixteenth through nineteenth centuries, the Holocaust still astonishes people because of the way it was executed.

Adolf Hitler was the leader of the Nazi Party in Germany during the 1930s and 1940s. Once he rose to sufficient power, he began to order the extermination of the Jews. Hitler is responsible for one of the most infamous genocides known in history because of the number of people he was able to murder from 1933 to 1945. In history class, students are taught about WWII and how Germany’s defeat ended the Holocaust; what many never wonder is why he did it. The number of people murdered by Hitler and the Nazis is still not exact, because the victims included not only Jews but anyone deemed inferior to Hitler or his Aryan race. As mentioned before, people with physical or mental disabilities were also taken to concentration camps because, in his view, they could destroy the perfect race.

Starting in childhood, Hitler suffered physical abuse at home; his father would beat him, often after Adolf found ways to taunt him and make him angry. Through it all, Hitler’s mother would comfort him and make sure he was okay, because like most mothers her instinct was to keep her child from being hurt. While this may not be a direct cause of Hitler’s later goal of exterminating the Jews, it can be part of the reason many considered his views insane: this instability at home seems to have caused instability within himself and in the feelings and affection he could have toward other human beings.

As Hitler grew up, it became evident that he never cared for schoolwork and would much rather learn as much as he could about art and music. According to Hitler’s sister, he was a student who brought home bad grades and did not much care about the consequences he would face with his parents, especially his father. Eventually, while his mother was gravely ill with breast cancer, he moved to Vienna to pursue his dreams. Her illness caused him great devastation, but the move seemed like a great opportunity to follow his dreams and pursue a career in the arts.

Hitler’s goal was to get into the Vienna Academy of Fine Arts and become successful in Vienna, the city where many artists made their names, so being told that his work was not good enough for the school filled him with anger. Hitler had always been very confident in what he did, and failing to get into his dream school genuinely shocked him. According to many sources, when he went back for an explanation of why he had not been accepted, he was told that his art lacked “human form” and that his work would be better suited to an architecture school. While that does not sound like a bad suggestion, to him it was horrendous: he had not finished high school, and admission to the architecture schools required a high school diploma.

While in Vienna, Hitler applied twice to the Vienna Academy of Fine Arts and was rejected twice. Many people believe that during his time there Hitler began to develop a hatred of Jews, since Vienna at the time had a large Jewish population. His anti-Semitic views might have stemmed from there, but there is no definitive explanation. According to one source, a childhood friend stated that Hitler was anti-Semitic even before he left Austria to pursue his dreams; but like many other sources that describe when Hitler became this way, it fails to explain why.

While there may be no definitive account of how Hitler came to his views, sources can introduce new ideas and theories about how he thought. During the 1930s, Hitler was perceived as a very important figure by Germans because he helped stabilize the economy after Germany’s loss in World War I. According to unemployment figures, 6 million Germans were out of work in 1933, but after Hitler took power the number fell to about 300 thousand by 1939.

Hitler was a very intelligent man; as mentioned before, he was even named Time magazine’s Man of the Year in 1938. But when Hitler came to power in Germany, his anti-Semitic views were already in play. According to a book published in Germany, November 9: How World War One Led to the Holocaust by Joachim Riecker, Hitler believed that the Jews did not care enough about Germany winning World War I. Riecker goes on to describe how Hitler believed that the Jewish people in Germany had ruined the government and its economy over time, with World War I merely the final push toward finishing off the country. While this theory seems like a bit of a stretch, it is not implausible as a source of his hatred of Jewish people; but Hitler was simply wrong to blame the Jews as the group chiefly responsible for Germany’s fate in the First World War.

According to a German census, the majority of people living in Germany around 1910, a few years before World War I, were either Catholics or Protestants. Most of Europe was mainly made up of these two religious groups, so to single out the Jews as the main participants in the First World War is incorrect. While there were certainly Jews who took part in the war, the Jews as a whole were not to blame, which shows that this was not a valid basis for Hitler’s anti-Semitic views.

Many of the sources analyzed thus far mention instances in which Hitler found an excuse to express his dislike of a Jew.

How Significant A Role Did Britain Play In The War Against Germany?

World War Two was the most devastating war in history. It was a battle of ideologies: Germany fought for control of Europe, while the Allies, Britain, America, and Russia, fought for freedom. The only way to crush an ideology was total war, a devastating method of warfare that killed an estimated 55 million civilians; the war ended the lives of 3% of the world’s population at the time. While all the Allies suffered casualties, the Russians lost 29 million people on the Eastern Front, whereas Britain and America lost 870,000 people combined, only 3% of the Russian deaths. With Russia taking Berlin and absorbing most of the deaths on the Eastern Front, was Britain significant in the defeat of Nazi Germany?

When war broke out, Germany swept through Europe in the Blitzkrieg, rapidly gaining military control of country after country. The fall of France on June 14th, 1940, left Britain a sole island nation fighting Nazi Germany. As an island, Britain relied on the sea for defence. The German Operation Sealion planned to land forces to capture Britain, and in order to transport troops safely, Germany needed to control the sea. At the same time, Britain was importing supplies across the Atlantic from America, which kept it alive through the war. The need for control of the sea was underpinned by the looming German threat and by the necessity of trade between the Allies. Because Britain needed to import weapons and supplies from America, German attacks on these trade routes began the Battle of the Atlantic. Fought from 1939 until the end of the war in 1945, it was the longest battle of WW2, and victory would ensure the survival of Britain. Germany attempted to cripple the British navy with near-undetectable U-boats, which sank thousands of military and trade ships in an attempt to weaken the British navy and starve Britain into surrender. But for the British, the sea was too important to lose. At the beginning of the war there were no reliable methods for avoiding U-boats, so Allied ships were at the mercy of luck, so much so that Winston Churchill said: “the only thing that really frightened me was the U-Boat peril”. But by 1941 the Enigma code had been cracked; Britain now knew where U-boats were headed and could steer convoys away from danger, saving 105 out of 174 convoys between May 1942 and May 1943. Furthermore, technological advancements led to the depth charge, which helped the British combat the U-boats, while the Hedgehog anti-submarine weapon destroyed many more German naval resources.
This kept trade between Britain and America going, ensuring that vital goods like food and munitions reached Britain and kept it alive. Britain’s contribution to the war at sea was of considerable importance, as it led to naval dominance in the Atlantic. If Germany had controlled the Atlantic, the D-Day invasion would have been nearly impossible to bring to fruition; defeat in the Atlantic meant almost certain defeat for Britain and its resistance. It would also have weakened Russia’s defence in the east, since the campaign against the U-boats forced Hitler to draw resources away from the Eastern Front, where he desperately needed them.

Britain not only had to fight the German navy; it had to compete with the German air force. With the invention of modern aircraft, factories and towns could be destroyed by bombers. Germany planned to cripple the British air force, allowing it to destroy the ports and launch a full-scale invasion; to stop the Germans, Britain had to control the air. The war began poorly for Britain, marred by the defeat at Dunkirk and the evacuation of 343,000 soldiers from the beaches of France. It was a complete military failure: the British lost 1,954 artillery pieces and 615 tanks, left to be captured or destroyed by the Germans. Yet it was a symbolic success for Britain; the boats of the British saved the soldiers and gave rise to the resilience that came to be known as ‘Dunkirk spirit’. This was integral in allowing the British to persevere through the Battle of Britain, which signified the end of the phoney war, the period in which the British were at war with the Germans but did not fight. The Germans planned to invade Britain, and Hitler’s generals were worried about the damage the Royal Air Force could inflict on the German army during the invasion. Because of this, Hitler agreed that the invasion should be postponed until the British air force had been destroyed. The German campaign objective became gaining air superiority over the RAF, especially Fighter Command. They began by bombing aircraft bases across Britain. This was less effective than the Germans had hoped: Britain had built up its air defences since 1936 under Air Chief Marshal Sir Hugh Dowding, the widespread use of radar alerted the RAF to incoming Luftwaffe raids and allowed a quick defence, and Britain was outshooting and outproducing the Germans. Unable to destroy all the air force bases, in September 1940 Germany shifted its targets and began bombing cities.
This was terrifying for civilians, claiming over 32,000 lives and injuring over 80,000 more, but it gave the RAF the chance to rebuild its planes. The British were able to put an end to the German air raids, and the Battle of Britain marked the first defeat of the German military. The defence against the Luftwaffe was integral to the survival of Britain, which in turn became the base for future attacks on Germany. Had Britain lost, it would have fallen, and the base for the D-Day operations would have been under Nazi control. The victory ensured that Germany would have to fight a war on two fronts.

The success at the Battle of Britain also allowed Britain to launch aerial attacks on Germany alongside the USA, attacks that continued throughout the war. It was a controversial tactic: British aerial attacks were not very accurate, with only 1 in 100 bombs landing within five miles of its target, and the prediction that bombing cities would break German morale proved false. Carpet bombing, however, was extremely effective in large cities such as Hamburg, where it caused thousands of deaths and destroyed over 4,000 factories. The damage from these attacks crippled German industrial might and forced resources and troops away from the Eastern Front; two-thirds of German fighter planes had to protect German cities. The bombings also destroyed German coastal defences and made the D-Day plans possible, opening a second front against the already stretched Germans. However, Britain was not alone. America produced the most machinery during the war, building 300,000 planes and supplying both Britain and Russia with aircraft to cover their combat losses, as well as lending Britain money to build its own planes through Lend-Lease. The Americans also took the brunt of the losses in the bombing campaign because they bombed during the day to ensure they struck their targets; while this gave their missions a better success rate, it led to far more American deaths. The bombing campaign did not win the war, but it aided the invasion of Germany.

If Germany had not been invaded, the war would have continued; to destroy the Nazi forces, Berlin would have to be captured. All three Allies opened fronts against the Germans in the east and west, with Russia suffering the most casualties at 29 million. On land, Britain made two major contributions to the war. The first was the North African campaign against the Afrika Korps led by Rommel. Britain had lost much of its territory to Rommel’s advance across North Africa in late 1942, but the British victory at the Battle of El Alamein in November 1942 was an important one for the British campaign in Africa, as it blocked Hitler’s access to the oil fields. The North African campaign was seen as insignificant by the Germans, but it led to the invasion of southern Italy and the fall of Italy as an Axis power. That was a large blow to Germany, which now stood against the combined forces of the Allied powers. However, Germany put few resources into the Africa campaign, with only four divisions under Rommel’s control.

The second contribution from Britain was D-Day, in which Britain helped retake France from the German army. On 6 June 1944, British forces landed on the beaches of Normandy in the biggest land campaign of the Western Front. Britain was instrumental in the planning of D-Day: it disrupted German intelligence, making Hitler believe the invasion would begin in France’s Pas de Calais region, 150 miles northeast of Normandy. Britain was also the launch point of the invasion; had Britain fallen earlier in the war, D-Day would have been impossible. However, Britain was not alone. In the initial invasion, British forces attacked only two of the five beaches and sent 14 divisions, compared to the USA’s 23. And by the end of the war, the number of British soldiers on the Western Front had decreased, while America’s had grown to 60 divisions.

But the aim of D-Day was to create a second front to draw German troops away from the Eastern Front, the largest theatre of the war, which claimed the lives of 29 million Russians, both soldiers and civilians. Total war was never more evident than in the east: when invading Russia, the Germans would kill soldiers who tried to surrender, captured Russians were executed, and German POW camps had policies of deliberate mistreatment of Russians that led to 3.5 million deaths. This brutality produced the most devastating battles: at the Battle of Stalingrad, 400,000 Russians died (more than the number of British casualties in the whole war); the Battle of Kursk caused 860,000 casualties; and the Siege of Leningrad lasted 872 days and resulted in the deaths of 1.5 million people. The scale of death suffered by the Russian people shows their resilience during the war, and the determination of the Red Army to win at all costs. After the Russian victory at the Battle of Stalingrad, Russia began a counteroffensive and started to push the Germans out. Once the Eastern Front had moved past the Soviet border, the new goal was to reclaim the Baltics and bring communism to Eastern Europe. The Red Army pushed the Germans back, slowly weakening the German army, cutting off its supply lines, and driving it back to Berlin. The Russian counteroffensive was responsible for the death of 80% of the German army, and it was agreed among the Allies that Russia would take Berlin and obtain Germany’s surrender. Russia had won the war, with the British and Americans in supporting roles.

The war was a combined effort of the three Allied powers. At the beginning, Britain acted alone, with the fate of Europe resting on its survival; but it was kept fighting only by American funding of its war effort, with $5.8 billion of goods lent to Britain. The threat of defeat and a unified Nazi Europe was only quashed when Hitler turned his attention to Russia, and that is where the war was decided. The majority of the war’s casualties took place on the Eastern Front: the Russians lost more people in Stalingrad than the Americans and British lost in the whole war, and the Red Army killed the most German soldiers and stormed Berlin. Without the manpower of the Russians, the war could not have been won.

Fate vs. free will in Frankenstein by Mary Shelley

Fate is the development of events beyond a person’s control, regarded as determined by a supernatural power, while free will is the power of acting without the constraint of necessity or fate; the ability to act at one’s own discretion. Throughout the novel Frankenstein by Mary Shelley, the question of fate vs. free will is brought to the reader’s attention. Victor Frankenstein and the Monster make many decisions over the course of the novel, each of which affects other characters, and these decisions cause the reader to consider whether they are products of fate or of free will.

Throughout the novel, Victor Frankenstein often speaks of fate and similar topics. One of the first times we hear Victor speak of fate is in Robert Walton’s fourth letter to his sister, Mrs. Saville: “I thank you…for your sympathy, but it is useless; my fate is nearly fulfilled. I wait but for one event, and then I shall repose in peace. I understand your feeling…but you are mistaken, my friend, if thus you will allow me to name you; nothing can alter my destiny: listen to my history, and you will perceive how irrevocably it is determined.” In this quote, Victor is speaking to Captain Walton and implying a future confrontation with the monster. Some readers think this implies the possibility of Victor killing his own creation; however, towards the end of the novel, Victor dies on board the ship, and moments later the monster is standing over his body. The monster then swears to burn himself, committing suicide. In doing so, the monster suffers the same fate as Victor.

Although Victor and the Monster are different beings and do not share the same blood, they do share similar personalities and paths. Both seek knowledge of how the world works: Frankenstein was interested in the mysteries of the natural world, while the monster wanted to, and did, learn how to speak and read by watching De Lacey, Felix, and Agatha teach Safie. The monster then gains further knowledge from the books he reads, which include Paradise Lost, Plutarch’s Lives, and the journals he took from Victor’s clothes. Both also become more aware of their surroundings and adapt to them as they gain knowledge. An example of this is Victor learning of electricity by watching a lightning storm, which he later uses to bring the monster to life. An example of the monster learning and adapting is when he discovers fire: “One day, when I was oppressed by cold, I found a fire which had been left by some wandering beggars, and was overcome with delight at the warmth I experienced from it. In my joy I thrust my hand into the live embers, but quickly drew it out again with a cry of pain.” This quote is proof of the monster’s quick learning and adaptation. Both are also cast out by society and, although they dislike it, prefer to live away from it. Another similarity between the two is their hatred for each other. Their mutual hatred began when Victor saw the monster as ugly and worthless. Had he been a real father to the monster, he would have cared for him anyway; but because Victor disapproved of and abandoned the monster, the monster grew a special hatred for his creator and father. All of these similarities show that although the two take different paths, those paths mirror each other, and they continually suffer the same fates.

In the time period in which the novel takes place, many people believed strongly in religion and in the idea that God had chosen a path and fate for them. By creating the monster, it is almost as if Victor Frankenstein passes his fate and personality on to the monster. Both continuously lose and kill loved ones throughout the novel. For example, the monster kills Victor’s younger brother William, and thereby indirectly kills Justine by planting the photo on her. Later in the novel, while Victor is working on his second creation, he foresees a future of the monsters reproducing and creating offspring that are also monsters. Because of this “vision,” he decides to destroy his second creation, which was to have been the monster’s companion. It so happens that the monster was watching through the window as he did this, and he swore to be with Victor on his wedding night. As promised, there he was, and he killed Victor’s new wife, Elizabeth Lavenza. Both now suffer the pain of losing their companions.

“I gazed on my victim, and my heart swelled with exultation and hellish triumph: clapping my hands, I exclaimed, ‘I, too, can create desolation; my enemy is not impregnable; this death will carry despair to him, and a thousand other miseries shall torment and destroy him.”

The monster is speaking of how he is not a victim of fate but rather a commander of fate. He is able to create desolation as that is what he feels. Due to the neglect by society and the lack of friendship or companionship, the monster feels as if his life is empty.

The whole novel can be seen as a series of events that were meant to happen because of fate. After Victor’s many mentions of fate, it is hard to see the events and decisions as anything but those of fate and destiny. Given the time in which the novel was written and the religious attitudes of that era, it is easy to see everything as destiny and fate. Many people believed in the doctrine of predestination, which holds that God, in consequence of his foreknowledge of all events, infallibly guides those who are destined for salvation.

Lives in Germany – early-mid 1930s

In what ways do these primary sources contribute to your understanding of how economic conditions and the rise of the Nazis shaped people’s lives in Germany in the early-mid 1930s?

When Adolf Hitler was appointed German Chancellor in January 1933, the economy was in turmoil. The Third Reich at this time underwent significant economic development after suffering, like many other European countries, in the wake of the Great Depression. By the outbreak of World War Two, the unemployment rate in Germany had tumbled: trade unions had been tamed, the workforce had seemingly developed a positive work ethic, and job prospects had improved. These primary sources contribute greatly to any understanding of economic conditions in Germany and of how the rise of the Nazis altered people’s lives at this time.

The first source is a photograph called ‘Unemployed Men Standing in Front of the Berlin Employment Office’ and was produced in June 1933, six months after Hitler became German Chancellor. It is by Hans Schaller, a popular German photographer. It was produced in order to convey the discontent and frustration experienced by unemployed people.

This source states, ‘In 1932, when the crisis reached its peak, about 6 million people were registered as unemployed in Germany’, conveying that during the Nazis’ rise, before Hitler came to power, there was a significant unemployment epidemic. The true number would have been higher, since many women were also unemployed; however, because their traditional role in society was to be homemakers, they were not included in this statistic.

This can be corroborated by the Sopade Report by Otto Wels, chairman of the Social Democratic Party of Germany from 1919 and a member of parliament from 1920 to 1933, who would therefore have been well informed about the inner mechanics of the economy. The source articulates that ‘Hitler understood that a general economic upswing – and the drop in unemployment that would follow – was the best means for securing the loyalty of the German people’, highlighting Hitler’s understanding of the pressing issue of unemployment after becoming Chancellor and his willingness to tackle it for the German people.

‘Work and Bread’ was the name of a speech made by Gregor Strasser, a prominent German Nazi official and politician. It was produced a few months prior to the July 1932 election and therefore aimed to persuade the German electorate towards the mindset of the Nazi Party. The general message of this source was that the government of the day was not succeeding, and that national socialism was the most suitable route towards political stability. He asserts, ‘Article 163 will have to one day be altered to the effect that every German must have the right to work and people will have to be aware of the full significance of this alteration.’ Article 163 of the Weimar Constitution stated that ‘Every German should be given the possibility of earning his living through work’, thereby emphasising to the German people that stable employment is key to success and happiness.

This can be furthered by the explanation of the photograph by Hans Schaller which articulates that ‘The persistent worldwide depression and the mass unemployment associated with it were among the main catalysts for the general radicalisation of the political climate in Germany’. The impact of unemployment levels nationwide resulted in the public wishing for a new and distinct political sphere, which arguably led to the rise of the political extremism of the Nazi Party, thus, significantly shaping the lives of the German people at this time.

Furthermore, employment conditions for workers in Germany were arguably poor. An interview with Sally Tuchklaper, a Polish woman who worked in German factories throughout the war, provides a first-hand oral account of employment in Germany. It was conducted by Anita Schwartz for fellow survivors and the academic circle, but gradually reached a wider audience.

She said the working conditions, ‘weren’t bad but we were still under pressure. We couldn’t do nothing; we had to go on their rules which – them and we came in the morning at nine o’clock and we worked the whole day’, which affirms the nature of the heavy workload that young girls had to face at this time.

Oral history can be defined as the recording, preservation and interpretation of historical knowledge, based on the personal encounters and opinions of the speaker. This is a very subjective and personal form of evidence and can give a voice to groups who are sometimes marginalized in ‘conventional’ stereotypes, such as the working classes and women. It can provide new information, alternative explanations and varied insights which are highly valuable. The spoken word can convey emotions with immediacy and an impact that the written documents cannot match and allows the historian to ask questions of his or her informant – to be present at the creation of a historical source, rather than relying on those created by others.

On the other hand, oral history can be classed as inaccurate in other areas. It can be contended that someone’s memory may be selective or distorted over time, and so, the quality of these sources may be questioned. Additionally, the interviewer’s questions may intentionally or unintentionally influence the informant’s response.

Recognizing Neighborhood Satisfaction: Significant Dimensions and Assessment Factors

Abstract

This study examines the relation between attributes of the neighborhood and satisfaction with them in order to evaluate overall neighborhood satisfaction. The concept of neighborhood has been severely blurred, if not lost, as a result of the development practices of the last several decades, so research must first settle on how to define a neighborhood. The essay then turns to the concept of satisfaction and its meaning at the neighborhood scale. Since neighborhood satisfaction refers to residents’ overall evaluation of their neighborhood, and dimensions of satisfaction comprise different aspects, characteristics, and features of the residential environment, several factors that influence neighborhood satisfaction are introduced in various categories as the result of the essay.

Keywords: neighborhood, satisfaction, neighborhood satisfaction, dimensions of neighborhood satisfaction

Introduction

Neighborhoods are the localities in which people live and are an appropriate scale of analyzing local ways of living. They can have an enormous influence on our health, wellbeing, and quality of life (Hancock 1997; Barton 2000; Srinivasan, O’Fallon, and Dearry 2003; Barton, Grant and Guise 2003).

The urban neighborhoods were once thriving communities with a variety of residents. Although racial segregation was prevalent in the majority of neighborhoods, many communities offered economic diversity (Bright, 2000). In the industrial era, they can be characterized as early establishments of quaint villages or, in some instances, attractive old suburbs of the cities. As cities grew and annexed these communities, they continued to thrive as a homogeneous part of the city, resulting in a habitat of diverse choices and opportunities. However, as the economy changed, they experienced decline and reduced attention. The phenomenon called suburbanization, and later ”edge cities”, made center cities less attractive, at least for living in urban neighborhoods. Just as there were policies that created this situation, there were also efforts to sustain interest in neighborhoods. Despite these revitalization efforts, however, the neighborhoods continue to be in distress, and their continued decline points to deficiencies in the approaches and programs (Vyankatesh, 2004: 22-23).


A good neighborhood is described as a healthy, quiet, widely accessible and safe community for its residents. Neighborhood satisfaction refers to residents’ overall evaluation of their neighborhood, and researchers from many disciplines have examined it. A neighborhood is thus more than just a physical unit. One chooses to live in a housing unit after careful consideration of the many factors which comprise the surrounding environment. The desirability of a neighborhood is decided by factors such as location relative to jobs, shopping and recreation; accessibility; availability of transportation; and ”quality of life”.

We aim to discover the factors that influence residents’ satisfaction with their neighborhoods. The basic question is as follows:

What neighborhood elements influence satisfaction and how do they do so in general?

Literature Review

Literature on Neighborhoods

Neighborhood Settings

Ebenezer Howard (1898) based his design of the Garden City on neighborhood units: relatively self-sufficient units that merged together. While Howard’s idea focused on the suburbs, Clarence Perry (1929) attempted it in the city. His neighborhood unit was a self-contained residential area bounded by major streets, with shopping districts on the periphery and a community center and elementary school located at the center of the unit. In 1966, Clarence Stein altered Perry’s ideal concept in the design of Radburn. It had an elementary school at the center, and park spaces flowed through the neighborhood, but it was larger than Perry’s concept and introduced residential street design with cul-de-sacs to eliminate through traffic.

After World War II, massive suburbs developed and the concept of the neighborhood as a basic unit of land development changed. Since 2000, New Urbanists have called for traditional neighborhood development (TND) and transit-oriented development (TOD) models. They propose a neighborhood unit with a center and a balanced mix of activities, and they give priority to the creation of public space.

Defining Neighborhood

The literature defines neighborhood in many ways. While there is little broad agreement on the concept of neighborhood, “few geographers would contradict the idea that neighborhood is a function of the inter-relationships between people and the physical and social environments” (Knox & Pinch, 2000, p. 8). Brower (1996) explains that its form is derived from a particular pattern of activities, the presence of a common visual motif, an area with continuous boundaries, or a network of often-traveled streets. Soja (1980, p. 211) coined the term sociospatial dialectic for this phenomenon, in which “people create and modify urban spaces while at the same time being conditioned in various ways by the spaces in which they live and work.” It seems that research uses multiple definitions of a neighborhood simultaneously to reflect the fact that neighborhood is not a static concept but rather a dynamic one (Talen & Shah, 2007).

Park states that “Proximity and neighborly contact are the basis for the simplest and most elementary form of association which we have in the organization of city life. Local interests and associations breed local sentiment, and, under a system which makes residence the basis for participation in the government, the neighborhood becomes the basis of political control … it is the smallest local unit … The neighborhood exists without formal organization” (Park, 1925, p. 7).

Keller emphasizes boundaries, social character, unity or belonging, and local facility use. He states: “The term neighbourhood … refers to distinctive areas into which larger spatial units may be subdivided such as gold coast and slums … middle class and working class areas. The distinctiveness of these areas stems from different sources whose independent contributions are difficult to assess: geographical boundaries, ethnic or cultural characteristics of the inhabitants, psychological unity among people who feel that they belong together, or concentrated use of an area’s facilities for shopping, leisure and learning… Neighborhoods containing all four elements are very rare in modern cities … geographical and personal boundaries do not always coincide” (Keller, 1968, p. 87).

While Wilkenson’s definition of neighborhood is based on its Place-orientated process, partial social relations, shared interest characteristics as he states “Community is not a place, but it is a place-orientated process. It is not the sum of social relationships in a population but it contributes to the wholeness of local social life. A community is a process of interrelated actions through which residents express their shared interest in the local society” (Wilkenson, 1989, p. 339), Kitagawa and Taeubeur emphasize on area history, name, local awareness, local organizations, and local business issues of the neighborhoods. They argue that “When community area boundaries were delimited… the objective was to define a set of sub-areas of the city each of which could be regarded as having a history of its own as a community, a name, an awareness on the part of its inhabitants of community interests, and a set of local businesses and organizations orientated to the local community” (Kitagawa and Taeubeur, 1963, p. xiii).

Glass believes that physical and social characteristics both take shape in a territorial group, which is how he defines neighborhood: “A neighbourhood is a distinct territorial group, distinct by virtue of the specific physical characteristics of the area and the specific social characteristics of the inhabitants” (Glass, 1948, p. 18).

Research commissions, and not only individual authors, have offered their own definitions of neighborhood. The US National Research Commission on Neighborhoods and the US National Research Council define it as follows:

 “A community consists of a population carrying on a collective life through a set of institutional arrangements. Common interests and norms of conduct are implied in this definition” (US National Research Commission on Neighborhoods, 1975, p. 2).

 “In last analysis each neighborhood is what the inhabitants think it is. The only genuinely accurate delimitation of neighborhood is done by people who live there, work there, retire there, and take pride in themselves as well as their community” (US National Research Council, 1975, p. 2).

Forrest and Kearns (2004, p. 2126) argue the concept of neighborhood in an increasingly globalizing society and state impact of the information/technological age on neighborhood: “new virtuality in social networks and a greater fluidity and superficiality in social contact are further eroding the residual bonds of spatial proximity and kinship.”

Different definitions serve different interests, so that the neighborhood may be seen as a source of place-identity, an element of urban form, or a unit of decision making. This codependence between the spatial and social aspects of neighborhood is arguably one of the main reasons why the concept is so difficult to define.

Categorizing Neighborhood

Blowers conceptualizes the neighborhood not as a static spatial entity but as existing along a continuum, yielding five neighborhood types (Figure 1). Proceeding left to right along the continuum, additional characteristics or dimensions are cumulatively added, yielding more complex neighborhoods:

Figure 1 – The Neighborhood Continuum (Blowers 1973)

1. Arbitrary neighborhood: Blowers describes these neighborhoods as having “no integrating feature other than the space they occupy.” These districts have few homogeneous qualities and exhibit low social interaction (Blowers, 1973: p.55).

2. Physical neighborhood: Unlike the arbitrary neighborhood’s ill-defined boundaries, the boundaries of physical neighborhoods are delineated by natural or built barriers such as major roads, railways, waterways or large tracts of non-residential land use (e.g. industrial parks, airports, etc.). The inhabitants residing within the boundaries of a physical neighborhood may share few characteristics in common; Blowers cautions that occupying the same physical area does not automatically imply a high degree of social interaction (Butler, 2008: 8).

3. Homogeneous neighborhood: The most familiar type in Blowers’ typology, the homogeneous neighborhood has distinct spatial boundaries, and its residents share common demographic, social or class characteristics.

4. Functional neighborhood: Blowers describes these as “functional areas … within which activities such as shopping, education, worship, leisure, and recreation take place.” Like any functional region in geography, they are organized around a central node, with the surrounding area linked to it through activities, service interchanges and associations (Blowers, 1973, p. 59).

5. Community neighborhood: Blowers sees the community neighborhood as a “close-knit, socially homogeneous, territorially defined group engaging in primary contacts” (Blowers, 1972, p. 60). Chaskin defines neighborhood as “clearly a spatial construction denoting a geographical unit in which residents share proximity and the circumstances that come with it… communities are units in which some set of connections is concentrated, either social connections (as in kin, friend or acquaintance networks), functional connections (as in the production, consumption, and transfer of goods and services), cultural connections (as in religion, tradition, or ethnic identity), or circumstantial connections (as in economic status or lifestyle)” (Chaskin, 1997, p. 522). Blowers (1972, p. 61) contends that the community neighborhood can be seen as a culmination of the preceding neighborhood types on the continuum, stating that “the distinctiveness of the geographical environment, the socio-economic homogeneity of the population, and the functional interaction that takes place will contribute to the cohesiveness of the community neighborhood.”

Some researchers demonstrate other classifications of neighborhoods. For instance, Ladd (1970), Lansing and Marans (1969), Lansing et al. (1970), Marans (1976) and Zehner (1971) introduce micro- and macro-neighborhoods based on walkability. They agree that a neighborhood should comprise a walkable distance; however, the actual walkable distance considered has varied from a quarter-mile to one mile from center to edge (Calthorpe, 1993; Choi et al., 1994; Colabianchi et al., 2007; Congress for the New Urbanism, 2000; Hoehner et al., 2005; Hur & Chin, 1996; Jago, Baranowski, Zakeri, & Harris, 2005; Lund, 2003; Perry, 1939; Pikora et al., 2002; Stein, 1966; Talen & Shah, 2007; Western Australian Planning Commission, 2000). The micro-neighborhood is an area that a resident can see from his or her front door, that is, the five or six homes nearest to the house. Similarly, Appleyard (1981) used the term home territory. He looked at residents’ conceptions of personal territory in three streets with different traffic hazards. The results showed that residents drew their territorial boundaries at a maximum of a street block (between intersections, with approximately 6-10 buildings on each side), and at a minimum their own apartment building. Research has shown that the micro-neighborhood deals more with social relationships among neighbors than with the physical environment.

In a slight adaptation of Suttles’ (1972) schema, we might say that the neighbourhood exists at three different scales (Table 1):

Table 1. Scales of Neighborhood

Scale                    | Predominant function                                      | Mechanism(s)
Home area                | Psycho-social benefits (for example, identity; belonging) | Familiarity; community
Locality                 | Residential activities; social status and position        | Planning; service provision; housing market
Urban district or region | Landscape of social and economic opportunities            | Employment connections; leisure interests; social networks

The smallest unit of neighbourhood, here referred to as the ‘home area’, is typically defined as an area of 5–10 minutes’ walk from one’s home. Here, we would expect the psycho-social purposes of neighbourhood to be strongest. As shown elsewhere (Kearns et al., 2000), the neighbourhood, in terms of the quality of environment and perceptions of co-residents, is an important element in the derivation of psycho-social benefits from the home. In terms of Brower’s (1996) outline of the ‘good neighbourhood’, the home area can serve several functions, most notably those of relaxation and re-creation of self; making connections with others; fostering attachment and belonging; and demonstrating or reflecting one’s own values.

Some neighbourhoods and localities (in addition to individuals and groups) can be seen to be subject to discrimination and social exclusion as places and communities (Madanipour et al., 1998; Turok et al., 1999).

Once the urban region (the third level of neighbourhood in Table 1) is viewed as a landscape of social and economic opportunities with which some people are better engaged than others (for example, by reasons of employment, leisure activities or family connections), then the individual’ s expectations of the home area can be better understood (Kearns & Parkinson, 2001: 2104-2105).

Not only have researchers described several categories of neighborhood; different stratifications of neighborhood consumers have also been developed. Four distinct types of user potentially reap benefits from the consumption of neighbourhood: households, businesses, property owners and local government. Households consume neighbourhood through the act of occupying a residential unit and using the surrounding private and public spaces, thereby gaining some degree of satisfaction or quality of residential life. Businesses consume neighbourhood through the act of occupying a non-residential structure (store, office, factory), thereby gaining a certain flow of net revenues or profits associated with that venue. Property owners consume neighbourhood by extracting rents and/or capital gains from the land and buildings owned in that location. Local governments consume neighbourhood by extracting tax revenues, typically from owners, based on the assessed values of residential and non-residential properties (Galster, 2001: 2113).

Literature on satisfaction

Mesch and Manor (1998) define satisfaction as the evaluation of features of the physical and social environment.

Canter and Rees have argued that people interact with the environment at different levels— from the bedroom to the neighborhood and to the entire city. In their model of housing satisfaction, Canter and Rees (1982) referred to these levels of environment as levels of environmental interaction and defined them as scales of the environment that have a hierarchical order. They specified different levels at which people may experience satisfaction such as the house and the neighborhood. They also argued that the experience of satisfaction is similar and yet distinct at different levels of the environment. Similarly, Oseland (1990) and Gifford (1997, p. 200) stressed that other responses such as the experience of space and privacy also vary in different rooms in a home. Oseland’s study supported the hypothesis that users’ conceptualization of space depends on the location of the space. Some models of residential satisfaction (Weidemann & Anderson, 1985; and Francescato, Wiedemann, & Anderson, 1989) have also suggested that it is important to consider different levels of environment in the study of satisfaction.

Some studies, however, have examined how residential satisfaction varies at different levels of the environment (Paris & Kangari, 2005; Mccrea, Stimson, & Western, 2005). Most of these studies have examined residential satisfaction at two or three levels, namely the housing unit and the neighborhood level. For example, Mccrea et al., (2005) examined residential satisfaction at three levels; the housing unit, the neighborhood, and the wider metropolitan region. Although the manner in which levels of environment have been defined in these studies has depended on the context of the research and on the interest of the researcher, the most common levels of environment have been the housing unit and the neighborhood (Amole, 2009:867).

Discussion

Neighborhood satisfaction

What is a good neighborhood? A common answer describes it as a healthy, quiet, widely accessible and safe community for its residents, wherever they may live, in the suburbs or in the city. However, Brower believes a good neighborhood is not an ideal neighborhood but a place with minimal problems and defects (Brower, 1996). Practically, a neighborhood is defined by the psychology of its four types of consumers, which include households, businesses, property owners and local government, as described above. The boundaries drawn are often based on these and other factors such as history, politics, geography and economics.

Whether there is relative homogeneity in socioeconomic character, historic conditions such as annexations, or political boundaries of wards and councils, or whether the place is divided by natural geographic features or by rails, streets, etc., all count in deciding the ‘goodness’ of the neighborhood (Vyankatesh, 2004:20).

Neighborhood satisfaction refers to residents’ overall evaluation of their neighborhood. Researchers from many disciplines have examined neighborhood satisfaction (Amerigo, 2002; Amerigo & Aragones, 1997; Carvalho et al., 1997; Francescato, 2002; Hur & Morrow-Jones, 2008; Lipsetz, 2001; Marans, 1976; Marans & Rodgers, 1975; Mesch & Manor, 1998; Weidemann & Anderson, 1985). They have used a variety of terms such as, residential satisfaction, community satisfaction, or satisfaction with residential communities for it (Amerigo & Aragones, 1997; Cook, 1988; Lee, 2002; Lee et al., 2008; Marans & Rodgers, 1975; Miller et al., 1980; Zehner, 1971). (Hur, 2008a: 8)

High neighborhood satisfaction increases households’ sense of community and vice versa (Brower, 2003; Mesch & Manor, 1998). Studies often find that residential and neighborhood satisfaction also influence people’s intentions to move (Brower, 2003; Droettboom, McAllister, Kaiser, & Butler, 1971; Kasl & Harburg, 1972; Lee, Oropesa, & Kanan, 1994; Nathanson, Newman, Moen, & Hiltabiddle, 1976; Newman & Duncan, 1979; Quigley & Weinberg, 1977). High satisfaction among residents encourages them to stay and induces others to move in, while low satisfaction with the neighborhood environment urges current residents to move out. Marans and Rodgers (1975) and Marans and Spreckelmeyer (1981) find that the relationship between neighborhood satisfaction, decisions to move, and quality of life is a sequential process, with neighborhood satisfaction predicting mobility and mobility affecting quality of life (Hur, 2008b: 620).

Francescato et al. (1989) noted that “the construct of residential satisfaction can be conceived as a complex, multidimensional, global appraisal combining cognitive, affective, and conative facets, thus fulfilling the criteria for defining it as an attitude” (p. 189).

Dimensions of Neighborhood Satisfaction

Dimensions of satisfaction are similar at the different levels of the environment. The term “dimensions of satisfaction” refers to the aspects, characteristics, and features of the residential environment (such as design aspects, social characteristics, facilities provided, or management issues) to which users respond in relation to satisfaction (Francescato, 2002). This matters because it informs researchers about the important dimensions and relevant research questions at different levels of the environment.

A neighborhood is thus more than just a physical unit. One chooses to live in a housing unit after careful consideration of the many factors that comprise the surrounding environment. Desirability of a neighborhood is decided by factors such as location relative to jobs, shopping, and recreation; accessibility and availability of transportation; and “quality of life,” however ambiguous that term may be, depicted in countless expressions of public and private services: sewer, water, police, schools, neighbors, entertainment facilities, etc. (Ahlbrandt & Brophy, 1975). Availability of housing of a desirable type is yet another factor influencing the choice of neighborhood, and the desired lot sizes and architectural styles also play their role. These livability features hold a key to the future viability of a neighborhood (Vyankatesh, 2004: 22).

Residents in neighborhoods where most homeowners are satisfied focus on different aspects of their neighborhoods than residents in neighborhoods where most are dissatisfied; we therefore hypothesize that the two neighborhood groups differ in terms of the features that affect neighborhood satisfaction.

The findings of neighborhood satisfaction research are sometimes contradictory because of the compound nature of “satisfaction.”

Since neighborhood characteristics vary, there are spatial differences in satisfaction across areas. Length of residence, amount of social interaction, satisfaction with traffic, and satisfaction with appearance or aesthetics are also important variables. The complex characteristics of neighborhood satisfaction identified in our research are as follows:

Where Residents Live

Research has found different circumstances affecting neighborhood satisfaction depending on where residents live (Cook, 1988; Hur & Morrow-Jones, 2008; Zehner, 1971). For example, Zehner (1971) examined residents’ neighborhood satisfaction in new towns and in less planned areas. New-town residents were more likely to mention attributes of the larger area and its physical factors, while residents of less planned towns focused on micro-residential features, with emphasis on the social characteristics of the neighborhood (Hur, 2008a: 17).

Socio-Demographic Characteristics

A number of studies indicate the importance of sociodemographic characteristics for neighborhood satisfaction. They have found positive influences of longer tenure in the neighborhood (Bardo, 1984; Galster, 1987; Lipsetz, 2001; Potter & Cantarero, 2006; Speare, 1974) and of homeownership (Lipsetz, 2001). Young, educated, and wealthy urban residents were found to be more satisfied than others (Miller et al., 1980). St. John (1984a, 1984b, 1987) found no evidence of racial differences in neighborhood evaluation, but Morrow-Jones, Wenning, and Li (2005) found that satisfaction with a community’s racial homogeneity is another predictor of residential satisfaction.

Social Factors in Neighborhood

Social and psychological ties to a place, such as having friends or family living nearby (Brower, 2003; Lipsetz, 2001; Speare, 1974), are an important social factor in neighborhood satisfaction. Brower (2003) finds that having friends and relatives nearby increases neighborhood satisfaction; Lipsetz (2000), on the other hand, finds that it has a largely negative effect on urbanites’ satisfaction but no effect on that of suburbanites.

The findings agree that residents are satisfied when they consider their neighbors friendly, trusting, and supportive. Reported satisfaction was higher when residents talked to their neighbors often and supported each other formally and informally, especially among residents who had lived in the neighborhood longer (Potter & Cantarero, 2006).

In addition to these positive social interaction factors, the factors that decrease neighborhood satisfaction include the crime rate and social incivilities such as harassing neighbors, teenagers hanging out, noise, fighting, and arguing.

Physical Factors in Neighborhood

I. Physical environmental characteristics

Planners can directly shape a neighborhood’s physical features, and policy can target those features effectively. Yet although planners stress the importance of physical characteristics, residents consider social factors more important in judging a neighborhood (Lansing & Marans, 1969).

Research often finds physical characteristics a stronger influence on neighborhood satisfaction than social or economic characteristics (Sirgy & Cornwell, 2002). Neotraditional and New Urbanist approaches focus on physical features as a medium to decrease dependence on the automobile, foster pedestrian activity, and provide opportunities for interaction among residents (Marans & Rodgers, 1975; Rapoport, 1987).

Research has considered several physical environmental features. Some relate directly to neighborhood satisfaction, and others connect to factors that may link to it. Hur (2008a) has categorized physical environmental characteristics into three types:

1. Physical disorder (incivilities):

Physical disorder promotes fear of crime, makes people want to leave the area, and diminishes residents’ overall neighborhood satisfaction. Physical incivilities can be grouped into three kinds:

• Fixed feature elements (such as a vacant house or dilapidated building): fixed-feature elements “change rarely and slowly” (p. 88). Individual housing and the building lot are fixed-feature elements of the neighborhood.

• Semi-fixed feature elements (such as graffiti and broken features on public property): semi-fixed feature elements “can, and do, change fairly quickly and easily” (p. 89), and, Rapoport says, “become particularly important in environmental meaning…where they tend to communicate more than fixed-feature elements” (p. 89).

• Non-fixed (movable) elements (such as litter and abandoned cars): Rapoport (1982) also suggested non-fixed feature elements, which include people and their nonverbal behaviors (p. 96).

2. Defensible space features:

“Defensible Space” is a program that “restructures the physical layout of communities to allow residents to control the areas around their homes” (U.S. Department of Housing and Urban Development, 1996, p. 9). It supports actions that foster territoriality, natural surveillance, a safe image, and a protected milieu:

• Fostering territoriality: Territoriality involves territorial symbols such as yard barriers (G. Brown et al., 2004; Perkins et al., 1993), block watch signs, security alarm stickers, and evidence of dogs (Perkins et al., 1993). Although such symbols may reduce crime and fear of crime, research has not examined their connection to residents’ neighborhood satisfaction. Litter and graffiti, which are also incivilities, affect image and milieu.

• Natural surveillance: Natural surveillance involves windows facing the street and places to sit outside (front porches). These provide eyes on the street (B. Brown et al., 1998; MacDonald & Gifford, 1989; Perkins et al., 1992, 1993), give residents opportunities for informal contact with their neighbors that helps form local ties (Bothwell, 1998; B. Brown et al., 1998; Plas & Lewis, 1996), and send non-verbal messages of monitoring (Easterling, 1991; Taylor & Brower, 1985). Research has reported that streets less visible from neighboring houses had more crime (G. Brown et al., 2004; Perkins et al., 1993), indicating the importance of surveillance in a neighborhood. Despite its significance, Bothwell et al. (1998) was the only study to examine natural surveillance as an influence on neighborhood satisfaction; it showed how, via front porches, public housing residents in Diggs Town became known to each other, restored a sense of belonging, and built strong neighborhood satisfaction.

• A safe image: The safe image conveys an impression of a safe and invulnerable neighborhood. If the image is negative, “the project will be stigmatized and its residents castigated and victimized” (Newman, 1972, p. 102).

• A protected milieu: A safe milieu is a neighborhood situated in the middle of a wider crime-free area, which is thus insulated from the outside world by a moat of safety (Burke, 2005, p.202).

3. Built or natural characteristics:

The third type of physical environmental feature is the degree to which a place looks built or natural. Studies have measured residential density, land use, and vegetation. Lansing et al. (1970) was the only study to examine density-related characteristics (e.g., frequency of hearing neighbors and privacy in the yard from neighbors) in relation to neighborhood satisfaction, but those elements were more social than physical and thus may capture physical density only indirectly. Lee et al. (2008) found that residents’ neighborhood satisfaction was associated with natural landscape structure: tree patches in the neighborhood environment that were less fragmented, less isolated, and well connected positively influenced neighborhood satisfaction. Some research has examined associations between multiple attributes. Ellis et al. (2006) looked at relationships between land use, vegetation, and neighborhood satisfaction: while the amount of nearby retail land use correlated negatively with neighborhood satisfaction, they found that the amount of trees moderated this negative effect (Hur, 2008a: 19-22).

II. Perceived and evaluative physical environmental characteristics

One set of studies identifies physical appearance as the most important factor for increasing neighborhood satisfaction and quality of life (Kaplan, 1985; Langdon, 1988, 1997; Sirgy & Cornwell, 2002). Nasar’s (1988) survey of residents and visitors found that their visual preferences related to five likable features: naturalness, upkeep/civilities, openness, historic significance, and order. People liked the visual quality of areas with those attributes and disliked the visual quality of areas without them. Newly arrived residents cite physical appearance as the most important factor for residential satisfaction, but long-time residents cite stress factors (e.g., tension with neighbors, the income level of the neighborhood, inability to communicate with others, racial discrimination, crime, etc.) as most important (Potter & Cantarero, 2006).

Emotional and Temporal Dimensions of the Environmental Experience

These are recognized as a component of the people–environment relationship and therefore of residential satisfaction. Residential satisfaction is indeed strongly associated with one’s attachment to the living space.

Conclusion

Several studies have constructed comprehensive models of residential satisfaction. The complex attributes of neighborhoods can be categorized into seven types, each with several characteristics. These are the main features that should be studied, measured, and rated to estimate residents’ satisfaction with their neighborhoods.

We must note that each group of neighborhood satisfaction dimensions has to be considered separately by each of the four types of neighborhood consumers mentioned earlier. The total rank will demonstrate the neighborhood satisfaction status.

As the result of this essay, we introduce a classification of the satisfaction dimensions. This provides a comprehensive basis for evaluating almost all of the features that influence residential satisfaction at the neighborhood scale.

The seven types of neighborhood attributes and satisfaction dimensions are presented in Table 2 below:

Table 2. Complex Attributes of Neighborhoods

(Satisfaction dimensions, with their assessment factors and sub-factors.)

1. Spatial characteristics
• Proximity characteristics: access to major destinations of employment (both distance and transport infrastructure)
• Local facility use: local interests, open spaces, access to recreational opportunities, entertainment, shopping, etc.
• Mass and void
• Neighborhood boundaries
• Unity
• Pedestrian access to stores
• Place-oriented design process
• Legibility

2. Physical characteristics
• Structural characteristics of residential and non-residential buildings: type, scale, materials, design, state of repair, density, landscaping, etc.
• Infrastructural characteristics: roads, sidewalks, streetscaping, utility services, etc.
• Traffic
• Aesthetics/appearance: naturalness, upkeep/civilities, openness, historic significance, order, color
• Density of housing
• Building type: apartment, villa, etc.
• Physical disorder (incivilities): fixed feature elements (such as a vacant house or dilapidated building), semi-fixed feature elements (such as graffiti and broken features on public property), non-fixed (movable) elements
• Defensible space features: fostering territoriality (such as block watch signs, security alarm stickers, and evidence of dogs), natural surveillance (such as windows facing the streets and places to sit outside)
• Built or natural characteristics: residential density, land use, vegetation

3. Environmental characteristics
• Degree of land, topographical features, views, etc.
• Pollution: air, water, noise
• Cleanliness
• Climatic design: architecture, wind tunnels, sunny/too hot

4. Sentimental characteristics
• Place identification: historical significance of buildings or district, etc.
• Length of residence
• Proximity to problem areas
• Name/area pride
• Local awareness
• Living space: new towns, less planned areas
• Cognition
• Place identity: sense of place, sense of belonging to place

5. Social characteristics
• Local friend and kin networks
• Degree of interhousehold familiarity
• Type and quality of interpersonal associations
• Residents’ perceived commonality: participation in locally based voluntary associations
• Strength of socialization and social control forces
• Social support
• Racial homogeneity
• Neighborhood cohesion
• Collective life: interaction with communities, interaction through favors, interaction through social activity, amount of social interaction
• Territorial group: common interests
• Participation: informal social participation and participation in formal neighborhood organizations
• Common conduct
• Family and friends nearby
• Friendly, trusting, and supportive neighbors
• Crime rate
• Teenagers hanging out
• Noise
• Fighting and arguing

6. Demographic-economic characteristics
• Age distribution and family composition
• Ethnic and religious types
• Tenure period / home ownership; ratio of owners to renters
• Wealth and income
• Gender and marital status
• Cultural characteristics
• Age: young, old, children under 18, etc.
• Education: educated, uneducated, education composition
• Occupation: local business workers, retired, etc.

7. Management-political characteristics
• The quality of safety forces
• Public schools
• Public administration
• Parks and recreation, etc.
• Residents’ influence in local affairs through spatially rooted channels or elected representatives
• Local government services
• Local associations
• Political control
• Local organizations

References

Amole, Dolapo, 2009, Residential Satisfaction and Levels of Environment in Students’ Residences, Environment and Behavior, Volume 41, No. 6, P 867.
Barton, Hugh, 2000, Sustainable Communities: The Potential for Eco; neighborhoods- Earthscan Publications Ltd.
Blowers, A. (1973). The neighbourhood: exploration of a concept, Open Univ. Urban Dev. Unit 7, Pp 49-90.
Bright, Elise M., 2000, Reviving America’s Forgotten Neighborhoods: An Investigation of Inner City Revitalization Efforts. New York: Garland Publishing, Inc.
Brower, Sidney. (1996). Good neighborhoods: Study of in-town and suburban residential environments. Westport, CT: Praeger Publishers.
Butler, Kevin A., 2008, A Covariance Structural Analysis of a Conceptual Neighborhood Model, A dissertation for the degree of Doctor of Philosophy submitted to Kent State University, P 8.
Canter, D, & Rees, K.A. (1982). Multivariate model of housing satisfaction. International Review of Applied Psychology, 32, Pp 185-208.
Chaskin, Robert J., 1997, Perspectives on Neighborhood and Community: A Review of the Literature, The Social Service
Review, Vol. 71, No. 4, pp. 521-547. p. 522.
Churchman, A. (1999, May). Disentangling the concept of density. Journal of Planning Literature, 13(4), Pp 389-411.
Ellis, C. D., Lee, S. W., & Kweon, B. S. (2006). Retail land use, neighborhood satisfaction and the urban forest: An investigation into the moderating and mediating effects of trees and shrubs. Landscape and Urban Planning, 74, Pp 70-78.
Fleury-Bahi, Ghozlane & Félonneau, Line, 2008, Processes of Place Identification and Residential Satisfaction, Environment and Behavior, Volume 40, No.5, pp 669-682.
Forrest, R., & Kearns, A. (2004). Who Cares About Neighbourhood? Paper presented at the Community, Neighbourhood, Responsibility. From http://www.neighbourhoodcentre.org.uk.
Forrest, Ray & Kearns, Ade, 2001, Social Cohesion, Social Capital and the Neighbourhood, Urban Studies, Vol. 38, No. 12, pp.2125–2143.
Galster, George, 2001, On the Nature of Neighbourhood, Urban Studies, Vol. 38, No. 12, Pp 2113.
Gifford, R. (1997). Environmental psychology: Principles and practices. Boston: Allyn and Bacon, p. 200
Hur, Misun & Morrow-Jones, Hazel, 2008, Factors That Influence Residents’ Satisfaction with Neighborhoods, Environment and Behavior, Volume 40, No. 5, Pp 620.
Hur, Misun, 2008a, Neighborhood satisfaction, physical and perceived characteristics, A dissertation for the degree of Doctor of Philosophy submitted to Ohio State University, Pp 8, 17, 19-22.
Johnson, Philip, 2008, Comparative Analysis of Open-Air and Traditional Neighborhood Commercial Centers, A dissertation for the degree of Master of Community Planning submitted to the University of Cincinnati.
Kaplan, R. (1985). Nature at the doorstep: Residential satisfaction and the nearby environment. Journal of Architectural and Planning Research, 2, Pp 115-127.
Kearns, Ade & Parkinson, Michael, 2001, The Significance of Neighbourhood, Urban Studies, Vol. 38, No. 12, pp. 2103–2110.
Keller, Suzanne, 1968, The Urban Neighborhood: A Sociological Perspective, Random House, p. 87
Ladd, F. C. (1970). Black youths view their environment: Neighborhood maps. Environment and Behavior, 2, Pp 74-99.
Lansing, J. B., Marans, R. W., & Zehner, R. B. (1970). Planned residential environments. Ann Arbor, Michigan: Institute for Social Research, The University of Michigan.
Lee, B. A., Oropesa, R. S., & Kanan, J. W. (1994). Neighborhood context and residential mobility. Demography, 31, Pp 249-270.
Mesch, G. S., & Manor, O. (1998). Social ties, environmental perception, and local attachment. Environment and Behavior, 30, Pp 504-519.
Morrow-Jones, H.,Wenning, M. V., & Li,Y. (2005). Differences in neighborhood satisfaction between African American and White homeowners. Paper presented at the Association of Collegiate Schools of Planning (ACSP46), Kansas City, MO.
Nasar, J. L. (1988). Perception and evaluation of residential street scenes. In J. L. Nasar (Ed.), Environmental aesthetics: Theory, research, and applications (pp. 275- 289). New York: Cambridge University Press.
Newman, O. (1972). Defensible space; crime prevention through urban design. New York: The MacMillian Company.
Oseland, N. A. (1990). An evaluation of space in new homes. Proceedings of the IAPS Conference Ankara, Turkey, Pp 322-331.
Park, Robert E. & Burgess, Ernest W., (1967) or (1984) or (1992), The City; Suggestions for Investigation of Human Behavior in the Urban Environment,
Potter, J., & Cantarero, R. (2006). How does increasing population and diversity affect resident satisfaction? A small community case study. Environment and Behavior, 38, Pp 605-625.
Rapoport, A. (1982). The meaning of the built environment: a nonverbal communication approach. Beverly Hills: Sage Publications.
Sizemore, Steve, 2004, Urban Eco-villages as an Alternative Model to Revitalizing Urban Neighborhoods: The Eco-village Approach of the Seminary Square/Price Hill Eco-village of Cincinnati, Ohio, A dissertation for the degree of MASTER OF COMMUNITY PLANNING submitted to The University of Cincinnati.
Soja, E. (1980). The socio-spatial dialectic. Annals of the Association of American Geographers, 70, Pp 207-225.
Talen, E., & Shah, S. (2007). Neighborhood evaluation using GIS: An exploratory study. Environment and Behavior, 39(5), Pp 583-615.
Vyankatesh, Terdalkar Sunil, 2004, Revitalizing Urban Neighborhoods: A Realistic Approach to Develop Strategies, A dissertation for Master of Community Planning submitted to University of Cincinnati, Pp 20-23.
Wilkinson, Derek, 2007, The Multidimensional Nature of Social Cohesion: Psychological Sense of Community, Attraction, and Neighboring, Springer Science+Business Media, pp. 214–229.
Zehner, R. B. (1971, November). Neighborhood and community satisfaction in new towns and less planned suburbs. Journal of the American Institute of Planners (AIP Journal), Pp 379-385.


Vernacular Architecture

01.1 Background

How sensitive are you to the built environment you live in? Have you ever come across a building that is rather ordinary but fascinating, with a story behind it? Have you ever wondered why people build the way they do, why they choose one material over another, or even why a building faces the direction it does?

Fig 1- Palmyra House Nandgaon, India (Style-Contemporary; Principles-Vernacular)

In answering these questions we need to look at communities, their identities, and their traditions over time, and this in essence is what is called “vernacular architecture.”

The purest definition of vernacular architecture is simple: it is architecture without architects. It is the pure response to a particular person’s or society’s building needs, and it fulfils those needs because it is crafted by the individual and the society it serves. Its building methods are tested through trial and error by the society in which they are built until, over time, they near perfection, tailored to the climatic, aesthetic, functional, and sociological needs of that society. Because the person constructing the structure tends to be the person who will use it, the architecture is perfectly tailored to that individual’s particular wants and needs.

Much of the assimilation seen in the vernacular architecture of India today comes from its trading partners. India is home to many different cultures and has seen rapid economic growth over the past few decades, which not only transforms people’s lives but also changes the everyday environment in which they live. People in the nation face a daily dual challenge: modernization on one hand and, on the other, preserving their heritage, including all of their built heritage. This gives us multiple perspectives on vernacular environments and the pure heritage of the country.

Fig 2-A modern adaptation of brick façade along with the contemporary design of the building. https://www.archdaily.com/530844/emerging-practices-in-india-anagram-architects

Gairole House, Gurgaon, Haryana, India

“Vernacular buildings” across the globe provide instructive examples of sustainable solutions to building problems, yet these solutions are assumed to be inapplicable to modern buildings. Despite some views to the contrary, there remains a tendency to consider innovative building technology the hallmark of modern architecture, because tradition is commonly viewed as the antonym of modernity. This problem is addressed by practical exercises and fieldwork studies in the application of vernacular traditions to current problems.

The humanistic desire to be culturally connected to one’s surroundings is reflected in a harmonious architecture, a typology that can be identified with a specific region. This sociological facet of architecture is present in a material, a color scheme, an architectural genre, or a spatial language or form that carries through the urban framework. The way human settlements are structured in modernity has been vastly unsystematic; current architecture exists on a singular basis, unfocused on the connectivity of the community as a whole.

Fig 3-Traditional jali screens, Rajasthan, India

Vernacular architecture adheres to basic green architectural principles of energy efficiency and utilizing materials and resources in close proximity to the site. These structures capitalize on the native knowledge of how buildings can be effectively designed as well as how to take advantage of local materials and resources. Even in an age where materials are available well beyond our region, it is essential to take into account the embodied energy lost in the transportation of these goods to the construction site.

Fig 4- Anagram Architects, Brick screen wall: SAHRDC building, Delhi, India

The effectiveness of climate responsive architecture is evident over the course of its life, in lessened costs of utilities and maintenance. A poorly designed structure which doesn’t consider environmental or vernacular factors can ultimately cost the occupant – in addition to the environment – more in resources than a properly designed building. For instance, a structure with large windows on the south façade in a hot, arid climate would lose most of its air conditioning efforts to the pervading sun, ultimately increasing the cost of energy. By applying vernacular strategies to modern design, a structure can ideally achieve net zero energy use, and be a wholly self-sufficient building.

01.2NEED FOR STUDY

Buildings use twice the energy of cars and trucks, consuming 30% of the world’s total energy and 16% of its water; by 2050 their share could go beyond 40%. They emit 3,008 tons of carbon, a main cause of global warming.

In India a quarter of the energy consumed goes into making and operating buildings, and almost half of the materials we dig out of the ground go into the construction of buildings, roads, and other projects. Buildings are therefore a very large cause of the environmental problems we face today. It is thus important to re-demonstrate that good, comfortable, sustainable buildings can play a major role in improving our environment, can keep pace with modern designs, and can even outperform them.

The form and structure of the built environment is highly controlled by factors such as local architecture and climate. In situations like these we need to study built forms in relation to their environment.

In India there is a wide variety of climates and a constant need to develop architecture that supports the environment. As architects we need to study modern designs as well as the functioning of the built form in relation to the local climate and cultural context.

Vernacular architecture, the simplest form of addressing human needs, is seemingly forgotten in modern architecture. But the amalgamation of the two can certainly lead to a more efficient built form.

However, due to recent rises in energy costs, the trend has sensibly swung the other way. Architects are embracing regionalism and cultural building traditions, given that these structures have proven to be energy efficient and altogether sustainable. In this time of rapid technological advancement and urbanization, there is still much to be learned from the traditional knowledge of vernacular construction. These low-tech methods of creating housing perfectly adapted to its locality are brilliant, precisely because they are the principles most often ignored by practicing architects. Hence, this subject demands study if future architects are to be sensitive to both the built form and the environment.

01.3 AIM

This study aims to explore the balance between contemporary architectural practice vis-à-vis vernacular architectural techniques. The work hinges on ideas and practices such as ecological design, modular and incremental design, standardization, and flexible and temporal concepts in the design of spaces. The blurred edges between the traditional and modern technical aspects of building design, as addressed by both vernacular builders and modern architects, are explored.

OBJECTIVES-

The above aim has been divided into the following objectives:

• Study of vernacular architecture in modern context.

• Study of parameters that make a building efficient.

• To explore new approaches towards traditional techniques.

• Study of the built environment following this concept.

• To explore approaches to achieve form follows energy.

01.4 FUTURE SCOPE

As noted earlier, the effectiveness of climate responsive architecture is evident over the course of its life in lessened costs of utilities and maintenance, and applying vernacular strategies to modern design can ideally achieve net zero energy use in wholly self-sufficient buildings.

Hence, the need to study this approach is becoming more relevant with the modern times.

01.5 HYPOTHESIS

Fusion of the vernacular and contemporary architecture will help in the design of buildings which are more sustainable and connect to the cultural values of people.

01.6 METHODOLOGY

01.7 RESEARCH QUESTIONS

• Is vernacular architecture actually sustainable in today’s context in terms of durability and performance?

• How has vernacular architecture influenced the urban architecture of India?

• Which is more loved by locals, local or modern architecture, and does the answer differ between residents of cities and those of rural areas?

• Will the passive design techniques of vernacular architecture contribute to reducing the environmental crisis caused by increasing pollution and other threats?

• Modern architecture has evolved from the use of concrete to steel, glass and other modern materials. Why was the sustainability of local materials compromised during this evolution, leading to the now-common perception that vernacular architecture is merely village architecture?

02.1 Introduction

The discussion and debate about the value of vernacular traditions in the architecture and formation in the settlements in today’s world is no longer polarized.

India undoubtedly has a great architectural heritage, which conjures images of the Taj Mahal, Fatehpur Sikri, South Indian temples and the forts of Rajasthan. But what represents modern architecture in India?

India is a country with a long history and deep-rooted traditions. Here, history is not a fossilized past but a living tradition. The very existence of tradition is proof in itself of its shared acceptance over changed times and circumstances, and thus of its continuum.

This spirit of adaptation and assimilation continues to be an integral aspect of Indian architecture in the post-independence era as well. Post-independence India voluntarily embraced modernism as a political statement by inviting the world-renowned modern architect Le Corbusier to design a capital city for a young and free nation with a democratic power structure.

Despite the strong continuum of classical architecture in Indian traditions, these new interventions gained currency and became the preferred choices for emulation by architects of the following generation. Not only Le Corbusier, but also Louis Kahn, Frank Lloyd Wright and Buckminster Fuller had their stints in India. Indian masters also trained and apprenticed overseas under international masters, and carried the legacy forward.

Figure 1 Terracotta Façade –A traditional material used to create a modern design for a façade https://in.pinterest.com/pin/356910339198958537/

02.2 Vernacular architecture

02.2.1 Definition

Vernacular architecture is an architectural style that is designed based on local needs and the availability of construction materials, and that reflects local traditions. Originally, vernacular architecture did not rely on formally schooled architects, but on the design skills and traditions of local builders.

Figure 2 A Traditional Kerala house https://in.pinterest.com/pin/538672805410302086/

Later, from the late 19th century onwards, many professional architects began exploring this architectural style and working with its elements. These architects included Le Corbusier, Frank Gehry and Laurie Baker.

Vernacular architecture can also be defined as the “architecture of the people”, with its ethnic, regional and local dialects. It is a style of architecture developed by local builders through practical knowledge and experience gained over time. Hence, vernacular architecture is the architecture of the people, by the people, for the people.

02.2.2 Influences on the vernacular

Vernacular architecture is influenced by a great range of different aspects of human behavior and environment, leading to differing building forms for almost every different context; even neighboring villages may have subtly different approaches to the construction and use of their dwellings, even if they at first appear the same. Despite these variations, every building is subject to the same laws of physics, and hence will demonstrate significant similarities in structural forms.

Climate

One of the most significant influences on vernacular architecture is the macro climate of the area in which the building is constructed. Buildings in cold climates invariably have high thermal mass or significant amounts of insulation. They are usually sealed in order to prevent heat loss, and openings such as windows tend to be small or non-existent. Buildings in warm climates, by contrast, tend to be constructed of lighter materials and to allow significant cross-ventilation through openings in the fabric of the building.

Buildings for a continental climate must be able to cope with significant variations in temperature, and may even be altered by their occupants according to the seasons.

Buildings take different forms depending on precipitation levels in the region – leading to dwellings on stilts in many regions with frequent flooding or rainy monsoon seasons. Flat roofs are rare in areas with high levels of precipitation. Similarly, areas with high winds will lead to specialized buildings able to cope with them, and buildings will be oriented to present minimal area to the direction of prevailing winds.

Climatic influences on vernacular architecture are substantial and can be extremely complex. Mediterranean vernacular, and that of much of the Middle East, often includes a courtyard with a fountain or pond; air cooled by water mist and evaporation is drawn through the building by the natural ventilation set up by the building form. Similarly, Northern African vernacular often has very high thermal mass and small windows to keep the occupants cool, and in many cases also includes chimneys, not for fires but to draw air through the internal spaces. Such specializations are not designed, but learned by trial and error over generations of building construction, often existing long before the scientific theories which explain why they work.

Culture

The way of life of building occupants, and the way they use their shelters, is of great influence on building forms. The size of family units, who shares which spaces, how food is prepared and eaten, how people interact and many other cultural considerations will affect the layout and size of dwellings.

For example, in the city of Ahmedabad, the dense fabric of the city is divided into pols: dense neighbourhoods developed on the basis of community and its cohesion. Traditionally, the pols are characterized by intricately carved timber-framed buildings built around courtyards, with narrow winding streets that ensure a comfortable environment within the hot, arid climate of Ahmedabad. The design of these settlements also included stepped wells and ponds to create a cooler microclimate. These are a great example of ecological sustainability shaped by cultural influences.

Figure 3 Mud house, Gujarat; traditional mirror work done on the elevation of the hut https://in.pinterest.com/pin/439875088574491684/

Culture also has a great influence on the appearance of vernacular buildings, as occupants often decorate buildings in accordance with local customs and beliefs.

For example, Warli art, which narrates stories through simple forms such as circles, triangles and squares, serves both as decoration and as a cultural tradition.

02.2.3 The Indian vernacular architecture

India is a country of great cultural and geographical diversity. Encompassing distinct zones such as the great Thar Desert of Rajasthan, the Himalayan mountains, the Indo-Gangetic Plains, the Ganga delta, the tropical coastal region along the Arabian Sea and the Bay of Bengal, the Deccan plateau and the Rann of Kutch, each region has its own cultural identity and its own distinctive architectural forms and construction techniques, which have evolved over the centuries as a response to its environmental and cultural setting. A simple dwelling unit takes many distinct forms depending on the climate, the materials available, and the social and cultural needs of the community.

Indian vernacular architecture is the informal, functional architecture of structures designed without formal architectural schooling, and it reflects the rich diversity of India's climate, locally available building materials, and the intricate variations in local social customs and craftsmanship. It has been estimated that, worldwide, close to 90% of all building is vernacular, meaning that it is built for daily use by ordinary local people and by local craftsmen. The term vernacular architecture in general refers to informal building structures raised through traditional building methods by local builders, without the services of a professional architect. It is the most widespread form of building.

Indian vernacular architecture has evolved over time through the skillful craftsmanship of the local people. Despite the diversity, this architecture can be broadly divided into three categories.

• Kuccha

• Pukka

• Semi-pukka

“Vernacular traditions are dynamic and creative processes through which people, as active agents, interpret past knowledge and experience to face the challenges and demands of the present. Tradition is an active process of transmission, interpretation, negotiation and adaptation of vernacular knowledge, skills and experience.”

- Asquith and Vellinga (2006)

IMG-Vellore house, Chennai, India

The architecture that has evolved over the centuries may be defined as “architecture without architects”.

1. KUCCHA BUILDINGS

They are the simplest and most honest form of buildings, constructed using materials as per their availability. The practical limitations of the available building materials dictate the specific form. The advantage of kuccha construction is that the materials are cheap and easily available, and relatively little labour is required. It can be said that kuccha architecture is not built for posterity but with a certain lifespan in mind, after which it will be renewed.

According to Dawson and Cooper (1998), the beauty of kuccha architecture lies in the practice of developing practical and pragmatic solutions to use local materials to counter the environment in the most economically effective manner.

For example, in the North East, bamboo is used to combat a damp, mild climate, while in Rajasthan and Kutch, mud, sun-baked bricks and other locally available materials are used to mould structures; in the Himalayas, stone and sunken structures offer protection from the harsh cold, while in the south, thatch and coconut palm are used to create pitched roofs to confront a fierce monsoon.

MATERIALS - Mud, grass, bamboo, thatch or sticks, stone, lime

TECHNIQUE OF CONSTRUCTION - These houses are constructed with earth or soil as the primary construction material. Mud is used for plastering the walls.

IMG-House dwellings in Himalayas with sunken construction and stone used as insulating materials to block winds during harsh winters, HIMACHAL PRADESH

2. PUKKA BUILDINGS

The architectural expression of pukka buildings is often determined by the establishments or art forms developed by the community, such as Warli paintings. Pukka buildings are generally built with permanence in mind. Often using locally available materials, pukka architecture has evolved to produce architectural typologies which are again region-specific.

MATERIALS-Stone, brick, clay etc.

TECHNIQUE OF CONSTRUCTION - These houses are built with masonry structures of brick or stone, depending upon the locally available material in the region. Manual labour requirements are much higher in the construction of these structures than for kuccha houses.

3. SEMI PUKKA BUILDINGS

A combination of the kuccha and pukka styles together forms the semi-pukka. It has evolved as villagers have acquired the resources to add elements constructed of the durable materials characteristic of a pukka house. Its architecture has always evolved organically with the needs and resources of the local people residing in the specific region. The characteristic feature of semi-pukka houses is that their walls are made from pukka materials such as brick in cement or lime mortar, stone, or clay tile, while the roof is constructed in the kuccha way, using thatch, bamboo, etc. as the principal materials. Construction of these houses employs less manual labour than that of pukka houses. A typical combination is thatch roofing over mud adobe walls with lime plaster.

02.2.4 CLIMATE RESPONSIVE ARCHITECTURE

The climate of India comprises a wide range across its terrain. Five zones can be identified in India on the basis of their climate: Cold, Hot and Dry, Composite, Temperate, and Warm and Humid.

Figure 4 Climate zones of India

Source- http://high-performancebuildings.org/climate-zone.php#;

These zones can be further narrowed down to three on the basis of passive techniques used and architectural styles of different regions.

1. HOT AND DRY

2. WARM AND HUMID

3. COLD

• HOT AND DRY

The hot and dry zones of India include Ahmedabad, Rajasthan, Madhya Pradesh and Maharashtra.

A hot and dry climate is characterized by a mean monthly maximum temperature above 30 ºC. The region in this climate is usually flat with sandy or rocky ground conditions.

In this climate, it is imperative to control solar radiation and movement of hot winds. The building design criteria should, thus, provide appropriate shading, reduce exposed area, and increase thermal capacity.

Design Considerations for building in Hot and dry climate-

The hot and dry climate is characterized by very high radiation levels and ambient temperatures, accompanied by low relative humidity. Therefore, it is desirable to keep the heat out of the building, and if possible, increase the humidity level. The design objectives accordingly are:

(A) Resist heat gain by:

• Decreasing the exposed surface

• Increasing the thermal resistance

• Increasing the thermal capacity

• Increasing the buffer spaces

• Decreasing the air-exchange rate during daytime

• Increasing the shading

(B) Promote heat loss by:

• Ventilation of appliances

• Increasing the air exchange rate during cooler parts of the day or night-time

• Evaporative cooling (e.g. roof surface evaporative cooling)

• Earth coupling (e.g. earth-air pipe system)

Figure 5 JODHPUR CITY CLOSELY STACKED HOUSES TO PREVENT HEAT GAIN AND TO PROVIDE SHADE Source- http://www.traveldglobe.com/destination/jodhpur

(a) Planning: An indigenous planning layout was followed for palaces and simple small dwellings, as seen in Shahjahanabad, Jaisalmer and many other cities in India. This type of dense clustering ensured that the buildings were not directly exposed to the sun. It prevents solar gain and hot winds from entering the premises, and also allows cool air to circulate within the buildings.

Figure 6 Hot and dry region settlement https://www.slideshare.net/sumiran46muz/hot-and-dry-climate-65931347

(b) Waterbodies: Use of waterbodies such as ponds and lakes. These not only act as heat sinks, but can also be used for evaporative cooling. Hot air blowing over water gets cooled which can then be allowed to enter the building. Fountains and water cascades in the vicinity of a building aid this process.

Figure 7 AMBER FORT RAJASTHAN, INDIA A garden is positioned amidst the lake to provide a cooler microclimate for outdoor sitting.

Source-https://commons.wikimedia.org/wiki/File:Maota_Lake.JPG

Figure 8 Earth berming technique: Evaporative cooling through water feature Source-http://mnre.gov.in/solar-energy/ch5.pdf

(c) Street width and orientation: Streets are narrow so that they cause mutual shading of buildings. They need to be oriented in the north-south direction to block solar radiation.

Figure 9 Design techniques in Hot and dry regions Source-http://mnre.gov.in/solar-energy/ch5.pdf

(d) Open spaces and built form: Open spaces such as courtyards and atria are beneficial as they promote ventilation. In addition, they can be provided with ponds and fountains for evaporative cooling.

Courtyards act as heat sinks during the day and radiate the heat back to the ambient at night. The size of the courtyards should be such that the mid-morning and the hot afternoon sun are avoided. Earth-coupled building (e.g. earth berming) can help lower the temperature and also deflect hot summer winds.

Figure 10 Courtyard planning of Hot and dry region Source-http://mnre.gov.in/solar-energy/ch5.pdf

(2) Orientation and planform

An east-west orientation (i.e. longer axis along the east-west), should be preferred. This is due to the fact that south and north facing walls are easier to shade than east and west walls.

It may be noted that during summer, it is the north wall which gets significant exposure to solar radiation in most parts of India, leading to very high temperatures in north-west rooms. For example, in Jodhpur, rooms facing north-west can attain a maximum temperature exceeding 38 ºC. Hence, shading of the north wall is imperative.

The surface-to-volume (S/V) ratio should be kept as low as possible to reduce heat gains.
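As a rough illustration of this guideline, the sketch below (a minimal Python example with hypothetical dimensions, not from the source text) compares the S/V ratio of a compact plan with that of an elongated plan enclosing the same volume:

```python
def surface_to_volume(length, width, height):
    """Exposed surface area (four walls plus roof) divided by enclosed volume.

    The ground slab is excluded on the assumption that it is earth-coupled
    rather than exposed to ambient air; include it otherwise.
    """
    walls = 2 * (length + width) * height
    roof = length * width
    volume = length * width * height
    return (walls + roof) / volume

# Two hypothetical plans enclosing the same 300 m^3 at 3 m height:
compact = surface_to_volume(10, 10, 3)    # square 10 m x 10 m plan
elongated = surface_to_volume(25, 4, 3)   # elongated 25 m x 4 m plan
print(round(compact, 2), round(elongated, 2))  # the compact form has the lower S/V
```

The compact form exposes less envelope per unit of conditioned volume, which is why a low S/V ratio reduces heat gain in a hot and dry climate.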

Cross-ventilation must be ensured at night as ambient temperatures during this period are low.

(3) Building envelope

(a) Roof: The diurnal range of temperature being large, the ambient night temperatures are about 10 ºC lower than the daytime values and are accompanied by cool breezes. Hence, flat roofs may be considered in this climate as they can be used for sleeping at night in summer as well as for daytime activities in winter.

Figure 11 Flat roof for reverse heat gain during night Source-http://mnre.gov.in/solar-energy/ch5.pdf

The material of the roof should be massive; a reinforced cement concrete (RCC) slab is preferred to asbestos cement (AC) sheet roof. External insulation in the form of mud phuska with inverted earthen pots is also suitable. A false ceiling in rooms having exposed roofs can help in reducing the discomfort level.

Evaporative cooling of the roof surface and night-time radiative cooling can also be employed. In case the former is used, it is better to use a roof having high thermal transmittance (a high U-value roof rather than one with lower U-value). The larger the roof area, the better is the cooling effect.

The maximum requirement of water per day for a place like Jodhpur is about 14.0 kg per square metre of roof area cooled. Spraying of water is preferable to an open roof-pond system. One may also consider using a vaulted roof, since it provides a larger surface area for heat loss compared to a flat roof.
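The water-quantity figure above can be turned into a simple sizing estimate. The sketch below assumes the 14.0 kg/m²/day maximum quoted for Jodhpur and an illustrative roof area; actual demand varies with weather and roof construction:

```python
def roof_cooling_water(roof_area_m2, rate_kg_per_m2_day=14.0):
    """Peak daily water requirement for roof-surface evaporative cooling.

    The default rate is the maximum value cited in the text for Jodhpur;
    it is an upper-bound rule of thumb, not a universal constant.
    """
    return roof_area_m2 * rate_kg_per_m2_day

# For a hypothetical 120 m^2 roof (illustrative value):
print(roof_cooling_water(120))  # 1680.0 kg, i.e. roughly 1680 litres per day
```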

(b) Walls: In multi-storeyed buildings, walls and glazing account for most of the heat gain. It is estimated that they contribute about 80% of the annual cooling load of such buildings. So, the control of heat gain through the walls by shading is an important consideration in building design.

(c) Fenestration: In hot and dry climates, minimizing the window area (in terms of glazing) can definitely lead to lower indoor temperatures. It is found that providing a glazing size of 10% of the floor area gives better performance than 20%. More windows should be provided in the north façade of the building as compared to the east, west and south, as it receives less radiation during the year. All openings should be protected from the sun by using external shading devices such as chajjas and fins.

Moveable shading devices such as curtains and venetian blinds can also be used. Openings are preferred at higher levels (ventilators) as they help in venting hot air. Since daytime temperatures are high during summer, the windows should be kept closed to keep the hot air out and opened during night-time to admit cooler air.
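The 10%-of-floor-area glazing guideline above reduces to a one-line calculation. The function and the 80 m² room below are illustrative assumptions, not from the source:

```python
def glazing_area(floor_area_m2, fraction=0.10):
    """Suggested glazing area for hot and dry climates.

    `fraction` encodes the rule of thumb reported in the text: glazing
    sized at about 10% of floor area performs better than 20%.
    """
    return floor_area_m2 * fraction

# For a hypothetical 80 m^2 room (illustrative value):
print(round(glazing_area(80), 2))  # about 8 m^2 of glazing
```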

Figure 12 Louvers for providing shade and diffused lighting

http://www.nzdl.org

The use of ‘jaalis’ (lattice work) made of wood, stone or RCC may be considered, as they allow ventilation while blocking solar radiation.

(d) Color and texture: Change of color is a cheap and effective technique for lowering indoor temperatures. Colors having low absorptivity should be used to paint the external surface. Darker shades should be avoided for surfaces exposed to direct solar radiation. The surface of the roof can be of white broken glazed tiles (china mosaic flooring). The surface of the wall should preferably be textured to facilitate self-shading.

Remarks: As the winters in this region are uncomfortably cold, windows should be designed such that they encourage direct gain during this period. Deciduous trees can be used to shade the building during summer and admit sunlight during winter. There is a general tendency to think that well-insulated and very thick walls give a good thermal performance. This is true only if the glazing is kept to a minimum and windows are well-shaded, as is found in traditional architecture.

However, in the case of non-conditioned buildings, a combination of insulated walls and a high percentage of glazing will lead to very uncomfortable indoor conditions. This is because the building will act like a greenhouse or oven, as the insulated walls will prevent the radiation admitted through windows from escaping back to the environment. Indoor plants can be provided near the window, as they help in evaporative cooling and in absorbing solar radiation. Evaporative cooling and earth-air pipe systems can be used effectively in this climate. Desert coolers are extensively used in this climate, and if properly sized, they can alleviate discomfort by as much as 90%.

• Warm and humid

The warm and humid climate is characterized by high temperatures accompanied by very high humidity, leading to discomfort. Thus, cross-ventilation is both desirable and essential. Protection from direct solar radiation should also be ensured by shading.

The main objectives of building design in this zone should be:

(A) Resist heat gain by:

• Decreasing exposed surface area

• Increasing thermal resistance

• Increasing buffer spaces

• Increasing shading

• Increasing reflectivity

(B) To promote heat loss by:

• Ventilation of appliances

• Increasing air exchange rate (ventilation) throughout the day

• Decreasing humidity levels

The general recommendations for building design in the warm and humid climate are as follows:

(1) Site

(a) Landform: The consideration of landform is immaterial for a flat site. However, if there are slopes and depressions, then the building should be located on the windward side or crest to take advantage of cool breezes.

(b) Waterbodies: Since humidity is high in these regions, water bodies are not essential.

(c) Open spaces and built form: Buildings should be spread out with large open spaces for unrestricted air movement. In cities, buildings on stilts can promote ventilation and cause cooling at the ground level.

(d) Street width and orientation: Major streets should be oriented parallel to or within 30º of the prevailing wind direction during summer months to encourage ventilation in warm and humid regions. A north-south direction is ideal from the point of view of blocking solar radiation. The width of the streets should be such that the intense solar radiation during late morning and early afternoon is avoided in summer.
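The 30º orientation tolerance can be checked numerically. In this sketch, angles are measured in degrees clockwise from north, and a street axis is treated as undirected (compared modulo 180º); these conventions are assumptions for illustration:

```python
def within_wind_tolerance(street_axis_deg, wind_deg, tol=30.0):
    """True if a street axis lies within `tol` degrees of the prevailing
    wind direction. Street axes are undirected, so the angular difference
    is folded into the range 0-90 degrees before comparison."""
    diff = abs(street_axis_deg - wind_deg) % 180.0
    return min(diff, 180.0 - diff) <= tol

print(within_wind_tolerance(90, 70))   # 20 deg off the wind -> True
print(within_wind_tolerance(0, 70))    # 70 deg off the wind -> False
```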

(2) Orientation and planform

Since the temperatures are not excessive, free plans can be evolved as long as the house is under protective shade. An unobstructed air path through the interiors is important. The buildings could be long and narrow to allow cross-ventilation. For example, a singly loaded corridor plan (i.e. rooms on one side only) can be adopted instead of a doubly loaded one. Heat- and moisture-producing areas must be ventilated and separated from the rest of the structure (Fig. 5.21) [8]. Since temperatures in the shade are not very high, semi-open spaces such as balconies, verandahs and porches can be used advantageously for daytime activities. Such spaces also give protection from rainfall. In multi-storeyed buildings, a central courtyard can be provided with vents at higher levels to draw away the rising hot air.

(3) Building envelope

(a) Roof: In addition to providing shelter from rain and heat, the form of the roof should be planned to promote air flow. Vents at the roof top effectively induce ventilation and draw hot air out. As diurnal temperature variation is low, insulation does not provide any additional benefit for a normal reinforced cement concrete (RCC) roof in a non-conditioned building.

However, very thin roofs having low thermal mass, such as asbestos cement (AC) sheet roofing, do require insulation, as they tend to rapidly radiate heat into the interiors during the daytime.

Fig- Padmanabhapuram Palace

A double roof with a ventilated space in between can also be used to promote air flow.

(b) Walls: As with roofs, the walls must also be designed to promote air flow. Baffle walls, both inside and outside the building, can help to divert the flow of wind inside. They should be protected from the heavy rainfall prevalent in such areas. If adequately sheltered, exposed brick walls and mud-plastered walls work very well by absorbing humidity and helping the building to breathe. Again, as for roofs, insulation does not significantly improve the performance of a non-conditioned building.

(c) Fenestration: Cross-ventilation is important in the warm and humid regions. All doors and windows are preferably kept open for maximum ventilation for most of the year. These must be provided with venetian blinds or louvers to shelter the rooms from the sun and rain, as well as for the control of air movement.

Openings of a comparatively smaller size can be placed on the windward side, while the corresponding openings on the leeward side may be bigger, facilitating a plume effect for natural ventilation. The openings should be shaded by external overhangs. Outlets at higher levels serve to vent hot air. A few examples illustrating how the air movement within a room can be better distributed are shown in the figures below.

(d) Color and texture: The walls should be painted with light pastel shades or whitewashed, while the surface of the roof can be of broken glazed tile (china mosaic flooring). Both techniques help to reflect the sunlight back to the ambient, and hence reduce heat gain of the building. The use of appropriate colors and surface finishes is a cheap and very effective technique to lower indoor temperatures. It is worth mentioning that the surface finish should be protected from/ resistant to the effects of moisture, as this can otherwise lead to growth of mould and result in the decay of building elements.

Remarks: Ceiling fans are effective in reducing the level of discomfort in this type of climate. Desiccant cooling techniques can also be employed as they reduce the humidity level. Careful water proofing and drainage of water are essential considerations of building design due to heavy rainfall. In case of air-conditioned buildings, dehumidification plays a significant role in the design of the plant.


Figure 13 Traditional Kerala house

Parameters for sustainability in Warm and Humid Climate

Ecological site planning: The house is generally designed in response to the ecology (the backwaters, plantations, etc.), allowing the building to blend effortlessly into the landscape of coconut, palm and mango trees.

The house is divided into quarters according to Vastu Shastra. It is generally considered desirable to build the house in the south-west corner of the north-west quadrant. The south-east corner is reserved for cremation purposes, while the north-east corner has a bathing pool.

Local materials: The building is made from locally available stone and timber, with terracotta tiles for the roof.

Physical response to climate: The plan is generally square or rectangular in response to the hot and humid climate. The central courtyard and the deep verandahs around the structure ensure cross-ventilation. The south-west orientation of the house prevents harsh sun rays from penetrating the house. Sloping roofs are designed to combat the heavy monsoon of the region, and the overhanging roofs with projecting eaves help to provide shade and shield the walls from rain.

Embodied energy: The building uses materials like stone and timber, which are reservoirs of embodied energy and have the potential to be recycled or reused.

Socio-economic adaptability: Toilets have been integrated into the design of the house, and RCC (reinforced cement concrete) has been introduced to build houses with larger spans.


Thomas D’Arcy McGee – Canadian Figure

Thomas D’Arcy McGee is a historical figure who, as Charles Macnab states, “was the first political leader in Canada to be assassinated.” McGee is described by historian Alexander Brady as “[having] a unique place among the Canadian statesman of his time.” The Canadian Archives state that Thomas D’Arcy McGee “was born in Carlingford, Ireland, the son of James McGee, and Dorcas Catherine Morgan.” It was during his childhood that McGee’s intelligence became known to people outside of the family. As explained by author T.P. Slattery, “a hedge schoolmaster, Michael Donnelly, helped him along with his books and fertilized his dreams.” When asked about McGee as a student, Donnelly referred to him as the “brightest scholar [he] ever taught.” Thomas D’Arcy McGee did not live in Ireland his entire life, as “in 1842 McGee left Ireland and travelled to North America.” T.P. Slattery details the trip when he writes that “McGee left Ireland with his sister Dorcas to go and live in Providence Rhode Island.” Once he arrived in America, McGee became a publisher; Charles Macnab states that “he was publishing his New York Nation at New York, Boston, and Philadelphia, and shipping it to Ireland, assuming a nationalist leadership as best he could over the remnants of Ireland.” He would spend a long time in America before, “in the spring of 1857 McGee moved to Montreal, at the invitation of leaders of that city’s Irish community who expected him to promote their interest.” Thomas D’Arcy McGee made the jump into political life in Canada as “he was elected to the Legislative assembly in December of 1857… He joined the cabinet of John Sandfield MacDonald in 1862, and chaired that year’s Intercolonial Railway conference at Quebec City.” However, that career ended tragically; as stated by author James Powell, “Thomas D’Arcy McGee, the much revered Canadian statesman, and orator, died by an assassin’s bullet on April 7, 1868 entering his boarding house on Sparks Street.” This death is significant, as in a letter Lady Agnes MacDonald, the prime minister’s wife, wrote: “McGee is murdered… lying in the street… shot through the head.” Thomas McGee was a great public speaker and a highly intelligent figure who played a role in Canadian history as the victim of one of the first political assassinations in the history of the country.

When thinking about famous Canadian historical figures, one name that factors into the story of Canada is Thomas D’Arcy McGee. As historian T.P. Slattery explains, “Thomas D’Arcy McGee was born on Wednesday, April 13, 1825, in Carlingford Ireland on the Rosstrevor coast.” Thomas was raised by his parents, “James McGee and Dorcas Catherine Morgan.” Carlingford would be just McGee’s first residence, as “When D’Arcy was eight, the family moved south to Wexford.” It was tragically during this time that “[McGee’s] mother was the victim of an accident and died on August 22, 1833. This was a heavy blow.” Thomas D’Arcy McGee was an Irish-born citizen who at a young age lost a central figure in his life.

The importance of Thomas D’Arcy McGee’s mother in his life lay in her influence on his ideological beliefs. One factor of influence was a nationalistic belief; as Alexander Brady states, “she was a woman who cherished the memory of her father’s espousal of the national cause and preserved all his national enthusiasms which she sedulously fed to her son.” McGee also developed his knowledge of Irish literature from his mother, as Brady states: “She was interested in all of the old Irish myths and traditions and poetry, and these she related to her [son].” Thomas D’Arcy McGee’s nationalist ideologies were instilled in him at a young age, and they led him to become “an ardent idealist for the nationality of his country.”

Thomas D’Arcy McGee was highly intelligent from a young age. T.P. Slattery writes in his book, “A hedge schoolmaster, named Michael Donnelly, helped him along with his books.” Donnelly was a mentor figure to McGee, helping him with his schooling. When asked about McGee as a student, Donnelly replied that McGee was “the brightest scholar I ever taught.” Thomas D’Arcy McGee was also a great public speaker. As T.P. Slattery writes, “In Wexford, D’Arcy had a boyish moment of triumph when he gave a speech before the Juvenile Temperance society, and Father Matthew, who happened to be there, reached over and tousled his hair.” Alexander Brady writes of McGee’s performance that he “delivered before the society a spell-binding oration, on which he received the hearty congratulations… This was [McGee’s] first public speech.” Thomas D’Arcy McGee’s high level of intelligence and gift for public speaking were both noticeable at a young age.

Thomas D’Arcy McGee was not a lifelong resident of Ireland, and he soon moved on to another chapter in his life. There were multiple reasons for McGee leaving Ireland, one being that “McGee’s father had married again, and the stepmother was not popular with the children.” Another reason had to do with the economic realities of Ireland, as explained by Alexander Brady: “The economic structure of Irish society was diseased. Approximately seven million were vainly endeavouring to wring a lean subsistence from the land, and hundreds of thousands were on the verge of famine.” With that in mind, Robin Burns explains on biographi.ca that “D’Arcy McGee left for North America in 1842, one of almost 93000 Irishmen who crossed the Atlantic that year.” Thomas set out from Ireland “on April 7th,” not yet seventeen, “with his sister Dorcas to go and live with their aunt, in Providence Rhode Island.” Thomas D’Arcy McGee had just set out on a new chapter in his life.

Thomas D’Arcy McGee arrived in America with “few material possessions beyond the clothes on his back.” One of the first things he did upon landing in America was deliver a speech; as T.P. Slattery explains, “he was on his feet speaking at an Irish assembly.” It was at this speech that McGee stated his feelings about “British rule in Ireland”: “the sufferings which the people of that unhappy country have endured at the hands of a heartless, bigoted despotic government are well known… Her people are born slaves, and bred in slavery from the cradle; they know not what freedom is.” This message condemning British rule over the Irish made an impact, as it launched Thomas D’Arcy McGee into a new profession in the United States as a writer, where he “joined the staff of the Boston Pilot.”

Thomas D’Arcy McGee had just moved to America, and “within weeks he was a journalist with the Boston Pilot, the largest Irish Catholic paper in the [United] [States].” In his new role, McGee was described as “the pilot’s traveling agent, [who] for the next two years travelled through New England collecting overdue accounts and new subscribers.” While on these trips, McGee became connected to a group known as the “young Ireland Militants in Dublin.” One of the key figures of the movement was “Daniel O’Connell [who] held to a non violent political philosophy, but in 1843 he followed a change in strategy when he allowed some of the young militants who had joined the association after 1841 to plan and manage a series of rallies of hundreds of thousands across Ireland to hear him.” Another member of the Young Ireland group was “a young Ireland moderate Gavan Duffy who was the publisher of the Nation.” It was through the Nation that McGee became connected to the Young Irelanders, as Gavan Duffy took an interest in him. Duffy had long admired McGee, as Charles Macnab writes: “Duffy had been impressed enough with young McGee to have engaged with him almost immediately to write a volume for Duffy’s library of Ireland series.” Hereward Senior details Duffy’s interest in McGee’s ability when he writes, “the talents of D’Arcy McGee were recognized by Duffy, editor of the Nation who invited McGee to join its staff and McGee subsequently became part of the ‘Young Ireland’ group.” In a short amount of time, Thomas D’Arcy McGee had gone from newly arrived immigrant in the United States to a writer recognized by the public for his ability.

Outside of his professional life, Thomas D’Arcy McGee made a deeply personal decision. As explained by historian David Wilson, “On Tuesday, 13 July he married Mary Theresa Caffrey, whom he met at an art exhibition in Dublin.” The two expressed “their love in romantic poems, and letters show that she cared deeply about him.” McGee’s constant travel, however, took a toll on the marriage, as Wilson states: “They were torn apart by exile and continually uprooted as McGee moved from Dublin to New York, Boston, Buffalo, back to New York… When McGee was on the road Mary experienced periods of intense loneliness; when he was at home she often had to deal with his heavy drinking.” The family suffered through tragedy, as “of their five children, only two survived into adulthood.” Yet through all that tension and tragedy there remained a strong bond within the family, as Wilson writes: “there was great affection and tenderness within the family, as McGee’s letters to his children attest. Mary continued to write of ‘my darling Thomas’, until the end of her days.” Outside of his work as a writer, Thomas D’Arcy McGee had a personal life with a family he evidently cared for.

McGee’s final departure from Ireland stemmed from events he witnessed while in the country. In 1847, “the Irish confederation was frustrated in the general election, and a radical faction developed calling for armed action.” As explained by historian Hereward Senior, “The young Irelanders were converted to the idea of a barricade revolution carried out by a civilized militia. They conspired to re-enact the French revolution on Irish soil. These young Irelanders were more attracted by the romance of revolution than by the republican form of government.” Thomas D’Arcy McGee took part in this revolution, as Alexander Brady explains: “he [consulted] the Irish revolutionists in Edinburgh, and Glasgow and enrolled four hundred volunteers.” McGee’s time in the revolution came to an abrupt end when “he was arrested for sedition on the eve of his first wedding anniversary, the charges though are dismissed the next day.” This led to McGee ultimately leaving Ireland, as “with a sad heart [McGee] boarded a brig at the mouth of the Foyle and sailed for the United States… In America he began at the age of twenty three a new life destined to plead for causes to prove more successful than the Irish independence.” This was the end of Thomas D’Arcy McGee’s life in Ireland.

Upon returning to the United States, Thomas D’Arcy McGee moved on to a different chapter in his life. He began publishing papers, one of which was known as the New York Nation; as Charles Macnab explains, McGee “was publishing his New York Nation at New York, Boston, and Philadelphia.” McGee also made sure his paper reached Ireland, as Macnab writes: “McGee shipped the paper to Ireland assuming a nationalist voice as best he could over the remnants of Young Ireland and the future political and cultural directions of the Irish world.” In this paper McGee made clear that he was willing to take a radical approach, and as David Wilson explains, one of his targets was the Catholic Church. Wilson writes: “the reference [McGee] [makes] to ‘priestly preachers of cowardice’ was pivotal; the catholic church had transformed heroic Celtic warriors into abject slaves. ‘The present generation of Irish Priests,’ he wrote, ‘have systematically squeezed the spirit of resistance out of the hearts of the people.’” In response to this criticism, the Church condemned Thomas D’Arcy McGee in a statement by “Bishop John Hughes,” who described McGee’s writings as having transferred the “odium of oppression” from the British government to the Catholic clergy. Hughes demanded that “unless the Nation shall purify its tone… let every diocese, every parish, every catholic door be shut against it.” The eventual fate of the Nation was explained by T.P. Slattery: “The McGees were just in time to witness the collapse of his New York Nation… He moved on to Boston planning to sail back to Ireland.”

Thomas D’Arcy McGee did not end up moving back to Ireland, as T.P. Slattery writes: “McGee postponed his return to Ireland and remained with his young family in Boston. There he picked up a few fees lecturing.” As explained in the Quebec history, the next chapter of McGee’s life began when “in 1850 McGee moved to Boston and founded the American Celt, and in 1852 he moved to Buffalo where he published the American Celt for five years.” The purpose of the Celt, as explained by T.P. Slattery, was to focus on “aid for the ancient missionary schools; encourage the Irish industrial enterprise, develop literature, and revive the music of Ireland.” The intended audience was “Irish workers who were irritated by the unexciting views of the Boston Pilot, and took for granted that McGee would be more to their taste as a rebel.” While in America McGee was also an author, publishing multiple books about the Irish people. One example is “A history of Irish settlers in North America (1851) to demonstrate that the Irish had made significant contribution to the history of North America.” McGee also wrote three other books: “A history of the attempts to establish protestant reformation in Ireland (1853), the Catholic history of North America (1855), and the life of Rt. Rev Edward Maginn (1857).” In the same year his last book appeared, a new chapter of McGee’s life opened, as “In 1857 he moved from Buffalo to Montreal, Lower Canada at the invitation of some Irish Canadians.” Thomas D’Arcy McGee was now moving to his third country.

While in Canada, Thomas D’Arcy McGee continued writing. As Hereward Senior notes, “Upon his arrival in Montreal McGee started to publish the New Era.” McGee’s new paper was quite significant to Canadian history, as “a series of editorials and speeches by D’Arcy McGee had become historic. They constitute the evidence that McGee was the first of all the fathers of confederation to advocate a federal basis for a new nation.” What Slattery is saying is that McGee was the first major endorser of the formation of what would become known as Canada. Slattery continues: “It began unnoticed in an article of June 27 called ‘Queries for Canadian Constituencies,’ with an acute analysis of some of the practical issues. This led the way to three important editorials… written on August 4, 6, and 8, 1857.” McGee’s writings in the New Era led to the next major decision of his life, as “In December 1857 D’Arcy McGee was one of three members elected to represent Montreal in the Legislative Assembly. He had been nominated by the St Patrick’s society of Montreal.”

Regarding what McGee discussed in his editorials for the New Era, T.P. Slattery states: “The first editorial stressed the need for union as distinct from uniformity. The second was on the role of the French language, and the third was on confederation.” In the first editorial McGee explained that “Uniform currency was needed; so were a widespread banking and credit system, the establishment of courts of last resort and an organized postal system ‘one is much more certain of his letters from San Francisco.’” The next editorial was based on “the quality of Quebec,” which McGee discussed in an editorial of April 6th, 1858, “urging parliament to adopt the proposals for federation which were to be introduced by Alexander Galt… ‘we are in Canada two nations, and must mutually respect each other. Our political union must, to this end be made more explicit if we are to continue for the most general purpose as a united people.’” The third editorial states: “‘the federation of feeling must precede the federation of fact.’ That epigram not only exposed the weakness of previous unions; it expressed McGee’s passion to arouse such a spirit, so a new people could come together in the north.” Summarizing McGee’s overall political philosophy, Slattery states: “[McGee] was a devoted student of Edmund Burke for theory, and of Daniel O’Connell for practice. His studies sharpened by his intelligence, and corrected as he matured through his sharper experiences.” With his political ideology out in the open, Thomas D’Arcy McGee had his “springboard for his start in Canadian politics. In December of 1857 he was elected to the Legislative assembly of the province of Canada.”

Thomas D’Arcy McGee had thus entered a new profession: politics. As the Quebec history states, “in 1858 McGee was elected as an Irish Roman Catholic to the Legislative assembly of Canada for Montreal west. A constituency which he represented until 1867, and he was re-elected to the house of commons of the new dominion.” He sat with the reform government of George Brown in 1858. As Alexander Brady explains of his reasons for supporting Brown, “McGee was won by Brown’s frank, fearless character. Moreover, he believed that the Irish catholics could subscribe with little reservation to the reform leader’s principles.” One of the principles McGee shared with Brown was “a hostility to the intolerant Toryism of the old school,” along with faith in “the extension of popular suffrage economy in public expenditure and reduction of taxes.” Once parliament returned in “March 1858, the parliamentary session began. From the outset McGee hurried into the leading debates and attacked the corruptions as the government party was descried, with all the weapons of wit and searching sarcasm.” What McGee became known for during his early years in government was what he had been great at his whole life. Alexander Brady notes that a reporter from the Globe “wrote that [McGee] was undoubtedly the most finished orator in the house… he had the power of impressing an audience which can only be accounted for by attributing to those who possess it some magnetic influence not common to everyone.” McGee may have moved from writer to politician, but his childhood gift for public speaking had stayed with him.

Life for Thomas D’Arcy McGee in Brown’s political party was not always smooth. As explained by David Wilson, “the reform party began to alarm its French Canadian wing. Sensing an opportunity, the liberal Conservatives moved a non confidence resolution against the government.” This led to a debate in which “all the leading figures in government defended its record- all of them except McGee, who was getting drunk with friends when he was scheduled to speak. His erratic behaviour was symptomatic of deeper disillusionment with the reform party.” With McGee’s behaviour in question, the party leadership moved against him: the leaders of the reform party agreed that “a new reform government must abandon the Intercolonial railway, and that there would be no place for McGee in the new cabinet.” McGee’s political positions further alienated him; as David Wilson explains, “McGee was a loose cannon” whose position on separate schools alienated both the Clear Grits and the Rouges. For the members of the Reform party McGee had become a liability, and “McGee felt that he had been strongly stabbed in the back by his own colleagues.” Feeling alienated by the members of his party, Thomas D’Arcy McGee “transferred his allegiance to the conservatives, where he became minister of agriculture in the MacDonald Government of 1864.” McGee had thus crossed the political aisle and embraced a new party.

As a member of John A. Macdonald’s party, McGee’s status increased. As explained in the Canadian archives:

In 1864 McGee had helped to organize the Canadian visit, a diplomatic goodwill tour of the Maritimes that served as a prelude to the first confederation conference. During this tour, McGee delivered many speeches in support of union and lived up to his reputation as the most talented politician of the era. He was a delegate to the Charlottetown conference and the Quebec conference. In 1865 he delivered two speeches on the union of the provinces, which were subsequently bound and published.

McGee’s moments at the two conferences are explained by T.P. Slattery, who writes that during the Quebec conference, “McGee speaking with an ease of manner moved an amendment. He proposed that the provision be added to the provincial power over education… Andrew Archibald MacDonald, sitting at the far end of the table to McGee’s left seconded the amendment.” In explaining the logic behind his amendment, McGee spoke of “saving the rights and privileges which the protestant or catholic minority in both Canadas may possess as to their denomination schools when the constitutional act goes into operation.” As for the Charlottetown conference, McGee’s major contribution, as explained by David Wilson, was that “his principal contribution to the Charlottetown conference lay not in the formal proceedings but in the whirl of social events that surround the meetings- the dinner parties and luncheons, and the grand ball at the government house.” The effect McGee had on these meetings was noticeable, as “historians of confederation had pointed out, these events were important in creating a climate of camaraderie and allowing new friendships to form.” At a liquid lunch on board the Victoria, “McGee’s wit sparkled brightly as the wine,” and the mood was so euphoric that the delegates proclaimed the banns of matrimony among the provinces. Though McGee’s role in the Charlottetown conference was described by Wilson as “a secondary and often marginal role in the negotiation between Canada and the Maritimes,” Wilson also notes that “No other Canadian politician knew the maritimes better than McGee.” Hence McGee acted more as an advertiser to the Maritime colonies, aiming to convince them to join Confederation.

The goal McGee had worked toward was finally accomplished. However, as Alexander Brady states, “In November 1866, the delegation of ministers appointed to represent Canada at the final drafting of the federal constitution sailed for England. McGee was not a member of that party.” This marked the beginning of McGee’s decline in government, as Hereward Senior explains: “John A. MacDonald found it more convenient to draw the representative of the Irish Catholic community from the maritimes.” With that reality in mind, Thomas D’Arcy McGee “prepared to run in his old constituency in Montreal west.” It was here that he faced off with a new foe.

The Fenian movement is explained by author Fran Reddy, who writes:

The Irish Fenian Brotherhood movement spurred along the idea of union amongst the British North American colonies, due to increasing skirmishes along the border as the Fenians tried to move in from the United States to capture British North American colonies, believing that they could hold these as ransom to bargain for Ireland’s independence from British rule.

The Fenians are relevant to Thomas D’Arcy McGee because he made an enemy of them when “in 1866 he condemned with vehemence the Irish American Fenians who invaded Canada; and in doing so he incurred the enmity of the Fenian Organization of the United States.” This played a role in the election McGee was trying to win in Montreal, as “In Montreal the Fenians were able to find allies amongst the personal and political enemies of McGee.” The movement affected McGee’s political life: “At the opening of the election campaign, McGee wrote to John A. MacDonald that he had decided not to go to Toronto, as it would provide the ‘Grit Fenians’ with an opportunity to offer him insults.” The attempt to stop McGee from getting elected failed, as “McGee won by a slight majority in Montreal west,” thus regaining his old seat in government. The Fenians’ hostility nonetheless shaped McGee’s remaining time in government; as explained by the Canadian archives, “Thomas D’Arcy McGee was seen as a traitor by the very Irish Community that he sought to defend, and by 1867 [McGee] expressed a desire to leave politics.”

However, Thomas D’Arcy McGee would not get his wish of leaving the political scene, and Alexander Brady describes in detail the final moments of his life. As Brady writes, “[McGee] spoke at midnight. Shortly after one on the morning of the 7th the debate closed. The members commented generally on McGee’s speech; some thought it was the most effective that they had ever heard him deliver.” As the evening concluded there was a new lightness in McGee’s mood: “Perhaps part of the lightheartedness was caused by this reflection that on the morrow he would return to Montreal, where his wife and daughters were within a few days to celebrate his forty third birthday.” McGee then ended the evening as “he left his friend and walked to his lodging on Sparks street. As he entered a slight figure glided up and at close range fired a bullet into his head. His assassin dashed away in the night, but left tell tale steps in the snow later to assist in his conviction.” The news of McGee’s death spread quickly across Canada. One person who received the news was “Lady Agnes MacDonald the prime minister’s wife,” who recounted: “The answer came up clear and hard through the cold moonlit morning: ‘McGee is murdered… lying in the street shot through the head.’” The scene of the death was described by a witness, “Dr. Donald McGillivray,” who stated: “about half past two I was called and told that D’Arcy McGee had been shot at the door of his boarding house. I went at once. I found his body lying on its back on the sidewalk.” Thomas D’Arcy McGee’s life had come to an end.

The search for McGee’s killer led authorities to a man named “Patrick James Whelan who was convicted and hanged for the crime.” As Slattery explains, “The police moved fast. Within twenty hours of the murder they had James Whelan in handcuffs. In Whelan’s pocket they found a revolver fully loaded. One of its chambers appeared to have been recently discharged.” There was further evidence against Whelan, as explained by Charles Macnab: “Minutes before his execution, Patrick James Whelan admitted that he was present when McGee was shot.” Also presented during the trial was Whelan’s pursuit of McGee during his campaign, as written by Hereward Senior: “his presence in Prescott during McGee’s campaign there, his return to Montreal when McGee returned, and his taking up employment in Ottawa when McGee took his seat in parliament all suggest he was stalking McGee.”

The main theory during the trial was that Whelan was a Fenian, which would make sense, as they were McGee’s major enemy. However, as Senior explains, “Whelan insisted he wasn’t a Fenian.” The group Whelan was in fact identified with was called “the Ribbonmen, however Whelan was unquestionably under the influence of Fenian propaganda and engaged in clandestine work on their behalf.” There was a controversial moment in the trial, as explained by T.P. Slattery: “The prisoner had come back from court and was telling what had happened. James Whelan did not say ‘he shot McGee like a dog’ but that Turner had sworn he heard Whelan say, ‘he’d shoot McGee like a dog.’ The prisoner asserts that his words have been twisted.” The trial resulted in a guilty verdict, as “Whelan maintained his innocence throughout his trial and was never proven to be a Fenian. Nonetheless he was convicted of murder and hanged before more than 5000 onlookers on February 11th 1869.”

Whelan’s burial was far from luxurious, as Charles Macnab states: “The body was not handed over for a proper catholic burial. Instead it was buried in a shallow grave in the jail yard. There was fear of a massive fenian demonstration at Whelan’s funeral.” McGee’s status as a public figure, by contrast, was made evident by the number of attendees at his funeral. As T.P. Slattery explains, “The population of the city was then one hundred thousand, but there were so many visitors for D’Arcy McGee’s funeral that the population had practically doubled.” Among the attendees were newspaper reporters who “estimated the number marching and gathered along the long route wrote that a hundred thousand people participated in the demonstration of mourning.” Regarding the legacy of Thomas D’Arcy McGee, Alexander Brady states: “such material bases of union must fail to hold together different sects and races inhabiting the dominion, unless Canadians cherish what McGee passionately advanced, the spirit of toleration and goodwill, as the best expression of Canadian nationality.” David Wilson gives a fitting summary of who Thomas D’Arcy McGee was when he writes: “For the myth makers, here was the ideal symbol of the Celtic contribution to Canadian nationality- an Irish catholic Canadian who became the youngest of the fathers of confederation, who was widely regarded as an inspirational and visionary Canadian nationalist and who articulated the concept of unity in diversity a century before it became the dominant motif of Canadian identity.” Thomas D’Arcy McGee was a very important public figure in Canadian history who met a tragic and unfortunate demise by assassination.

Thomas D’Arcy McGee was an Irish citizen born in Carlingford, Ireland. He moved at a young age, and during that time he dealt with the tragic loss of his mother, who was killed in an accident. McGee’s Irish nationalist ideology was inspired by his mother, an ideology which played a major role in his life. Thomas D’Arcy McGee was a highly intelligent individual; while in his new home of Wexford, the man who helped McGee in his studies called him “the brightest scholar I ever taught.” During his teenage years McGee moved to the United States, where over the next few years he published multiple papers which helped him catch the eye of an Irish nationalist organization. During this time Thomas D’Arcy McGee began his family by getting married in Dublin, Ireland. McGee’s time in Ireland came to an end, however, when he was arrested for sedition; although the charges were dismissed, the threat was enough to send him back to America. While in America McGee went from New York to Boston publishing papers with a pro-Ireland ideology. These papers led to the next chapter of McGee’s life: his move to Canada, specifically Montreal. In Montreal McGee founded a new paper, the New Era, in which he promoted what became known as Confederation. This led to Thomas D’Arcy McGee getting into politics in Montreal, where he became a member of the Reform Party of George Brown. While in the Reform party McGee was exposed as a loose cannon whose views split the party ideology, and he was also known for his heavy drinking. Angered by his treatment, McGee joined the party in power under the leadership of John A. Macdonald. Thomas D’Arcy McGee played a role in Canadian Confederation as he attended both the Quebec and Charlottetown conferences, which led to the formation of the country of Canada. However, McGee was left off the delegation that would deliver the document of confederation to London. This development led McGee to run again for a seat in political office, a campaign in which he was attacked by a faction of Irish nationalists known as the Fenians. Thomas D’Arcy McGee won his seat; however, on April 7, 1868, he was murdered at the hands of a man named Patrick Whelan. Whelan was convicted of the crime and hanged as a result. McGee is one of Canadian history’s great public speakers, as there are several instances throughout his life where he swayed an audience with his speaking ability. Thomas D’Arcy McGee was an important figure in history and in the formation of the country of Canada who tragically met his demise in a political assassination.

Bibliography

Powell, James. “The Hanging of Patrick Whelan.” Today in Ottawa’s History. August 22, 2014. Accessed November 28, 2018. https://todayinottawashistory.wordpress.com/2014/08/22/the-last-drop/.

Archives Canada. “Thomas D’Arcy McGee (April 13, 1825 – April 7, 1868).” Library and Archives Canada. April 22, 2016. Accessed November 28, 2018. https://www.bac-lac.gc.ca/eng/discover/politics-government/canadian-confederation/Pages/thomas-darcy-mcgee.aspx.

Block, Niko, and Robin Burns. “Thomas D’Arcy McGee.” The Canadian Encyclopedia. April 22, 2013. Accessed November 28, 2018. https://www.thecanadianencyclopedia.ca/en/article/thomas-darcy-mcgee.

Burns, Robin B. “McGee, Thomas D’Arcy.” Dictionary of Canadian Biography, Volume IX (1861-1870). 1976. Accessed November 28, 2018. http://www.biographi.ca/en/bio/mcgee_thomas_d_arcy_9E.html.

Bélanger, Claude. “Quebec History.” Marianopolis College. January 2005. Accessed November 28, 2018. http://faculty.marianopolis.edu/c.belanger/QuebecHistory/encyclopedia/ThomasDArcyMcGee-HistoryofCanada.htm.

Reddy, Fran. “The Fenians & Thomas D’Arcy McGee: Irish Influence in Canadian Confederation.” The Wild Geese. June 30, 2014. Accessed November 29, 2018. http://thewildgeese.irish/profiles/blogs/the-fenians-thomas-d-arcy-mcgee-irish-influence-in-canadian.

Archives Canada. “Canadian Confederation.” Library and Archives Canada. May 02, 2005. Accessed November 28, 2018. https://www.collectionscanada.gc.ca/confederation/023001-4000.52-e.html.

Senior, Hereward. The Fenians and Canada. Toronto, Ontario: The Macmillan Company of Canada Limited, 1978.

Macnab, Charles. Understanding the Thomas D’Arcy McGee Assassination: A Legal and Historical Analysis. Ottawa, Ontario: Stonecrusher Press, 2013.

Brady, Alexander. Thomas D’Arcy McGee. Toronto, Ontario: The Macmillan Company of Canada Limited, 1925.

Slattery, T.P. The Assassination of D’Arcy McGee. Garden City, New York: Doubleday & Company, Inc., 1968.

Wilson, David A. Thomas D’Arcy McGee: Volume I. Passion, Reason, and Politics, 1825-1857. Montreal, Quebec: McGill-Queen’s University Press, 2008.

Wilson, David A. Thomas D’Arcy McGee: Volume II. The Extreme Moderate, 1857-1868. Montreal, Quebec: McGill-Queen’s University Press, 2011.

Slattery, T.P. They Got To Find Me Guilty Yet. Garden City, New York: Doubleday & Company, Inc., 1972.


Hadrian’s Works

Architecture that has withstood the test of time gives us insight into the culture and values of past civilizations. Ancient Roman architecture is widely known as some of the most evocative and prominent work because the emperors who ruled used building designs to convey their strength and enrich the pride of their people. Hadrian was not a man of war like the emperors who preceded him. Instead, he dedicated his time to fortifying his nation’s infrastructure and politicking his way into the hearts of provinces far beyond the walls of Rome. I fell in love with the story of Hadrian for two reasons: his architectural contributions have withstood the test of time, and even though he is so well studied, there is much about his life we do not know. This research paper will zero in on the life of the Roman emperor Hadrian and how his upbringing and experiences influenced his architectural works. Hadrian struggled during his reign, and within his own mind, because his enthusiasm for Classical Greek culture was fused with the Roman pride his mentors had instilled in him. A description and discussion of the architectural works of Hadrian that I have found most interesting will illustrate this fusion further.

Publius Aelius Hadrianus was born in Italica, Spain, on the 24th of January in the year 76 A.D. He was born to a family that was proud to be among the original Roman colonists in a province that was considered one of Rome’s prized possessions. The land offered gold, silver, and olive oil of higher quality than that of Italy. Additionally, Hadrian was born during a period when Italica dominated the Roman literary scene. The city also boasted being the birthplace of Hadrian’s predecessor, mentor, and guardian Trajan. Hadrian’s upbringing in Italica gave him a unique perspective on Rome’s ruling of expansive territory as well as on the artistic and intellectual qualities of Roman tradition. While growing up his “gaze would fall upon statues of Alexander, of the great Augustus, and on other works of art, which…were all of the highest quality.” He developed a sense of pride in being Roman, and this would translate into his future actions as emperor and architect.

Hadrian was strong in both mind and body. He was tall and handsome, and kept in shape through his love of hunting. In the words of H.A.L. Fisher, Hadrian was also “the universal genius.” He was a poet, singer, sculptor, and lover of the classics, so he became known to many of his peers as a Greekling. The synergy between Greek and Roman ideals within Hadrian allowed him to approach his nation’s opportunities and struggles from multiple angles, which is also why he would become such a successful emperor. By the time he came to power “Hadrian had seen more of the Roman dominion than any former emperor had done at the time of his accession. He knew not only Spain, but France and Germany, the Danube lands, Asia Minor, the Levant and Mesopotamia, and thus had a personal acquaintance with the imperial patrimony that no one else in Rome could rival.”

During Hadrian’s reign as emperor, he aligned himself with a military policy that was controversial at the time, but inspired by his upbringing in the province of Italica. He believed that the provinces should be guarded by a locally recruited military, while his Roman legions would stay in a single region for decades. His goal was to give provincial residents a personal interest in protecting themselves. The only Roman descendants who would aid in the protection of the provinces were part of the corps d’elite – the best of the best – and would be sent only to train the recruited military men. During his reign, however, Hadrian lost two full legions. The thinning of his military meant he would rely heavily on recruited provincial men as well as on physical barriers. One of these barriers – his most famous – was located in Britain: Hadrian’s Wall.

Hadrian’s arrival in Britain was a spark that ignited a fire of progress and development. During the second century, much of London was destroyed by fire, and when the city was rebuilt to an area of about 325 acres, it became Rome’s largest northern territory by a long shot. Britons have historically valued the countryside more than city life, as evidenced by their plain cities and attractive gardens, and for this reason many of the other cities that were rebuilt by the Romans ended up shrinking rather than expanding. The inhabitants simply wanted to live in the beauty of nature, and moved out of their towns in great numbers as the countryside was developed. The most significant and long-lasting accomplishment during the time that Rome rebuilt its English territory was the design and completion of Hadrian’s Wall.

Hadrian foresaw a symbiotic relationship that he and the British territory could share. It was based on his need for manpower, of which Britain had plenty to loan out. In return, Hadrian would fortify the territory and protect it from the northern savages. His past experiences with militarized protection usually presented him with an expansive section of land to keep account of; but since Britain was surrounded by water in most directions, his first inspiration was to build a wall. Looking back on his struggle in the Rhine-Danube region, Hadrian knew that if a military force were compromised, a stronghold built for retreat would only lead the men to their deaths. His strategic mind led him to believe that mobility was crucial to remaining tactically offensive, so a system of fortifications spread out to increase the area of control and communication was his ideal option.

Hadrian’s Wall began near the River Tyne and stretched all the way to the Solway. It wasn’t meant to be manned at every point along its length, but rather to act as a system that would direct the traffic of his enemies. “Because its course was plotted from one natural advantage to the next, the wall seems to have chosen the most difficult route across the English countryside.” It climbs to steep crags and clings to dangerous ridges. Enemy forces would not only deal with a man-made wall in their path, but in many cases found themselves faced with natural structures that made traversing the wall even more difficult; not to mention the ditch on the north side of the wall, which was twenty-seven feet wide and nine feet deep. “The gateways allowed the passage of troops for operations to the north and were points where civilian traffic between north and south could be controlled.” The wall was intended to be made of mortared masonry up until the River Irthing, where limestone was no longer available locally; from there it continued in turf. Gates were built along the wall roughly every Roman mile (about 1.5 km). Behind each gate was a reinforced guard tower that would house the patrol.
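The one-gate-per-Roman-mile spacing described above invites a quick back-of-the-envelope calculation. Note that the wall’s total length of about 80 Roman miles is a commonly cited figure supplied here as an assumption, not a number taken from this essay:

```python
# Back-of-the-envelope check of the gate spacing described above.
# The 80-Roman-mile length is an assumed, commonly cited figure.
ROMAN_MILE_KM = 1.48              # approximate length of one Roman mile
WALL_LENGTH_ROMAN_MILES = 80      # Tyne to Solway, traditional figure

wall_length_km = WALL_LENGTH_ROMAN_MILES * ROMAN_MILE_KM
gate_count = WALL_LENGTH_ROMAN_MILES  # one gate per Roman mile

print(f"Wall length: roughly {wall_length_km:.0f} km")
print(f"Gates at one-mile intervals: roughly {gate_count}")
```

On these assumptions, the spacing implies on the order of eighty gates and guard towers along roughly 118 km of wall.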

Another reason Hadrian’s construction of the wall is such an astonishing feat is that the entire project was done by hand. Roman legionaries would spend time completing a pre-specified length of the wall, and then allow the next legion to come along and continue where they left off. Unlike most Roman architecture, the stones used to build the wall were small, about eight inches in width and nine inches in length. Historians attribute the use of small stones to the work that was required to get them to the wall. Every stone had to be carried on the backs of men or animals across a distance of eight miles, all the way from a quarry in Cumberland. Then, without the aid of pulleys or ropes, legionaries would place each stone one by one.

As time went on, the wall was rebuilt and fortified by Hadrian’s successors and became a permanent fixture in the British provincial landscape – far more than just a military structure. Romanesque townships were built along the wall, situated near the guard forts. The townships would be fully equipped with bath houses, temples, and even full marketplaces.

In the modern world, we do not see Hadrian’s Wall as it was during the height of Roman rule, though it is clear that influential proprietors of the wall over time tried their best to maintain the “symbolism and materiality of the Roman remains.” Over the years of the wall’s existence, man and weather tore it down so that its stones could be used to build churches, roads, and farmhouses. Experienced architects have worked to rebuild the wall over time, and John Clayton is responsible for one of the most significant rebuilds. He purchased a long stretch of farms along the central portion of the wall, and used the original stones that had fallen over time to reconstruct it. Clayton also moved many of the inhabitants and communities near the wall to locations further away so as to increase the wall’s visibility.

It is refreshing to know, though, that modern-day Roman enthusiasts can see a virtually untouched portion of the wall between Chollerford and Greenhead known as “Britain’s Wall Country.” It is “an unspoiled region of open fields, moors and lakes in the country of Northumberland.” Chesters, a site about a half mile west of Chollerford, is home to one of the best excavated wall forts. It touts remains of towers, gates, steam rooms, cold baths, the commandant’s house, and chambers where soldiers relaxed. The most well-preserved wall fort in all of Europe to date is located at Housesteads in the same region. The fort is in the shape of a rectangle with rounded edges, and “along its grid of streets are foundations marking the commandant’s house, administrative buildings, workshops, granaries, barracks, hospitals” and more. One of the most Romanesque features of the fort is the presence of latrines, complete with wooden seats, running water, and a flushing system to carry waste away. Britain would not see such luxuries again until the 19th century, when Roman standards were finally equaled. Modern museums along the wall feature many artifacts from the original dwellers and attract tourists from all around the world.

When Hadrian was ten years old, his father passed away. Ancient documentation lends us virtually no details about his mother, but a father figure would have been the most important influence in Hadrian’s upbringing. Fortunately for him, he had two men who would play that role in his life. The first was Acilius Attianus, with whom Hadrian would spend the next five years and have his first introduction to the capital city. Attianus also introduced Hadrian to his first formal education. He would return home to Italica for a year or two, only to be summoned back by his other guardian, Trajan.

In order to truly understand the character and reasoning behind more of Hadrian’s architectural works, one must look closely at the influence his cousin, mentor, and guardian, Trajan, had on him. From an objective point of view, Trajan paved the way for Hadrian by becoming the first emperor ever born outside of Italy, proving to the people of Rome that “loyalty and ability were of more importance than birth.” Trajan also moved young Hadrian from place to place whenever he saw his perspective become too narrow or close-minded.

At the age of forty, and prior to becoming emperor, Trajan developed relationships with men like Domitian and his predecessor Nerva. The latter would eventually adopt him as his own heir. His status allowed him to usher Hadrian into political positions that would give him the opportunity to interact with powerful people and make a positive impression. Trajan led both Hadrian and Rome into the light as a positive example. Moderation and justice were at the forefront of all of his decision making, as exemplified by his declaration that no honest man was to be put to death or disfranchised without trial. Trajan brought Hadrian along with him to fight the Dacian wars, and it is here that Hadrian learned how the Roman army was organized and led. He witnessed Trajan tearing “up his own clothes to supply dressing for the wounded when the supply of bandages ran out.” During the outbreak of the second Dacian war, he granted Hadrian the gift of serving as a commanding officer. After Hadrian proved his worth to his cousin and to Rome, Trajan granted him a gift of even more importance – a diamond ring originally owned by Trajan’s predecessor Nerva, which symbolized the fact that Hadrian would indeed be his successor.

At the age of forty-two, Hadrian for the first time showed Rome that he was an innovator and a man who marched to the beat of his own drum: he wore a beard. In the later days of the Roman Republic, beards had gone out of style; in fact, no emperor prior to Hadrian had worn one. Some historians credit his beard to a desire to look like a philosopher, while others think he wore it to hide a scar running from his chin to the left corner of his mouth. More practically, Hadrian may simply have realized there was no point in carrying on with the custom without reason. During his lifetime, shaving was practically torture for men, because they had no access to soap or to steel. Hadrian’s reintroduction of the beard among Romans would also foreshadow his eventual distaste for all things Roman.

Hadrian adopted Trajan’s sense of modesty and moderation. He did not accept titles bestowed upon him immediately, and would only take them up when he felt he had truly earned them. One of the best examples of this is demonstrated by the titles he chose to have printed on Roman currency during his reign. Historical records from the period that document Hadrian’s reign incorporate each and every one of the titles he was ever given. “But on the emperor’s own coins the full official titulature occurs only in the first year. After that, first imperator was dropped, then even Caesar. Up to the year 123, he is pontifex maximus…holder of the tribunician power…For the next five years his coins proclaim him simply as Hadrianus Augustus.”

As if paying homage to Augustus, the founder of the empire and the title he had come to honor, Hadrian set off to see that the infrastructure of his Roman state was intact and fortified under his direction. After five years of travel to improve the cities of Corinth and Mantinea and the island of Sicily, Hadrian returned to Rome. He had laid down excellent groundwork for his governmental policy, so he finally had time to improve the infrastructure of his nation’s capital. He would soon realize his visions for structures like the Temple of Venus, and for his most significant architectural accomplishment of all: the Pantheon.

Rome’s Pantheon was originally built by Marcus Vipsanius Agrippa. After it was destroyed by fire in the year 80, during the reign of Titus, Hadrian had it completely redesigned and reconstructed. “The very character of the Pantheon suggests that Hadrian himself was its architect…an impassioned admirer of Greek culture and art and daring innovator in the field of Roman architecture, could have conceived this union of a great pedimental porch in the Greek manner and of a vast circular hall, a masterpiece of architecture typically Roman in its treatment of curvilinear space, and roofed with the largest dome ever seen.” In keeping with his inherent modesty, he decided not to put his own name on the façade of the building. Instead he gave credit to the original designer by inscribing it with M. Agrippa. Though there is no hard proof that Hadrian was its only designer, it is reasonable to believe that his mind, infused with Roman and Greek culture, could conjure its design – one of the most renowned structural feats in human history. The most significant difference between typical Roman and Greek architecture was the importance of height. Romans believed in reaching for the heavens with their architecture: the bigger and more grandiose a building or monument was, the better.

It is unusual that we do not find much ancient documentation on the building despite its historical importance. In fact, the only written report from the time is from Dio Cassius, who thought the building had been constructed by its original designer, M. Agrippa. He referred to the building as a temple of many gods. “A rectangular forecourt to the north provided the traditional approach, its long colonnades making the brick rotunda, so conspicuous today, appear less dominating; a tall, octastyle pedimented porch on a high podium with marble steps also created the impression of a traditional Roman temple.” The building’s southern exposure would reveal to an onlooker the Baths of Agrippa; to the east lay the Saepta Julia, and to the west the Baths of Nero.

The Pantheon is basically composed of a columned porch and cylindrical space, called a cella, covered by a dome. Some would argue that the cella is the most essential aspect of the Pantheon, while the porch is only present in order to give the building a façade. “Between these is a transitional rectangular structure, which contains a pair of large niches flanking the bronze doors. These niches probably housed the statues of Augustus and Agrippa and provided a pious and political association with the original Pantheon.” Once inside the dome, a worshipper would find himself in a magnificently large space illuminated only by a large oculus centered on the ceiling. The walls of the chamber are punctuated with eight deep recesses alternating between semicircular and rectangular in shape. At the south end of the interior is the most elaborate recess complete with a barrel-vaulted entrance. “The six simple recesses are screened off from the chamber by pairs of marble columns, while aediculae (small pedimented shrines) raised on tall podia project in front of the curving wall between the recesses.” Encircling the entire room just above the recesses is an elaborate classically styled entablature. The upper portion of the dome was decorated as well, but what remains is mostly from an 18th century restoration. “The original decoration of the upper zone was a row of closely spaced, thin porphyry pilasters on a continuous white marble plinth.” The dome floor is decorated in a checkerboard pattern of squares and circles within squares. The tiles are made of porphyry, marbles, and granites while the circles are made of gilt bronze.

The Pantheon was built almost entirely of concrete, save the porch, which was constructed of marble. From the outside, the domed section would appear to an onlooker to be made of brick, but this is not the case: the bricks in this section are only a veneer, or thin decorative layer. The simple lime mortar that was popular during the period was made by combining sand, quicklime, and water; when the water evaporated, the concrete set. The Roman concrete used in the construction of the Pantheon, called pozzolana, acted much like modern Portland cement and could set even while the mixture was still wet. Hadrian designed the Pantheon’s dome to be 43.3 meters in diameter, which is also the exact height of the interior room. A cross section of the rotunda reveals that it was based on the dimensions of a perfect circle, and that is what makes the interior space seem so majestic. The sheer size of the dome was never replicated or surpassed until the adoption of steel and other modern reinforcements. What made Hadrian’s dome possible was his use of concentric rings laid down one after the other over a wooden framework to create the basic shape of the dome during construction. The rings would apply pressure to one another, thus stabilizing the structure. The lower portion of the dome was thick and made of heavy concrete and bricks, while the upper portion was built thin and utilized pumice to make it lightweight.
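The “perfect circle” proportion is easy to verify with a line of arithmetic: if the interior height equals the dome’s diameter, a sphere of that diameter fits exactly inside the rotunda. The split of the 43.3 m into a drum and a hemispherical dome of one radius each is an illustrative decomposition of the figures above, not a measurement from the essay’s sources:

```python
import math

# The rotunda's stated proportions: the dome's interior diameter
# (43.3 m) equals the total interior height, so a sphere of that
# diameter would fit exactly inside the space.
DIAMETER_M = 43.3
radius_m = DIAMETER_M / 2   # 21.65 m

# Height of the cylindrical drum up to the dome's spring line, plus
# the rise of the hemispherical dome, each equal to one radius:
drum_height_m = radius_m
dome_rise_m = radius_m
total_height_m = drum_height_m + dome_rise_m

assert math.isclose(total_height_m, DIAMETER_M)
print(f"Interior height: {total_height_m} m (equal to the diameter)")
```

This is why a cross section of the rotunda reads as a circle inscribed in the space: drum and dome each contribute exactly one radius of height.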

The exact purpose of the front porch is unknown, and, as mentioned before, it may have been added only in order to give the building a façade. “It consists of a pedimented roof, supported by no less than sixteen monolithic columns, eight of grey Egyptian granite across the front, three on either flank, and two behind them on each side.” By adding this colonnade Hadrian proved that he saw past what man had originally used temples for. Traditionally, the temple cella would never be entered by the public, and so architects would hone and focus their craft on the exterior elements of the temple. Hadrian had effectively anticipated the Christian church by several centuries in the design of his “House of Many Gods.”

The Pantheon embodies everything Hadrian was as a person during the early portion of his ruling. It was very much a fusion of Greek and Roman principles that mirrored Hadrian’s inner character. He shared grand Roman pride with the people he served, and they would forever see the Pantheon as a symbol of that pride. However, as Hadrian matured as a ruler, saw more of the world, and returned to Rome for short periods at a time, there was a monumental shift in his opinions of his own capital.

Not unlike Trajan, there was another man who played an integral role in Hadrian’s life. His name was Antinous, and although not many specifics are known about his life and relationship with Hadrian, we do know that he was from Bithynia. The two met there when, critics believe, Antinous was eighteen years old. “To say that he was ‘like a son’ to Hadrian is to put a charitable slant on their rapport. It was customary for a Roman emperor to assume the airs, if not the divine status, of the Olympian god Jupiter.” Though it was never explicitly stated or denied, it is widely believed that Hadrian and Antinous were more than friends – they were lovers.

As a part of Hadrian’s entourage, Antinous naturally went on all of the quests that led him to see the world. It was on one of these expeditions, along the Nile River, that Antinous lost his life, forever plaguing the mind of the now devastated emperor. Some say Antinous was murdered by his shipmates, while others even speculate that Hadrian may have sacrificed him in a rite of the Egyptian mystery cults, offering Antinous’ life in exchange for his own immortality. Nevertheless, Hadrian went on to express his admiration for the boy to the world at large. He ordered the production of his image in full-scale statues, busts, and miniature images on coins and various other items. “Full lips, slightly pouting; a fetching cascade of curls around his soft yet squared-off face; somewhat pigeon-breasted, but winningly athletic, his backside making an S-curve that begs to be stroked… one could rhapsodise further, but it is more telling to stress the sheer quantity of production.”

The most fascinating explanation I have come across for Hadrian’s mass production of Antinous’ image is that of classical religious revival. “Hadrian knew about the Christians, whom he regarded as harmless idiots; he waged war against the Jews, who challenged his authority.” He presented Antinous as Dionysos, as Pan, and as a second Apollo. Each of these guises is intricately portrayed on images of Antinous in order to reinforce Hadrian’s personal views among the people he ruled.

Today the image of Antinous survives even in Western culture. What we perceive as beauty in both men and women has remained remarkably consistent for millennia: symmetrical features and calibrated proportions, which Antinous embodied so thoroughly. Much the same holds true across other world cultures; even populations largely secluded from the Western world tend to perceive beauty much as we do. As one inspects the image of Antinous methodically, one can only deduce that Hadrian was a man of fine taste.

After a stint in Africa, Hadrian returned to Rome for a short period of time, but felt as if he belonged there no more. “In Rome he hated the court etiquette, at the same time as he insisted on it: the wearing of the toga, the formal greetings, the ceremonies, the endless pressure of business.” So he left for Athens, and felt at home there. His distaste for the capital of his country foreshadowed his political decline and eventual downfall, but his positive contributions to the Roman society and historical architecture were far from over.

While in Athens, Hadrian had the opportunity to express his inner Greekling more strongly than ever. He could talk the talk and walk the walk so well in Athens that he undertook the last round of initiation at Eleusis. The Panhellenic council offered him a place to continue leading the people who so dearly looked up to him. Though the Panhellenic council did not have formal political power, it unified the public because it was the only society that could declare a new territory to be truly Hellenic. While serving the council, and being referred to as Panhellenios, Hadrian constantly immersed himself in the local culture, and enjoyed watching the best athletes in Greece perform at the Panhellenic games. The Athenians even granted Hadrian the title of “Olympian.”

At this point in Hadrian’s reign, he seems to have forgotten the lessons of moderation and justice taught to him by Trajan. He was once an emperor reluctant to accept praise from his people, but in Athens he did just the opposite. He designed and ordered the building of a new city called Hadrianopolis. As a testament to his distaste for Rome, a statue of Hadrian was erected at Olympia. The statue wears a lorica, or breastplate, engraved with symbols that depict the character of the one who wears it. “Hadrian’s lorica shows Athene, flanked by her owl and her snake, being crowned by two graces, and standing atop the Wolf of Rome which suckles Romulus and Remus.” Clearly Hadrian believed deep down that Athens was a city superior to Rome, and the sight of the statue would surely leave a bitter taste in the mouth of any Roman who traveled to Olympia and gazed upon it.

Even after all Hadrian had done for the welfare and protection of Rome, he failed his people in one great respect. He began his rule as an outsider, and remained one because he spent so little time in the capital city. Near the end of his days, the mounting tension placed him under great stress – so much so that he became a tyrant. Hadrian would show no mercy to anyone who stepped on his toes as their leader. On one hand, the senate understood that he had outsmarted them, and the Italian members were fully aware that they were outnumbered by provincially born citizens. They had additional reason to dislike him because he had intentionally spent Roman resources in order to benefit the provinces he would visit. On the other hand, “he had given them a fine new city, purged of old abuses, enriched and embellished with magnificent buildings…He had given them cleaner airier houses.” In the eyes of the Romans, though, Hadrian had crossed a line. It was no secret that he had come to shy away from Rome, and that he preferred Athens. Fortunately for him, he had seen this end to his reign coming. Eight years prior, he had begun building a villa at Tivoli, the classical Tibur, so that he would be able to spend the end of his days in his own version of paradise.

The most extensive architectural work of Hadrian’s is without a doubt his Villa near Tivoli. The villa was built at the base of Tivoli on a plain about 18 miles from Rome. Critics argue over why Hadrian chose this spot for his villa: he had an entire empire to choose from, and places like the town of Tivoli itself offered fantastic views as well as better weather. Though Hadrian’s choice of location is criticized from a picturesque point of view, he chose it for more logical reasons. For one, he built his villa on the healthiest spot of land he could find – located on the breezy lowlands of the Apennines, within reach of wind from the west, and protected by hills. The plain was naturally uneven, but the architect leveled it by excavating obstacles in some places and paving others. All eight to ten square miles were eventually completely level, partially natural and partially of poured masonry. Another reason Hadrian may have chosen the location is that the land belonged to his wife Sabina – albeit she played a very negligible part in his life. For all logical reasons, Hadrian chose the spot because he would so easily be able to make the land into anything he wanted with little effort.

Not unlike Versailles, Hadrian’s Villa imposes a formal order through a system of axes, so that nature is dominated by geometry. The architecture is composed of spaces both closed and unenclosed. The entire site was built on and around the north, west, and south sides of a giant mound; in some cases, it cut well below ground level. A large multistoried wall rose against the mound and contained cubicles that would house guards and slaves. As has been well established, Hadrian’s architectural mind drew from both Greek and Roman styles; it seems as though his villa also illustrates a fusion of organic and man-made principles. “At Tivoli, it occurs, as it does perhaps even more powerfully on the arcades which form the face of the Palatine hill above the roman forum, that the scale of natural formations and of man-made structures coincides, so that the hills become in a sense man-made, and the structures take on the quality of natural formation.” For the representation of Canopus – a recreation of a resort near Alexandria – Hadrian designed a system of subterranean passages within a ravine to symbolize the River Styx. Hadrian truly felt that he held control of the world in his hands, and saw no bounds to what his works could be or represent.

A modern tourist would enter the villa through an area in the north, moving toward the Poikile, yet Hadrian had intended his visitors to enter from an area between the Canopus and the Poikile so as to force them to walk under the huge mound walls filled with servants. The entrance into the villa illustrates Hadrian’s juxtaposition of circles and squares, which would be a recurring geometric theme in the rest of its architecture and layout. The Canopus lies to the right of the entrance, with the Poikile to the left and, further on, two baths in view. Although a fuller descriptive tour would help immensely in painting the picture of Hadrian’s Villa for the reader, it would take far too many words, so I am going to focus on only a few of the features I find fascinating about the structure.

There is a space in Hadrian’s Villa known as a cryptoporticus. At its center there was a raised pool, about the size and shape of an average American swimming pool. Because the pool was raised, it seemed to hang in the middle of the court, while the double portico that surrounded it gave the structure a heavier feel. The Hall of Doric Pillars to its side is neither Roman nor Greek in design, and feels as though Hadrian was experimenting with an architectural style all his own. The large field atop the hill is perfectly level up to the point where it drops off, and is supported by the Hundred Chambers before a vast valley. It is rectangular in shape with concave ends, and once again we find a pool at its center. Around that, what was once a hippodrome has been recreated as a garden.

The sculptures found at Hadrian’s Villa are so numerous that it is nearly impossible to study ancient sculpture without mention of the monument. Hadrian furnished his villa not only with all of the luxuries that Rome had to offer, but with all of the best artwork. Egyptian figures and sculptures of his friends and family have been found in the ruins of his villa. Since each new excavation of the grounds reveals new artifacts, museums around the world have its works on display. Two statues of Antinous have been found in the ruins: one is clearly of Greek design, while the other emanates Egyptian symbolism. Hadrian also had a curiosity for portraits, and many were found in his ruins as well. He is even credited with popularizing portraits within the homes of the Roman nobility and upper class.

My overall goal with this paper was to dive headfirst into Hadrian’s life, and hopefully see why he built the things he did. Personally, seeing Rome through the eyes of Hadrian has given me a newfound appreciation for what inspires architects to design the things they do. All of Hadrian’s works mentioned in this document reveal both his inner and outer struggles as emperor, and, more importantly, have influenced the decisions of architects ever since. Just like the emperors before him, Hadrian made statements through his architecture about Roman strength and the everlasting Roman objective of emulating the gods. Hadrian’s title set him at the head of the Roman military, and his strategic and tactical senses were demonstrated in his design of the wall in Britain. He was not an emperor set on conquering as much land as possible, but on fortifying the land he already ruled. I set out to illustrate two sides of Hadrian that were prominent in his works – his love for classical Greek culture and the Roman pride he was brought up with. We saw these two aspects outlined in his designs of the Pantheon and his Villa. The two designs also show how, at the beginning of his reign and directly after the influence of Trajan, Hadrian was still true to his Roman origin. By the end of his term, Hadrian had almost completely disregarded the culture of his capital city and fully embraced his Hellenistic tendencies. Hadrian’s Pantheon and Villa compare and contrast his Greco-Roman outlook within their own designs. What captivates me even more about Hadrian is that there are still so many mysteries about his life to uncover. Fortunately for us, he left behind artifacts and even entire monuments for us to interpret, letting us imagine what life in ancient Rome would have been like.

2016-10-20-1476968295

What were Prisoner of War camps like during the Civil War?

What were prisoner of war camps like during the Civil War, what were the conditions, and how did they affect the prisoners?

During the Civil War, prisoner of war camps were used when enemy soldiers were captured outside of their territory; those camps were overcrowded, disease-ridden, and in terrible condition. The statistics behind the prisoner of war camps have been compiled from multiple sources and records. In the four years of the Civil War, more than 150 POW camps were established in the North and South combined (“Prisons”). That number of camps may seem large, but it clearly was not enough considering the issues with overcrowding. Though the exact number of deaths is not certain, records state that 347,000 men died in the camps in total: 127,000 from the Union and 220,000 from the Confederacy (“Prisons”). Of the men who died in the Civil War, more than half were prisoners of war. The camps should not have been so similar to the battlefield, yet men in the camps were usually left to die. They suffered from mental trauma and health complications as much as, if not worse than, the soldiers fighting the war. Belle Isle is an example of a prison valuing extraneous items over the prisoners. From 1862 to 1865, Belle Isle held prisoners in Virginia under terrible conditions, according to poet Walt Whitman. The prisoners endured the biting cold, filth, hunger, loss of hope, and despair (“Civil…Prison”). Belle Isle had an iron factory and a hospital on the island, yet barracks were never built (Zombek). The prisoners only had small tents to protect them from the elements. Maintaining a hospital and an iron factory while providing no shelter shows how low the prisoners’ needs ranked. As an open-air stockade, Belle Isle was increasingly difficult to escape (Zombek). The disregard for the prisoners’ safety and protection from the elements at Elmira was just as ridiculous. In July of 1864, Elmira prison was opened. Elmira was known for its terrible death rate of 25% and for holding 12,123 men when the regulated capacity was 4,000 (“Civil…Prison”).
The urgent need for medical supplies was ignored by the capital (“Elmira”). When winter came to Elmira, the prisoners’ clothing was taken, and when Southerners were sent things, the guards would burn them if they were not grey (“Elmira”). The mistreatment of prisoners was intentional at Elmira as well as at other prisons. Even after a glimpse at these prisons and the overall statistics of the camps, the following is still quite shocking. Andersonville, a Confederate prisoner of war camp, is painted as the worst one in history.

Prisoners at Andersonville were so malnourished they looked like walking bones. They began to lose hope and turned to their Lord. In Andersonville, the shelter, or lack thereof, was another issue. Prisoners had to use twigs and blankets due to inflation in lumber prices (“Civil…Deadliest”). This shows how every material’s price added up and contributed to the conditions. Within 14 months, 13,000 of the 45,000 prisoners died. The prison was low on beef, cornmeal, and bacon rations, meaning the prisoners lacked vitamin C; therefore, most got scurvy (“Civil…Prison”). With the guards turning a blind eye, prisoners had to fend for themselves. Some took this lack of authority too far, and those were the “Andersonville Raiders.” They stole food, attacked their equals, and stole from their shelters (Serena). Andersonville especially made people turn violent and caused them to lose faith in humanity. A 15-foot-high stockade guarded the camp, though the true threat was a line 19 feet within the stockade, meant to keep prisoners away from the walls. If a prisoner was caught crossing the line, he would be shot and killed (Serena). This technique was honestly unnecessary and a waste of resources. Beyond the conditions, the location of Andersonville was also a problem. A swamp ran through the camp, and with little access to running water or toilets, prisoners used the swamp. This polluted the water, making it even less drinkable (Serena). In the process of building Andersonville Prison, slave labor was used to build the stockade and trenches (Davis). The camps abused their power not only to harm prisoners but also to use slaves. As the number of prisoners swelled, they started having trouble finding space to sleep (Davis). With the capacity increasing and the conditions disgusting, the camp was a breeding ground for disease.
Andersonville was assumed to be the optimal position for a POW camp because of the food; the only problem was that farmers did not wish to sell crops to the Confederacy (“Myths”). This is just another example of how Andersonville would have been better if given more assistance.

Was any justice ever served for the men who ran the camp?

James Duncan and Henry Wirz were both officers at Andersonville; after the prison closed, they were both charged with war crimes (Davis). Wirz’s two-month trial started in August 1865. The trial included 160 witnesses, and Wirz did not show a distaste towards prisoners. He served as a scapegoat for many of the allegations; he was charged with harming the lives and health of Union soldiers and with murder (“Henry”). Henry Wirz witnessed all the mistreatment at Andersonville as a commander, making him liable for the thousands of prisoners who died. Wirz was then executed (“Henry”). Unlike Wirz, Duncan was lucky: after a trial he was sentenced to 15 years. After spending a year at Fort Pulaski, Duncan escaped (Davis). Duncan was never truly punished for his actions because he escaped after serving so little of his sentence. With the logistics behind Andersonville established, it is important to understand the arguments of both the Union and the Confederacy. Why were prisoners treated so poorly when the necessary supplies were available? The North had access to a surplus of medical supplies, food supplies, and other resources, meaning they could have treated the prisoners better (“Prisons”). They had no reasoning besides wanting to save resources and torture Confederate soldiers. In the North, they simply sat around and left the soldiers shelterless, lacking protection from the elements (Macreverie). In opposition to the North, the South did not intend to have such poor conditions. For example, in Andersonville the prisoners and guards were both fed the same rations (Macreverie). The South struggled more with food compared to the North. Those tending the fields did not have shoes and had only a handful of cornmeal or a few peanuts (“Prisons”). The prisoners were not fed due to a lack of preparation. Both sides tried to simplify the reasons for neglect in camps to a shortage in food supply and a desire for vengeance.
Both sides ran the camps differently, but they faced the same problem: shortages of supplies (“Myths”). The South inevitably tried its best, though its best was not good enough; the North had the luxury of a choice in how it treated prisoners, and it chose the wrong one. During the Civil War, prisoner of war camps were used when enemy soldiers were captured outside of their territory; those camps were overcrowded, disease-ridden, and in terrible condition. It is safe to say Andersonville was a memorable prison, but for all the wrong reasons. The arguments were not the best for either side when it came to justifying their actions. Overall, the statistics involving the camps are interesting and honestly very shocking. All in all, prisoner of war camps were unsafe and had terrible conditions, but they served the purpose of holding soldiers captured from the opposing side during the war.

2021-11-17-1637162551

Formation of Magmatic-Hydrothermal Ore Deposits

Introduction:

Magmatic-hydrothermal ore deposits provide the main source of many trace elements such as Cu, Ag, Au, Sn, Mo, and W. These elements are concentrated in a tectonic setting, by fluid-dominated magmatic intrusions in Earth’s upper crust, along convergent plate margins where volcanic arcs are created. Vapor and hypersaline liquid are the two forms of magmatic fluid important to the ore deposits. The term ‘fluid’ as used here means a non-silicate, aqueous liquid or vapor; hypersaline liquid is also known as brine and is defined by a salinity of >50 wt%. The salinities in magmatic environments that can form ore deposits span a substantial range, from a very low 0.2-0.5 wt% to the hypersaline >50 wt%. The salinity of a fluid was thought to be one of the main contributing factors to which elements formed under specific conditions; however, recent developments support a new theory that is discussed later. There are multiple types of ore deposits, such as skarn, epithermal (high and low sulphidation), porphyry, and pluton-related veins. However, two types of ore deposits, porphyry and epithermal, produce the greatest abundance of trace elements around the world (Hedenquist and Lowenstern, 1994).

Porphyries, one type of ore deposit which occurs adjacent to or hosted by intrusions, typically develop in hypersaline fluid and are associated with Cu ± Mo ± Au, Mo, W, or Sn. Another type of ore deposit, which occurs either above the parent intrusion or distant from the magmatic source, is known as epithermal and relates to Au-Cu, Ag-Pb, and Au (Ag, Pb-Zn). The term epithermal rightfully refers to ore deposits formed at low temperatures of <300 °C and at shallow depths of 1-2 km (Hedenquist and Lowenstern, 1994). The epithermal ore deposits can be further separated into two different types, the high sulfidation and the low sulfidation deposits, which are shown in Figure 1. High sulfidation epithermal deposits form above the parent intrusion, near the surface, from oxidized, highly acidic fluids. These systems are rich in SO2- and HCl-rich vapor that gets absorbed into the near-surface waters, causing argillic alteration (kaolinite, pyrophyllite, etc.). The highly acidic waters are then progressively neutralized by the host rock. Low sulfidation also occurs near the surface, but away from the source rock, as seen in Figure 1, and is dominated by meteoric waters. The fluids are reduced, with a neutral pH and with CO2, H2S, and NaCl as the main fluid species. The main difference between the two epithermal fluids is how much they have equilibrated with their host rocks before ore deposition (White and Hedenquist, 1995). In addition to the two main types of ore-forming deposits, there are certain environments in which they are capable of occurring.
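The high- versus low-sulfidation distinction described above can be summarized as a small decision sketch. This is a hypothetical helper for illustration only; the pH thresholds and field names are my assumptions, not values from Hedenquist and Lowenstern (1994).

```python
# Toy classifier for the two epithermal subtypes described in the text.
# Thresholds are illustrative assumptions, not published cutoffs.

def classify_epithermal(fluid_ph: float, oxidized: bool,
                        near_intrusion: bool) -> str:
    """Return 'high sulfidation' for acidic, oxidized fluids above the
    parent intrusion, and 'low sulfidation' for near-neutral, reduced,
    meteoric-water-dominated fluids away from the source."""
    if oxidized and fluid_ph < 4 and near_intrusion:
        return "high sulfidation"
    if not oxidized and 5 <= fluid_ph <= 8 and not near_intrusion:
        return "low sulfidation"
    return "indeterminate"

print(classify_epithermal(2.0, True, True))    # acidic SO2/HCl-rich vapor
print(classify_epithermal(7.0, False, False))  # neutral, meteoric-dominated
```

Anything that does not match either end-member description falls through to "indeterminate" rather than being forced into a class.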

There are three important recurring ore-forming environments around the globe that produce these trace elements. The first is the deep crust, where gold deposits form due to mixing and phase separation among aquo-carbonic fluids. The second is granite-related Sn-W veins, in which the interaction of hot magmatic vapor and hypersaline magmatic liquid with cool, surface-derived meteoric water provides a widespread mechanism for ore mineral precipitation by fluid mixing in the upper crust. Third is the porphyry-epithermal Cu-Mo-Au systems, in which the varying density and degree of miscibility of saline fluids between surface and magmatic conditions point to the role of fluid phase separation in ore-metal fractionation and mineral precipitation (Heinrich et al., 2007).

Figure 1 (Hedenquist and Lowenstern, 1994)

PORPHYRY-EPITHERMAL Cu-Mo-Au

The formation of magmatic-hydrothermal ore deposits is a complicated but geologically rapid process that undergoes numerous phases. A general depiction can be seen in Figure 2, showing the different components involved in the system. Hydrothermal ore deposits are initiated by the ‘generation of hydrous silicate magmas, followed by their crystallization, the separation of volatile-rich magmatic fluids, and finally, the precipitation of ore minerals in veins or replacement deposits’ (Audetat, Gunther, and Heinrich, 1998). Porphyry magma chambers have been dated using individual zircon grains. Since the magma reservoirs in which porphyry deposits form occur in the upper crust, they are found to have a maximum life span of <1 Ma. The porphyry stocks struggle to remain ‘at the temperature of mineralization (>350 °C) for more than even a few tens of thousands of years, even with massive heat advection by magmatic fluids’ (Quadt et al., 2011). The hosted zircons analyzed yield significantly different ages that range over a span of millions of years, indicating multiple pulses of porphyry emplacement and mineralization. Diffusive equilibration between magmatic fluids and altered rocks occurs even faster than mineralization. Thermal constraints suggest that the porphyries and their constituent ore fluids underwent the ore-forming process in multiple spurts of as little as 100 yrs. each. The methods behind this are discussed later (Quadt et al., 2011).

Figure 2. Illustration of ore-forming magmatic-hydrothermal system, emphasizing scale and transient nature of hybrid magma with variable mantle (black) and crustal (gray) components. Interacting processes operate at different time scales, depending on rate of melt generation in mantle, variable rate of heat loss controlled by ambient temperature gradients, and exsolution of hydrothermal fluids and their focused flow through vein network, where Cu, Au, or Mo are enriched 100-fold to 1000-fold compared to magmas and crustal rocks (combining Dilles, 1987; Hedenquist and Lowenstern, 1994; Hill et al., 2002; Richards, 2003). (Quadt et al., 2011)

Chemical and temperature gradients are important due to the selective dissolution and re-precipitation of minerals, which also concentrate rare elements into ore deposits. Most ore deposits form in the upper crust due to the advection of magma and hot fluids into cooler rocks, creating rather steep temperature gradients. Temporary steep gradients in pressure, density, and miscibility, which arise as brittle deformation of rocks forms vertical vein networks, show that the physical properties of miscible fluids are of equal importance. H2O-CO2-NaCl controls the composition of crustal fluids, causing variations in the physical properties and in turn affecting the chemical stability of dissolved species (Heinrich 2007).

Evidence from fluid inclusions suggests the interaction of multiple fluids in volcanic arcs through fluid mixing as well as fluid phase separation. These fluid inclusions can provide insight into the substantial role the geothermal gradient plays in the formation of these ore deposits and why they only occur under certain environmental conditions. Salinity was thought to be a main contributing factor (primary control) to which elements were precipitated, but it is now argued that vapor and sulfur play a key role, especially in terms of Cu-Au deposits. Supporting evidence suggests the likelihood of one bearing greater significance than the other, so both are discussed and compared. The addition of sulfur causes Cu and Au to prefer the vapor phase. Figure 3 shows that the vapor/liquid concentration ratios surpass 1 and allow these elements to more easily shift into the vapor phase, where they can then be transported.

Figure 3 (Left). Experimental data for the partitioning of a range of elements between NaCl-H2O-dominated vapor and hypersaline liquid, plotted as a function of the density ratio of the two phases coexisting at variable pressures (modified from Pokrovski et al. 2005; see also Liebscher 2007, Figs. 13, 14). As required by theory, the fractionation constant of all elements approaches 1 as the two phases become identical at the critical point for all conditions and bulk fluid compositions. Chloride-complexed elements, including Na, Fe, Zn but also Cu and Ag, are enriched to similar degrees in the saline liquid, according to these experiments in S-free fluid systems. Hydroxy-complexed elements including As, Si, Sb, and Au reach relatively higher concentrations in the vapor phase, but never exceed their concentration in the liquid (mvapor/mliquid < 1). Preliminary data by Pokrovski et al. (2006a,b) and Nagaseki and Hayashi (2006) show that the addition of sulfur as an additional complexing ligand increases the concentration ratios for Cu and Au in favor of the vapor (arrows); in near-neutral pH systems (short arrows) the increase is minor, but in acid and sulfur-rich fluids (long arrows) the fractionation constant reaches ~1 or more, explaining the fractionation of Cu and Au into the vapor phase as observed in natural fluid inclusions. (Heinrich 2007)

VAPOR AND HYPERSALINE LIQUIDS

The solubility of ore minerals increases as water vapor density increases with the transient pressure rise along the liquid-vapor equilibrium curve. The nature of this occurrence suggests ‘that increasing hydration of aqueous volatile species is a key chemical factor determining vapor transport of metals and other solute compounds’ (Heinrich et al., 2007). The high salinity in hypersaline fluid systems allows the vapor and liquid to coexist beyond water’s critical point. The increasing water vapor density accompanied by an increase in temperature leads to higher metal concentrations as an inherent result of the increased solubility of the minerals in vapor. ‘Observed metal transport in volcanic fumaroles’ and even higher ore-metal concentrations in vapor inclusions from magmatic-hydrothermal ore deposits (Heinrich et al., 2007) have motivated research to quantify vapor transport. Fractionation is of key importance because each element behaves differently between the coexisting vapor and the hypersaline liquid. Certain elements such as ‘Cu, Au, As, and B partition into the low-density vapor phase while other ore metals including Fe, Zn, and Pb preferentially enter the hypersaline liquid’ (Heinrich et al., 2007). This means that magmatic vapor is now known to contain higher concentrations of ore metals than any other known geological fluid.
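The fractionation described here is usually expressed per element as a vapor/liquid partition coefficient K = c_vapor / c_liquid, with K > 1 meaning the element prefers the vapor. The sketch below computes and sorts such coefficients; the concentration values are invented placeholders chosen only to mimic the qualitative pattern above (Cu, Au, As vapor-loving; Fe, Zn, Pb brine-loving), not measured data.

```python
# Compute vapor/hypersaline-liquid partition coefficients
# K = c_vapor / c_liquid and group elements by phase preference.
# Concentrations are illustrative placeholders (arbitrary units).

conc_vapor  = {"Cu": 1.5, "Au": 0.02, "As": 3.0, "Fe": 0.5,  "Zn": 0.3, "Pb": 0.1}
conc_liquid = {"Cu": 1.0, "Au": 0.01, "As": 2.0, "Fe": 20.0, "Zn": 6.0, "Pb": 2.5}

K = {el: conc_vapor[el] / conc_liquid[el] for el in conc_vapor}

vapor_loving = sorted(el for el, k in K.items() if k > 1)  # partition into vapor
brine_loving = sorted(el for el, k in K.items() if k < 1)  # stay in the brine

print("partition into vapor :", vapor_loving)
print("partition into liquid:", brine_loving)
```

With these placeholder numbers the grouping reproduces the qualitative result quoted from Heinrich et al. (2007): Cu, Au, and As end up on the vapor side, Fe, Zn, and Pb on the brine side.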

SULFUR CONTRIBUTION TO VAPOR

‘Sulfur is a major component in volcanic fluids and magmatic-hydrothermal ores including porphyry-copper, skarn, and polymetallic vein deposits, where it is enriched to a greater degree than any of the ore metals themselves’ (Seo, Guillong, and Heinrich, 2009). Sulfur is necessary for the precipitation of sulfide and sulfate minerals such as pyrite and anhydrite. Sulfide is an essential ligand in metal-transporting fluids, increasing the solubility of Cu and Au. Introducing sulfur to Cu and Au in the vapor phase can also make them relatively volatile. Sulfur changes the conditions under which Cu and Au enter the vapor phase, as seen in Figure 3, and sheds light on why it is possible for Cu and Au to partition into low-density magmatic vapor (Heinrich et al., 2007). In short, sulfur makes it easier for Cu and Au in particular to enter the vapor phase, where they can then be more easily transported, making sulfur a key to the high concentrations of ore metals in the vapor phase.

Methods and Results:

ZIRCON DATING using LA-ICP-MS and ID-TIMS

Figure 4 (Above, Left). Rock slab from Bajo de la Alumbrera, showing early andesite porphyry (P2, left part of picture and xenolith in lower right corner) that solidified before becoming intensely veined and pervasively mineralized by hydrothermal magnetite + quartz with disseminated chalcopyrite and gold. After this first pulse of hydrothermal mineralization, a dacite porphyry intruded along an irregular subvertical contact (EP3, right part of picture), before both rocks were cut by a second generation of quartz veins (diagonal toward lower right). (Quadt et al., 2011)

Figure 5 (Above, Right). A: Concordia diagram with isotope dilution-thermal ionization mass spectrometry (ID-TIMS) results from the first (red ellipses, P2) and second (blue ellipses, EP3) Cu-Au mineralizing porphyry of Bajo de la Alumbrera. B, C: For comparison, published laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) analyses and their interpreted mean ages and uncertainties on the same age scale (replotted from Harris et al., 2004, 2008; LP3 is petrographically indistinguishable from EP3, but cuts also the second phase of ore veins). All errors are ±2σ. MSWD = mean square of weighted deviates. (Quadt et al., 2011)

‘Porphyry Cu ± Mo ± Au deposits form by hydrothermal metal enrichment from fluids that immediately follow the emplacement of porphyritic stocks and dikes at 2-8 km depth’ (Quadt et al., 2011). Samples were taken from two porphyry Cu-Au deposits, the first from Bajo de la Alumbrera, a volcanic complex located in northwestern Argentina. Uranium-lead LA-ICP-MS (laser ablation-inductively coupled plasma-mass spectrometry) and ID-TIMS (isotope dilution-thermal ionization mass spectrometry) analyses were performed on zircons from the samples to obtain concordant ages of single crystals from two mineralizing porphyry intrusions. The LA-ICP-MS data were taken previously and are represented in Figure 5, B and C.
ID-TIMS analyzed samples from the two intrusions. One sample, known as BLA-P2, is quartz-magnetite(-K-feldspar-biotite) altered P2 porphyry, while the other sample, taken 5 m from the EP3 contact to exclude contamination, is known as BLA-EP3. BLA-EP3 ‘truncates the first generation of hydrothermal quartz-magnetite veinlets associated with P2, and is in turn cut by a second generation of quartz veins’ (Quadt et al., 2011). The results were compared with the previously existing data, and the P2 porphyry grain ages are shown to range from 7.772 ± 0.135 Ma to 7.212 ± 0.027 Ma. The maximum age for subvolcanic intrusion, solidification, and first hydrothermal veining of P2 is as late as 7.216 ± 0.018 Ma (P2-11 is the most precise of the young group), when the zircons crystallized from the parent magma. The EP3 porphyry truncated these veins and provided concordant single-grain ages ranging from 7.126 ± 0.016 Ma to 7.164 ± 0.057 Ma. It is ultimately concluded that the two intrusions are separated in age by 0.090 ± 0.034 Ma. With this data, it can be said that the two porphyries intruded within a period of 0.124 m.y. of each other.

Figure 6. Concordia diagrams with isotope dilution-thermal ionization mass spectrometry (ID-TIMS) results from three porphyries (A: KM10, KM2, 5091-400; B: KM5; C: D310) bracketing two main pulses of Cu-Au mineralization at Bingham Canyon (Utah, USA); Re-Os (molybdenite) data are from Chesley and Ruiz (1997). (Quadt et al., 2011)

The second set of samples was taken from Bingham Canyon in Utah, USA, and was found in pre-ore, syn-ore, and post-ore porphyry intrusions. All three of the porphyry intrusions were dated using ID-TIMS analysis and yielded the results seen in Figure 6. It was found that two Cu-Au mineralization pulses occurred. The first is associated with a quartz monzonite porphyry which existed prior to the mineralization of the Cu-Au in the porphyry.
A second pulse of Cu-Au is known to have occurred because it cuts through the latite porphyry and truncates the first veins. Thirty-one concordant ages were taken collectively from the three intrusions, and the most precisely dated of the grains show that all the porphyries overlap in an age range of 38.10 to 37.78 Ma. A single outlying grain of younger age is present in the oldest intrusion and is thought to reflect residual Pb loss. Upon interpretation of the three porphyries and the two Cu-Au pulses, a window of 0.32 m.y. is the time it took for their occurrence. In all three of the intrusions there are significantly older concordant grains, dated as far back as 40.5 Ma, which suggests a minimum lifetime of the magmatic reservoir of 0.80 to 2 million years (Quadt et al., 2011). Errors in the analyzed zircon grains can be minimized if crystals that have undergone Pb loss are avoided or have been removed by chemical abrasion. The lifetime of the mineralization of a single porphyry is important for alternative physical models of magmatic-hydrothermal ore deposits, which are expected to be constrained to a lifetime of less than 100 k.y. Comparison of the porphyry intrusions in both sites provided substantial evidence of the relatively short lifespan of their formation. In both sites, the two consecutive pulses occur <1 m.y. apart, 0.09 m.y. and 0.32 m.y. respectively.
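The quoted separation between the P2 and EP3 intrusions can be sanity-checked with simple quadrature error propagation on the two single-grain ages given in the text. This is only a naive sketch: the ±0.034 Ma uncertainty published by Quadt et al. (2011) presumably includes systematic terms (e.g. tracer calibration) not reproduced here.

```python
import math

# Most precise young P2 age and youngest EP3 age quoted in the text (Ma).
p2_age,  p2_err  = 7.216, 0.018
ep3_age, ep3_err = 7.126, 0.016

diff = p2_age - ep3_age                   # age separation between intrusions, Ma
err  = math.sqrt(p2_err**2 + ep3_err**2)  # quadrature propagation of the two errors

print(f"separation = {diff:.3f} +/- {err:.3f} Ma")
# Naive quadrature gives ~0.090 +/- 0.024 Ma; both this and the published
# 0.090 +/- 0.034 Ma fall well inside the 0.124 m.y. window stated above.
```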

FLUID INCLUSIONS: Sn-W VEINS

Mineral deposits of Sn-W are commonly formed by the mixing of magmatic fluids with external fluids along the contact zones of granitic intrusions (Heinrich, 2007). Tin precipitation was shown to be driven by the mixing of hot magmatic brine with cooler meteoric water, by using LA-ICP-MS to measure fluid inclusions trapped before, during, and after the deposition of cassiterite (SnO2) (Audetat, Gunther, and Heinrich, 1998). The fluid inclusions that formed in minerals during the time of the ore formation recorded temperatures between 500 and 900 °C at several kilometers depth. The average size of the inclusions ranges between 5 and 50 micrometers. In order to demonstrate the importance of fluid-fluid interaction in the formation of magmatic-hydrothermal ore deposits, the Yankee Lode was analyzed. The Yankee Lode is a magmatic-hydrothermal vein deposit located in eastern Australia and is part of the Mole Granite intrusion. This vein consists primarily of quartz and cassiterite that is well preserved in open cavities. Two quartz crystals were analyzed; they have the same pattern of hydrothermal growth and precipitation, represented by successive zones of inclusions as seen in Figure 7.

Fig. 7 (A) Longitudinal section through a quartz crystal from the Yankee Lode Sn deposit, showing numerous trails of pseudosecondary fluid inclusions and three growth zones recording the precipitation of ilmenite, cassiterite, and muscovite onto former crystal surfaces. The fluid inclusions shown in the right part of the figure represent four different stages in the evolution from a magmatic fluid toward a meteoric water-dominated system. Thtot corresponds to the final homogenization temperature. (Audetat, Gunther, and Heinrich, 1998)

There are indications of boiling fluid throughout the entire history of the quartz precipitation due to the presence of both low-density vapor inclusions and high-density brine inclusions. Apparent salinities of both inclusion types were obtained using microthermometric measurements, and ‘Pressure for each trapping stage was derived by fitting NaClequiv values and homogenization temperatures (Thtot) of each fluid pair into the NaCl-H2O model system’ (Audetat, Gunther, and Heinrich, 1998). These data show that three pulses of extremely hot fluid were injected into the system before cool-water mixing, each accompanied by a temporary increase in pressure. The pressure increases are noted along with some of the various fluid inclusions analyzed in Figure 7. In this system tin is the main precipitating ore-forming element, as represented in Figure 8. The initial Sn concentration of 20 wt% starts to drop drastically at the onset of cassiterite precipitation. By stage 23, represented in Figure 8C, only 5% of the initial concentration of Sn remains. At this same stage, the non-precipitating elements show that the fluid mixture still contains 35% of the magmatic fluid, indicating that the chemical and cooling (thermal) effects of fluid mixing are the cause of the precipitation of cassiterite. Three pulses of magmatic fluid occurred before the formation of cassiterite was initiated, and the onset of cool meteoric groundwater mixing did not occur until the third pulse. This demonstrates that fluid-fluid mixing is critical to the deposition of these ore elements (Audetat, Gunther, and Heinrich, 1998).
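The mass-balance logic used here can be sketched as two-component mixing: a conservative (non-precipitating) element tracks the remaining magmatic fluid fraction, and any extra deficit in Sn below conservative mixing is attributed to cassiterite precipitation. The 35% magmatic fraction and the 5% residual Sn are taken from the text; the tracer choice and absolute units are illustrative assumptions.

```python
# Two-component mixing: a conservative tracer gives the magmatic fluid
# fraction f, and the shortfall of Sn relative to conservative mixing
# is attributed to cassiterite precipitation.
# Concentrations are normalized to the initial magmatic value.

def magmatic_fraction(c_mix, c_magmatic, c_meteoric=0.0):
    """Fraction of magmatic fluid inferred from a conservative tracer."""
    return (c_mix - c_meteoric) / (c_magmatic - c_meteoric)

tracer_magmatic, tracer_mix = 1.0, 0.35  # conservative element, normalized
sn_magmatic, sn_mix = 1.0, 0.05          # Sn relative to its initial value

f = magmatic_fraction(tracer_mix, tracer_magmatic)  # -> 35% magmatic fluid
sn_expected = f * sn_magmatic              # Sn if dilution were the only process
precipitated = 1 - sn_mix / sn_expected    # fraction removed as cassiterite

print(f"magmatic fluid fraction: {f:.0%}")
print(f"Sn removed by cassiterite: {precipitated:.0%}")
```

Because the conservative tracer says the mixture is still 35% magmatic while Sn sits at only 5% of its initial value, most of the missing Sn must have precipitated rather than simply been diluted.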

There is another component at work in this system along with the precipitation of Sn: the magmatic vapor phase selectively transporting copper and boron into the liquid mixture, represented by Figure 8D. Boron’s first marked reduction occurs at stage 25 in Figure 8D, exactly where tourmaline begins to precipitate. Note that the concentration of B remained near its original magmatic value in stages 23 and 24, while the non-precipitating elements simultaneously underwent substantial dilution. B also decreased in stages 26 and 27 relative to its initial value, but not as much as would be expected considering the continual growth of tourmaline extracting B from the fluid. Copper follows the same trend as boron, retaining its original magmatic value in stages 23 and 24, indicating an excess of these two elements. The vapor inclusions of the boiling assemblages were found to be selectively enriched in Cu and B. This explains the excess as condensation of magmatic vapor into the mixing liquids, since Cu and B prefer to partition into the vapor phase as opposed to the saline liquid like the other elements. It has been suggested that Cu can be stabilized in a sulfur-enriched vapor phase, as opposed to metals which are stabilized in brine by chloro-complexes. Gold (Au) is thought to behave similarly to Cu, which could explain why it is selectively coupled with Cu and As in high sulfidation epithermal deposits (Audetat, Gunther, and Heinrich, 1998).

Fig. 8. (Left) Evolution of pressure, temperature, and chemical composition of the ore-forming fluid, plotted on a relative time scale recorded by the growing quartz crystal. (A) Variation in temperature and pressure, calculated from microthermometric data. Hot, magmatic fluid was introduced into the vein system in three distinct pulses before it started to mix with cooler meteoric groundwater. (B) Concentrations of non-precipitating major and minor elements in the liquid-dominant fluid phase, interpreted to reflect progressive groundwater dilution to extreme values. (C) A sharp drop in Sn concentration is controlled by the precipitation of cassiterite. (D) B and Cu concentrations reflect not only mineral precipitation (tourmaline) but also the selective enrichment of the brine-groundwater mixture by vapor-phase transport. (Audetat, Gunther, and Heinrich, 1998)

Fig. 9 (Right) Partitioning of 17 elements between magmatic vapor and coexisting brine, calculated from analyses of four vapor and nine brine inclusions in two ‘boiling assemblages.’ At both pressure and temperature conditions recorded in these assemblages, Cu and B strongly fractionate into the magmatic vapor phase. (Audetat, Gunther, and Heinrich, 1998)

SALT PRECIPITATION

Fluids are released from the upper crustal plutons associated with magmatic-hydrothermal systems. These fluids are usually saline, and phase separation occurs into very low salinity vapors and high-salinity brines, as discussed earlier. Salt precipitation can have a major impact on the permeability of a system and on ore formation along the liquid-vapor-halite curve, making certain ore minerals precipitate out more readily than others. Halite-bearing fluid inclusions from porphyry deposits were analyzed using microthermometry, revealing that the inclusions can homogenize by halite dissolution (Lecumberri-Sanchez et al., 2015).

Based on the hypothesis, formed from the examination of fluid inclusions, that halite saturation is widespread in magmatic-hydrothermal fluids, further data were collected and studied. Roughly 11,000 fluid inclusions from 57 different porphyry systems were used to identify halite-bearing inclusions. There were about 6,000 halite-bearing inclusions in the data set. These inclusions were then subdivided by two different modes of homogenization, by vapor bubble disappearance or by halite dissolution, and it was found that 91% of the porphyry systems, 52 out of the 57, contain inclusions that homogenize by halite dissolution. The pressure at homogenization was then calculated based on the PVTX (pressure-volume-temperature-composition) properties of H2O-NaCl, and the pressures at fluid inclusion homogenization were found to exceed 300 MPa. If significant fluid-inclusion migration of several millimeters were expected, then water loss could occur and would result in salinity changes as well as density changes. This, however, is not the proposed explanation, because migration of no more than a few micrometers is common. If no migration is evident, the more plausible explanation is that heterogeneous entrapment of halite occurred due to highly variable temperatures, ±100 °C. This means halite saturation is thought to occur at the time of trapping. The coexistence of vapor inclusions with homogenized brine inclusions is a result of halite saturation along the liquid-vapor-halite curve. Trapped halite has also been observed on the surface of another growing mineral, meaning that ‘heterogeneous entrapment of solid halite inside FIs is a natural consequence of halite saturation’ (Lecumberri-Sanchez et al., 2015).
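The survey logic described above (tallying inclusions per system by homogenization mode and flagging systems where halite dissolution dominates) can be sketched as follows. The inclusion records are invented placeholders for the two deposits discussed in this paper, not the actual ~11,000-inclusion data set of Lecumberri-Sanchez et al. (2015).

```python
# Tally fluid inclusions per porphyry system by homogenization mode and
# flag systems where homogenization by halite dissolution is dominant.
# The records below are invented placeholders for illustration.
from collections import Counter

inclusions = [
    ("Bingham",   "halite_dissolution"),
    ("Bingham",   "halite_dissolution"),
    ("Bingham",   "vapor_disappearance"),
    ("Alumbrera", "halite_dissolution"),
    ("Alumbrera", "vapor_disappearance"),
    ("Alumbrera", "vapor_disappearance"),
]

by_system = {}
for system, mode in inclusions:
    by_system.setdefault(system, Counter())[mode] += 1

dominated = [s for s, counts in by_system.items()
             if counts["halite_dissolution"] > counts["vapor_disappearance"]]
print("halite-dissolution dominated:", dominated)  # -> ['Bingham']
```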

Figure 10. Left: Pressure-salinity projection of the H2O-NaCl phase diagram at 400 °C (Driesner and Heinrich, 2007), showing a potential mechanism for copper sulfide mineralization via halite (H) saturation. Destruction of the liquid (L) phase partitions H2O to the vapor (V), and Cu and Fe to the solid phase. The right side shows the same process schematically. (Lecumberri-Sanchez et al., 2015)

Halite saturation usually occurs where vapor, liquid, and halite all coexist. Because halite precipitates at shallow crustal levels, other ore minerals are able to precipitate out of the liquid. The Na-, Cl-, Fe-, and Cu-rich liquid + vapor assemblage crosses the phase boundary to the more stable vapor + halite field, as seen in Figure 10. Once this point is reached, the liquid fraction decreases and the Cu-Fe sulfides (±Au) that it carried begin to precipitate. It can be concluded that salt saturation acts as a precipitation mechanism in magmatic-hydrothermal fluids. This allows the rapidly ascending vapor phase to transport sulfur and gold upward; however, the mechanism is limited by the availability of reduced sulfur. The disproportionation of SO2 occurs at roughly the temperatures at which halite saturation occurs, which provides the needed sulfur. This indicates that salinity is not the only key control on the formation of magmatic-hydrothermal deposits; sulfur is of equal if not greater importance (Lecumberri-Sanchez et al., 2015).

SULFUR in a Porphyry Cu-Au-Mo System

To better understand the role sulfur plays in high-temperature metal segregation by fluid phase separation, two porphyry Cu-Au-Mo deposits were examined along with two granite-related Sn-W veins and barren miarolitic cavities. The fluid inclusion assemblages underwent microthermometric analysis to measure salinities. No post-entrapment modification occurred, and the brine inclusions homogenized over a temperature range of 323 °C to 492 °C. This indicates heterogeneous entrapment over variable temperatures (±100 °C), signifying halite saturation at the time of fluid-inclusion trapping. LA-ICP-MS was used to measure absolute element concentrations with Na as an internal standard. The results were coupled with the microthermometry data to estimate the P-T conditions of brine + vapor entrapment (Seo et al., 2009).

Sulfur quantification in fluid inclusions was done using two different ICP-MS instruments, a sector-field MS and a quadrupole MS, on homogeneous inclusions with similar salinities (42.4 ± 1.2 wt% NaCl equiv.). The size of the inclusions being analyzed can limit the ability to detect sulfur. The quantification shows that the dominant components of the coexisting brine-vapor inclusions are NaCl, KCl, FeCl2, Cu, and S. The concentrations of Cu and S are very similar and follow the same trend, as seen in Figure 11, when normalized to Na (the dominant cation component). Figure 11 shows the correlation of S/Na with Cu/Na, with a slope of 1 and a molar ratio of 2:1 S:Cu. Figure 12 represents the fractionation behavior: some elements prefer the brine and some prefer the vapor. The elements are normalized to Pb, which prefers brine, and the figure shows that Au, Cu, and S are clearly correlated in their partitioning into the vapor. Figures 11 and 12 also indicate the significance of the environment in which the samples formed. The Sn-W samples show Cu and S concentrations that prefer the vapor, whereas the porphyry Cu-Mo-Au samples show Cu and S enrichment in the vapor phase relative to the salt components, though the absolute concentrations in vapor are lower than in the brine. The combined fluid phases in the porphyry Cu-Mo-Au samples are much richer in S, Cu, and Au than the Sn-W mineralizing fluids. The importance of sulfur and chloride as complexing agents in both fluid phases can be represented by the exchange equilibria:

Exchange equilibrium (1) shows that the preferred shift is toward Cu-S complexes in the vapor, while equilibria (2-4) show K, Na, and Fe stabilized as chloride complexes in the brine. The main significance is that Cu is preferentially stabilized in the vapor in the presence of S (Seo et al., 2009).
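The relationship between the S:Cu mass ratio of 1:1 and the quoted molar ratio of roughly 2:1 follows directly from the atomic masses of the two elements; a minimal check, using standard atomic masses:

```python
# Equal masses of S and Cu contain about twice as many moles of S,
# because a sulfur atom is roughly half the mass of a copper atom.
M_S = 32.06   # molar mass of sulfur, g/mol
M_Cu = 63.55  # molar mass of copper, g/mol

mass_S = mass_Cu = 1.0  # equal masses, i.e. the 1:1 mass line
molar_ratio = (mass_S / M_S) / (mass_Cu / M_Cu)
print(f"S:Cu molar ratio = {molar_ratio:.2f}:1")  # prints 1.98:1
```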

This means that salinity is not the main controlling factor in the formation of Cu deposits. Sulfur is now known to be important, since the 'efficiency of copper extraction from the magma is determined by the sulfur concentration in the exsolving fluids' (Seo et al., 2009). Magmatic sulfide melt inclusions have been observed and may have formed at the time of fluid saturation in the magma. Copper is precipitated out of the brine and vapor as chalcopyrite (CuFeS2) and/or bornite (Cu5FeS4) once cooled. The Cu- and S-enriched vapor phase makes the greatest contribution (Seo et al., 2009).

Figure 11 (next page). Concentrations of sulfur and copper in natural magmatic-hydrothermal fluid inclusions. Co-genetic pairs of vapor + brine inclusions ('boiling assemblages') in high-temperature hydrothermal veins from porphyry Cu-Au-Mo deposits (orange to red symbols), granite-related Sn-W deposits (blue-green), and a barren granitoid (black-gray) are shown. All vapor (a) and brine inclusions (b) have sulfur concentrations equal to copper or contain an excess of sulfur (the S:Cu = 1:1 line approximates a 2:1 molar ratio). Element ratios (c), which are not influenced by uncertainties introduced by analytical calibration (Heinrich et al., 2003), show an even tighter correlation along and to the right of the molar 2:1 line, with Cu/Na as well as S/Na systematically higher in the vapor inclusions (open symbols) than in the brine inclusions (full symbols). Averages of 3-14 single fluid inclusions in each assemblage from single healed fractures are plotted, with error bars of one standard deviation. Scale bars in the inclusion micrographs represent 50 ÎŒm. (Seo et al., 2009)

Figure 12 (above). Partitioning of elements between co-genetic vapor and brine inclusions. Fluid analyses including sulfur and gold are normalized to Pb, which is most strongly enriched in the saline brine (Seward, 1984). S, Cu, Au, As, and sometimes Mo preferentially fractionate into the vapor relative to the main chloride salts of Pb, Fe, Cs, K, and Na. A close correlation between the degrees of vapor fractionation of S, Cu, and generally also Au indicates preferential sulfur complexation of these metals in the vapor. The two boxes distinguish assemblages in which absolute concentrations of Cu and S are higher or lower in vapor compared with brine. This grouping correlates with geological environment, i.e., the redox state and pH of the source magmas and the exsolving fluids. (Seo et al., 2009)

Conclusion:

Throughout many years of research, multiple types of analysis have been performed, including LA-ICP-MS, sector-field MS, microthermometry, quadrupole MS, and ID-TIMS. Zircon crystals were dated to provide ages for the magmatic systems in which the ore deposits formed, and to show that multiple pulses can occur within the same system more than 1 m.y. apart. Fluid inclusions have been examined in great detail to bring further insight into these magmatic pulses. The pulses are critical to fluid-fluid mixing, which in turn affects the precipitation of Sn, forming cassiterite, in Sn-W veins. There are, however, multiple environments in which deposits can form, and porphyry-epithermal Cu-Au-Mo deposits precipitate different elements. Vapor-liquid fractionation between coexisting brine and vapor in the porphyry-epithermal system reflects the increased transport of Cu and Au in sulfur-enriched acidic magmatic-hydrothermal vapors (Pokrovski et al., 2008).

The formation of magmatic-hydrothermal ore deposits was once thought to depend mainly on the salinity of the fluid, whether hypersaline liquid or vapor. Salinity can be used to recognize an element's preferred fluid: Cu and Au prefer low-salinity vapors over the coexisting hypersaline fluid, while elements such as Pb and Fe prefer hypersaline conditions (Williams-Jones and Heinrich, 2005). Salinity can also serve as a precipitation mechanism for Cu and Au into the vapor phase; however, it has been discovered that reduced sulfur must be present. Fluid phase separation is critical for Cu and Au to partition into the vapor phase, which is aided by sulfur-enriched acidic magmatic-hydrothermal vapors. Sulfur is in turn essential for metal transport in fluids, increasing the solubility of Cu and Au. The low-salinity, Cu-Au-Mo-rich vapor phase is the greatest contributor to Cu-Au deposits (Pokrovski et al., 2008).

References:

Hedenquist, Jeffrey W., and Jacob B. Lowenstern. "The role of magmas in the formation of hydrothermal ore deposits." Nature 370.6490 (1994): 519-527.
Audetat, Andreas, Detlef GĂŒnther, and Christoph A. Heinrich. "Formation of a magmatic-hydrothermal ore deposit: Insights with LA-ICP-MS analysis of fluid inclusions." Science 279.5359 (1998): 2091-2094.
Heinrich, Christoph A. "Fluid-fluid interactions in magmatic-hydrothermal ore formation." Reviews in Mineralogy and Geochemistry 65.1 (2007): 363-387.
Seo, Jung Hun, Marcel Guillong, and Christoph A. Heinrich. "The role of sulfur in the formation of magmatic-hydrothermal copper-gold deposits." Earth and Planetary Science Letters 282.1 (2009): 323-328.
Von Quadt, Albrecht, et al. "Zircon crystallization and the lifetimes of ore-forming magmatic-hydrothermal systems." Geology 39.8 (2011): 731-734.
White, Noel C., and Jeffrey W. Hedenquist. "Epithermal gold deposits: styles, characteristics and exploration." SEG Newsletter 23.1 (1995): 9-13.
Lecumberri-Sanchez, Pilar, et al. "Salt precipitation in magmatic-hydrothermal systems associated with upper crustal plutons." Geology 43.12 (2015): 1063-1066.
Pokrovski, Gleb S., Anastassia Yu Borisova, and Jean-Claude Harrichoury. "The effect of sulfur on vapor-liquid fractionation of metals in hydrothermal systems." Earth and Planetary Science Letters 266.3 (2008): 345-362.
Williams-Jones, Anthony E., and Christoph A. Heinrich. "100th Anniversary special paper: Vapor transport of metals and the formation of magmatic-hydrothermal ore deposits." Economic Geology 100.7 (2005): 1287-1312.
Simmons, Stuart F., and Kevin L. Brown. "Gold in magmatic hydrothermal solutions and the rapid formation of a giant ore deposit." Science 314.5797 (2006): 288-291.


Improving agricultural productivity (focus on Tanzania)

Abstract:

Agriculture is the largest sector of Tanzania's economy, accounting for 26.8% of GDP and about 80% of the workforce. However, only a quarter of the 44 million hectares of land in Tanzania is used for agriculture. The biggest contributors to Tanzania's low agricultural productivity are the failure to respond to changing weather patterns, the lack of a consistent farming system, and the lack of awareness of different farming systems. In this meta-analysis, the possibility of improving agricultural productivity was therefore examined by evaluating the effectiveness of GM crops, assisted by either nitrogen fertilizers or legumes for biological nitrogen fixation. Original studies for inclusion were identified through keyword searches in relevant literature databanks such as Deerfield Academy's Ebscohost Database, Google Scholar, and Google. After evaluating many studies, GM crops emerge as a possible solution, but only under several conditions: companies like Monsanto must either allow farmers to save and exchange seeds without penalty or, as the WEMA project claims, continuously supply these seed varieties as farmers request them; scientists must perform studies that transfer from one area to another, in terms of the different agronomic and environmental choices necessary to implement either increased fertilizer use or legume biological nitrogen fixation; farmers must be educated about and receptive to GM technology, nitrogen fertilizer, and legume biological nitrogen fixation, including the effectiveness and efficiency of all three systems; and commercial banks, the government, and donors must be willing to sponsor the increase in fertilizer use or subsidize the costs.

Introduction

Agriculture is the largest sector of Tanzania's economy, accounting for 26.8% of GDP and about 80% of the workforce. However, only a quarter of the 44 million hectares of land in Tanzania is used for agriculture, and even of that quarter, much is damaged by soil erosion, low soil productivity, and land degradation. This is the result of several agricultural and economic problems, including poor access to improved seeds, limited modern technologies, dependence on rain-fed agriculture, lack of education on updated farming techniques, limited government funding, and limited availability of fertilizers. Tanzanian agriculture is characterized primarily by small-scale subsistence farming: approximately 85 percent of the arable land is used by smallholders cultivating between 0.2 ha and 2.0 ha. Tanzania devotes about 87% of its farmland to food crops, mainly banana, cassava, cereals, pulses, and sweet potatoes; the other 13% is used for cash crops, including cashew, coffee, pyrethrum, sugar, tea, and tobacco. Tanzania's food crop production is estimated at only 20-30% of potential yields; average food crop productivity stands at about 1.7 tons/ha, far below the potential of about 3.5 to 4 tons/ha.

The biggest drivers of Tanzania's low agricultural productivity are the dependence on rain-fed agriculture, the lack of a consistent farming system, and the lack of awareness of different farming systems. Because of this, many studies have been done to promote either the more traditional approach of chemical fertilizer use, the genetic approach of GM crops, or a more sustainable approach of using legumes for nitrogen fixation. I will be evaluating these three methods in this study. Both chemical fertilizers and legumes are currently being used by mostly uneducated Tanzanian farmers, but at a very low level.

This study focuses on each farming system in relation to maize especially, because maize is the most preferred staple food and cash crop in Tanzania and is grown in all of the country's agro-ecological zones. Over two million hectares of maize are planted per year, with average yields of 1.2-1.6 tonnes per hectare. Maize accounts for 31 percent of total food production and constitutes more than 75 percent of cereal consumption in the country. About 85 percent of Tanzania's population depends on it as an income-generating commodity. Annual per capita consumption of maize in Tanzania is estimated at over 115 kg, and national consumption is projected at three to four million tonnes per year.

A GM trial officially started last October in the Dodoma region, a semi-arid area in the central part of the country. Tanzania took a long time to approve this trial because of the strict liability clause in its Environment Management Biosafety Regulations, which stated that scientists, donors, and partners funding research would be held accountable for any damage that might occur during or after research on GMO crops. After the clause was revised, the trial began. It sets out to demonstrate whether a drought-tolerant GM white maize hybrid developed by the Water Efficient Maize for Africa (WEMA) project can be grown effectively in the country. Given Tanzania's dependence on rain-fed agriculture, this initiative could offer hope for increasing the productivity not only of maize but of other food and cash crops. The project is funded by the U.S. Agency for International Development, the Bill and Melinda Gates Foundation, and the Howard G. Buffett Foundation. The gene comes from a common soil bacterium, and the hybrid was developed by Monsanto, an agricultural company that develops seeds and farming systems, under the WEMA project. The GM seeds are priced to be affordable to farmers who work relatively small plots of land, and the maize is expected to increase yields by 25% during moderate drought.

Nitrogen fertilizers (NFs) are the conventional method, and therefore have the most recognition but also the most controversy. NFs have boosted the amount of food that farms can produce, and the number of people farmers can feed, by meeting crops' demand for nitrogen and increasing yield. The annual growth rate of world nitrogen fertilizer demand is 1.3%; of the overall increase in demand of 6 million tons of nitrogen between 2012 and 2016, 60 percent would be in Asia, 19 percent in America, 13 percent in Europe, 7 percent in Africa, and 1 percent in Oceania. However, NFs have been linked to numerous environmental hazards, including marine eutrophication, global warming, groundwater contamination, soil imbalance, and stratospheric ozone destruction. In Sub-Saharan Africa in particular, including Tanzania, nitrate runoff and leaching, mainly from commercial farms, have led to excessive eutrophication of fresh waters and threatened various fish species. This reflects farmers' lack of understanding of how much fertilizer to use on a plot rather than Tanzanian farmers having too much access to fertilizers. There are also health effects: infants who ingest water with high nitrogen levels can suffer gastrointestinal swelling and irritation, diarrhea, and protein digestion problems. Nitrogen leaches into groundwater as nitrate, which has been linked with blue-baby syndrome in infants, adverse birth outcomes, and various cancers. Economically speaking, nitrogen fertilizers have become a huge cost in agriculture.

Legume nitrogen fixation provides a sustainable alternative to costly and environmentally unfriendly nitrogen fertilizer for small-scale farms. Biological nitrogen fixation is the process by which inert N2 is converted into biologically useful NH3 by bacteria living in association with plants. Perennial and forage legumes, such as alfalfa, sweet clover, true clovers, and vetches, may fix 250-500 pounds of nitrogen per acre. In a study comparing the environmental, energetic, and economic factors of organic and conventional farming systems, crop yields and economics in legume-based organic systems varied relative to conventional systems depending on the type of crop, region, and growing conditions; however, the environmental benefits attributable to reduced chemical inputs, less soil erosion, water conservation, and improved soil organic matter were consistently greater in organic systems using legumes. Many factors nevertheless need to be in place for legumes to be the best option, including choosing the best growing system, growing conditions, and non-fixing crops to grow alongside them.
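For comparison with fertilizer application rates, which are usually quoted in kg N per hectare, the 250-500 lb/acre range above converts as follows (a unit-conversion sketch using standard conversion factors):

```python
# Convert legume nitrogen fixation rates from lb N/acre to kg N/ha.
LB_TO_KG = 0.4536    # kilograms per pound
ACRE_TO_HA = 0.4047  # hectares per acre

def lb_per_acre_to_kg_per_ha(rate_lb_acre):
    return rate_lb_acre * LB_TO_KG / ACRE_TO_HA

low, high = 250, 500  # lb N/acre, as quoted in the text
print(f"{lb_per_acre_to_kg_per_ha(low):.0f}-"
      f"{lb_per_acre_to_kg_per_ha(high):.0f} kg N/ha")  # prints 280-560 kg N/ha
```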

The reason I wanted to study agriculture in Tanzania in particular is my love for the country after spending two weeks there learning about sustainable development and sustainable agriculture. Understanding the impact that agriculture has on the people and the economy is inspiring to me and connects to my passion for easing global hunger.

The purpose of this study is to propose a solution to Tanzania's long-standing fight to improve agricultural yields, with heavy consideration of drought tolerance, by evaluating GM crops assisted by either increased nitrogen fertilizer use or legumes for biological nitrogen fixation. Each approach brings obstacles and challenges, but also rewards if done properly. I believe none of these methods has taken hold because of the lack of proper implementation, maintenance, and funding. The study will therefore also address those concerns for each method and discuss a plan to follow if GM crops are used with legumes, nitrogen fertilizers, or both.

Methods and Materials

Original studies for inclusion in this meta-analysis were identified through keyword searches in relevant literature databanks such as Google, Google Scholar, and Deerfield Academy's Ebscohost Database. I searched combinations of keywords related to agriculture in Tanzania, GM technology, chemical fertilizer use in Tanzania, and legume nitrogen fixation. Concrete keywords related to agriculture in Tanzania were "agriculture in Tanzania," "problems affecting agriculture in Tanzania," and "farm yields in Tanzania." Concrete keywords related to GM technology were "GM crops," "GM trial in Tanzania," "impact of GM crops," "drought tolerant maize," "herbicide tolerant," and "insect resistant." Concrete keywords related to chemical fertilizer use in Tanzania included "fertilizer assessment in Tanzania," "fertilizers costly in Tanzania," "environmental impacts of fertilizer use in Tanzania," and "economic impacts of fertilizers in Tanzania." Concrete keywords for legume nitrogen fixation were "legume nitrogen fixation," "improving yields with legumes," "best legumes for nitrogen fixation," and "economic impact of legume nitrogen fixation." The search was completed by February 2017.

Most of the publications found through Google were news articles, academic journal articles, and web pages, while Google Scholar and Deerfield Academy's Ebscohost Database yielded book chapters, conference papers, working papers, academic journal articles, and reports in institutional series. Articles published in academic journals had all passed through a peer-review process. Some of the working papers and reports are published by research institutes or government organizations, while others are NGO publications.

Each published work had to meet certain criteria to be included.

If it was a news article, it had to be from a credible news source such as the Guardian, the New York Times, or the Washington Post.
If it was from an academic journal, it had to be from a credible organization, institution, or university, such as the World Bank, the UN, or Wellesley College.

The study had to be an empirical investigation of the economic, health, or environmental impacts of GM crops (in particular GM maize), legume nitrogen fixation, or chemical fertilizers, with a focus on Tanzania.

The study had to report the impacts of GM crops, legume nitrogen fixation, or chemical fertilizers, with a focus on Tanzania, in terms of one or more of the following outcome variables: yield, farmer profits, and environmental, economic, and health advantages and disadvantages.

Results and Discussion

Problems with maize production

According to the African Agricultural Technology Foundation, in a policy brief detailing the WEMA project, despite the importance of maize as the main staple crop, average yields in farmers' fields fall well short of the estimated potential of 4-5 metric tonnes per hectare. While farmers are keen to increase maize productivity, their efforts are hampered by a wide range of constraints. The Foundation has identified three reasons for the low productivity of maize, which can be applied to any crop in Tanzania grown in a semi-arid region:

Inadequate use of inputs such as fertiliser, improved maize seed, and crop protection chemicals. The inputs are either unavailable or too expensive for farmers to afford.

Inadequate access to information and extension services. Many farmers continue to grow unsuitable varieties because they have no access to information about improved maize technologies, owing to low levels of interaction with extension services.

Drought is a major threat to maize production in many parts of Tanzania. Maize production can be a risky and unreliable business because of erratic rainfall and the high susceptibility of maize to drought; the performance of local drought-tolerant cultivars is poor, and maize losses can reach 50 percent due to drought-related stress.

These constraints highlight exactly what the problem is with increasing productivity. Without addressing these three constraints for all crops and farmers, Tanzania's agricultural productivity cannot increase.

GM Crop Evaluation

Transgenic plants are plants that have been genetically modified using recombinant DNA technology. Scientists have turned to this method for many reasons, including engineering resistance to abiotic stresses, such as drought, extreme temperatures, or salinity, or to biotic stresses, such as insects and pathogens, that would normally be detrimental to plant growth or survival. In 2007, for the twelfth consecutive year, the global area planted with biotech crops continued to increase, with a growth rate of 12% across 23 countries. As of 2010, 14 million farmers in 25 countries, including 16 developing countries, grew GM crops.

Right now, South Africa is the only African country that has fully adopted GM crops, including HT/Bt/HT-Bt cotton, HT/Bt/HT-Bt maize, and HT soybean, which are among Tanzania's major food and cash crops. South Africa gained an income of US$156 million after largely switching to biotech crops between 1998 and 2006. A study published in 2005 by Marnus Gouse, a researcher in the Department of Agricultural Economics, Extension and Rural Development at the University of Pretoria, South Africa, involved 368 small and resource-poor farmers and 33 commercial farmers, the latter divided into irrigated and dry-land maize production systems. The data indicated that under irrigated conditions, Bt maize resulted in an 11% higher yield, a cost saving on insecticides of US$18/ha (equivalent to a 60% cost reduction), and an increased income of US$117/hectare. Under rain-fed conditions, Bt maize resulted in an 11% higher yield, a cost saving on insecticides of US$7/ha (equivalent to a 60% cost reduction), and an increased income of US$35/hectare.
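The insecticide figures quoted above imply a baseline cost that is not stated directly; a back-of-the-envelope sketch (the US$30/ha and US$12/ha baselines are inferred from the quoted savings, not reported in the study):

```python
# If a given insecticide saving represents a 60% cost reduction,
# the implied conventional insecticide bill is saving / fraction.
def implied_baseline(saving_usd_per_ha, fraction_saved):
    return saving_usd_per_ha / fraction_saved

irrigated = implied_baseline(18.0, 0.60)  # US$/ha under irrigation
rain_fed = implied_baseline(7.0, 0.60)    # US$/ha under rain-fed conditions
print(f"irrigated: US${irrigated:.0f}/ha, rain-fed: US${rain_fed:.0f}/ha")
# prints irrigated: US$30/ha, rain-fed: US$12/ha
```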

Richard Sitole, chairperson of the Hlabisa District Farmers' Union in KwaZulu-Natal, South Africa, said 250 emergent subsistence farmers in his union planted Bt maize on their smallholdings, averaging 2.5 hectares, for the first time in 2002. His own yield increased by 25%, from 80 bags for conventional maize to 100 bags, earning him an additional income of US$300 as of November 2007. He said, "I challenge those who oppose GM crops for emergent farmers to stand up and deny my fellow farmers and me the benefit of earning this extra income and more than sufficient food for our families."

Because South Africa has the necessary resources, funding, and experience in biotech crops, it can thrive in both the international public and private sectors and continue to improve its technology, just as the other 23 countries can. It is therefore up to South Africa especially to share this knowledge with farmers in other African countries, and in particular Tanzania, so that Tanzanian farmers can advance agriculture as South Africa has, if this proves to be the best route to take.

NGO Opposition to GM Crops

Genetically modified crops have been opposed for several years by non-governmental organizations. Because they are not for profit, NGOs have gained considerable social trust, and so people listen to them. Much of the NGO opposition has come from European-based organizations such as Greenpeace International and Friends of the Earth International, joined in the anti-GMO campaign by many U.S.- and Canada-based organizations. Notice that these are all rich countries, which have influence over poorer countries. This kind of influence is harmful to countries that lack research on or experience with GMOs, such as Tanzania. People in Europe and North America are understandably less attracted to GMOs because their farming is already very productive. But in poor countries, where as many as 60 percent of people are poor farmers who could benefit from this technology, and where farmers rely almost entirely on food crops rather than crops for animal feed or industrial use as in the U.S., today's bans on GMO foods are specifically damaging. It becomes more shameful still when anti-GMO campaigners from rich countries intentionally hide from developing-country citizens the published conclusions of their own national science academies back home, which continue to show that no convincing evidence has yet been found of new risks to human health or the environment from this technology.

Therefore, if GMOs were to be implemented in Tanzania, farmers would have to be trained in and taught the many benefits of GMOs. This training should be provided by the organizations supplying the GMO seeds, such as Monsanto. Without this training, GM crops could fail just like other methods, through lack of knowledge and maintenance.

Importance of Seed-Saving

More than 90% of seeds sown by farmers are saved on their own farms. Saving and exchanging seeds is important to Tanzanian farmers, and to farmers in general, for several reasons. According to the Permaculture Research Institute, saving seeds matters because the big corporations farmers buy from are only interested in the most profitable hybrids and 'species' of plants, which decreases biodiversity by condensing the market and discontinuing many crop varieties. When farmers save seeds with good genes and strong traits, the likelihood of better quality increases, as does the crops' ability to adapt to their environment; over generations, the crops also develop stronger resistance to pests. However, if GMO seeds provided by Monsanto were the sole practice in Tanzania, farmers could not save or exchange their seeds. As explained on the Monsanto website, "When farmers purchase a patented seed variety, they sign an agreement that they will not save and replant seeds produced from the seed they buy from us." Therefore, unless USAID, the Bill and Melinda Gates Foundation, and other organizations plan to support the cost of buying seeds on a regular basis, farmers will not be able to maintain their farms if they cannot afford GMO seeds. Tanzanian farmers would be put at risk if this system were implemented without financial support, and if they were to save or replant seeds, they could face legal action. Seeds, however, are deeply important to Tanzanians. Joseph Hella, a professor at Sokoine University of Agriculture in Morogoro, Tanzania, insisted in the documentary Seeds of Freedom in Tanzania that "any effort to improve farming in Tanzania depends primarily on how we can improve farmers' own indigenous seeds." The practice of GMO crops does not take this into account. Janet Maro, director of Sustainable Agriculture Tanzania, said, "These seeds are our inheritance, and we will pass them on to our children and grandchildren. These too are quality seed and a pride for Tanzania. But the law does not protect these seeds."

However, if the drought-tolerant white maize trial works, WEMA claims that farmers can choose to save the seeds for replanting, though, as with all hybrid maize seed, production drops heavily when harvested grain is replanted. Also, to make the improved seeds affordable, the new varieties will be licensed to the African Agricultural Technology Foundation (AATF) and distributed through local seed suppliers on a royalty-free basis. According to Oliver Balch, a freelance writer specialising in the role of business in society, if companies like Monsanto end up monopolizing the seed industry, African farmers fear becoming locked into cycles of financial obligation and losing control over local systems of food production, because, unlike traditional seeds, new drought-tolerant seeds have to be purchased annually.

Lack of accessibility

The biggest problems Tanzania faces in adopting drought-tolerant GM seeds are unavailability and unaffordability. A study, Drought tolerant maize for farmer adaptation to drought in sub-Saharan Africa: Determinants of adoption in eastern and southern Africa, examined six African countries to identify the different setbacks to using drought-tolerant (DT) seeds. On a figure representing these setbacks, seed availability and seed price were the biggest concerns for Tanzanian smallholder farmers, and high seed price was a commonly mentioned constraint in Malawi, Tanzania, and Uganda. Because many Tanzanian and Malawian farmers grow local maize, the switch to DT maize would entail a substantial increase in seed cost. The study also observed that, compared with younger households, older households were more likely to grow local maize, which could reflect the unwillingness of older farmers to give up familiar production practices. Households with more educated members were more likely to grow DT maize and less likely to grow local maize, which supports the point that general education, and education on GM crops, should be the primary goal before implementing any method in Tanzania. For example, some Tanzanian farmers were unwilling to try DT maize varieties because they were perceived as low yielding, late maturing, and labor increasing. Educated people are more likely to process information about new technologies quickly and effectively.

According to the study, a few things need to be in place if DT maize is to thrive. First, the seed supply to local markets must be adequate to allow farmers to buy, experiment with, and learn about DT maize. Second, to make seed more accessible to farmers with limited cash or credit (another major barrier), seed companies and agro-dealers should consider selling DT maize seed in affordable micro-packs. Finally, enhanced adoption depends on enhanced awareness, which could be achieved through demonstration plots, field days, and distribution of print and electronic promotional materials.

According to the Third World Network and African Centre for Biodiversity (ACB), the WEMA project is set to shift the focus and ownership of maize breeding, seed production and marketing almost exclusively into the private sector, in the process forcing small-scale farmers in Sub-Saharan Africa into the adoption of hybrid maize varieties and their accompanying synthetic fertilizers. Gareth Jones, ACB’s senior researcher, says that Monsanto and the rest of the biotechnology industry are using this largely unproven technology to weaken biosafety legislation on the continent and expose Africa to GM crops generally. With Tanzania’s unpredictable weather and seeds incapable of growing without certain inputs like fertilizers, purchasing seeds annually becomes more of a burden and reduces farmers’ flexibility in their farming decisions. Jones also says the costly inputs and the very diverse agro-ecological systems in Sub-Saharan Africa mean the WEMA project will only benefit a select number of small-scale farmers, with evidently no consideration for the majority who will be abandoned. The argument about seed costs and the monopoly of big seed companies comes up again, as Jones notes that the costs and technical requirements of hybrid seed production are presently beyond the reach of most African seed companies, and a focus on this market will inevitably lead to industry concentration, as has happened elsewhere, enabling the big multinational agrochemical seed companies to dominate.

Lack of progress in drought-tolerance

The United States is an example to take into consideration when evaluating GM crops, because after more than 17 years of field trials, only one GM drought-tolerant maize variety has been released. In fact, according to Gareth Jones, independent analysis has shown that, under moderate drought conditions, the particular maize variety that has been released only increased maize productivity by 1% annually, which is equivalent to improvements gained in conventional maize breeding.

Monsanto’s petition to the USDA cites results from two growing seasons of field trials in several locations in the United States and Chile that faced varying levels of water availability. Company scientists measured drought through the amount of moisture in soil, and compared the crop’s growth response with that of conventional commercial varieties of corn grown in the regions where the tests were performed. Monsanto reported a reduction in losses expected under moderate drought of about 6 percent, compared with non-GE commercial corn varieties, although there was considerable variability in these results. That means that farmers using Monsanto’s cspB corn could see a 10 percent loss of yield rather than a typical 15 percent loss under moderate drought, or an increase of about 8 bushels per acre, based on a typical 160-bushel non-drought yield. However, the USDA asserts that Monsanto’s cspB corn is effective primarily under moderate, not severe, drought conditions, so there is no real benefit under extreme drought. Because cspB corn is not beneficial under severe drought conditions, it would not be effective in semi-arid regions of Tanzania, like the drought-stricken Dodoma region.
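The yield figures above can be checked with a short calculation; this is a sketch using only the numbers quoted in the paragraph (the 160-bushel baseline and the 15% versus 10% loss rates):

```python
# Worked check of the cspB corn yield figures quoted above.
typical_yield = 160          # bushels per acre, non-drought baseline
conventional_loss = 0.15     # typical loss under moderate drought
cspb_loss = 0.10             # reported loss with cspB corn

conventional = typical_yield * (1 - conventional_loss)  # bushels harvested
cspb = typical_yield * (1 - cspb_loss)

gain = cspb - conventional
print(gain)  # 8.0 bushels per acre, matching the figure in the text
```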

Former UK Environment Secretary Owen Paterson accused the EU and Greenpeace of condemning millions of people in developing countries to starvation and death by their stubborn refusal to accept the benefits of genetically modified crops. In response, Esther Bett, a farmer from Eldoret in Kenya, said last week: “It seems that farmers in America can only make a living from GM crops if they have big farms, covering hundreds of hectares, and lots of machinery. But we can feed hundreds of families off the same area of land using our own seed and techniques, and many different crops. Our model is clearly more efficient and productive. Mr Paterson is wrong to pretend that these GM crops will help us at all.” Million Belay, coordinator of the Alliance for Food Sovereignty in Africa, highlights that “Paterson refers to the use of GM cotton in India. But he fails to mention that GM cotton has been widely blamed for an epidemic of suicides among Indian farmers, plunged into debt from high seed and pesticide costs, and failing crops.”

He also declared:

“The only way to ensure real food security is to support farmers to revive their seed diversity and healthy soil ecology.”

Legume Biological Nitrogen Fixation vs. Nitrogen Fertilizers

The sustainable practice of intercropping nitrogen-fixing legumes with cash and food crops comes with both pros and cons. For farmers who cannot afford nitrogen fertilizer, biological nitrogen fixation (BNF) could be a key solution to sourcing nitrogen for crops. BNF can be a major source of nitrogen in agriculture when symbiotic N2-fixing systems are used, but the nitrogen contributions from nonsymbiotic microorganisms are relatively minor and therefore require nitrogen fertilizer supplementation. The amount of nitrogen input is reported to be as high as 360 kg N ha-1. Legumes serve many purposes, including being primary sources of food, fuel and fertilizer, and helping to enrich soil, preserve moisture and prevent soil erosion. According to a study, Biological nitrogen fixation and socioeconomic factors for legume production in sub-Saharan Africa: a review, which reviews past and ongoing interventions in Rhizobium inoculation in the farming systems of Sub-Saharan Africa, the high cost of fertilizers in Africa and the limited market infrastructure for farm inputs have directed current research and extension efforts toward integrated nutrient management, in which legumes play a crucial role. Research on the use of Rhizobium inoculants for production of grain legumes showed it is a cheaper and usually more effective agronomic practice for ensuring adequate N nutrition of legumes, compared with the application of N fertilizer.

Tanzania’s total fertilizer consumption was less than 9 kilograms (kg) of fertilizer nutrient per hectare of arable land in 2009/10, compared with 27 kg in Malawi and 53 kg in South Africa, and that represented a substantial increase from the average 5.5 kg/ha used four years earlier. 82 percent of Tanzanian farmers do not use fertilizer, mainly because they lack knowledge of its benefits, face rising fertilizer costs, and do not know how to access credit facilities. Although commercial banks in the country claim that they support agriculture, many farmers continue to face hurdles in readily accessing financing for agricultural activities, including purchasing fertilizer. The lack of high-yield seed varieties and the low level of fertilizer use with either traditional or improved seeds is a major contributor to low productivity in Tanzania and thus to the wide gap between potential and observed yields.

Many believe that nitrogen fertilizers are mostly responsible for eutrophication and the threat to fish species. However, Robert Howarth, a biogeochemist, ecosystem scientist and professor at Cornell University, says that the real culprits in countries like Tanzania are insufficient treatment of water from industries, erosion from infrastructure construction, runoff of feed and food waste from both municipal and industrial areas, atmospheric nitrogen deposition and nutrient leaching. In fact, the average nitrogen balance in Tanzania in 2000 was as low as -32 kg N ha-1 yr-1, an amount similar to many other Sub-Saharan countries.

However, if Tanzania is to continue using nitrogen fertilizers, the nitrogen agronomic use efficiency needs to be improved. Nitrogen agronomic use efficiency is defined as the yield gain per unit amount of nitrogen applied, when plots with and without nitrogen are compared. Right now efficiency in smallholder farmers’ fields is still low because of poor agronomic practices, including blanket fertilizer recommendations, fertilizer application rates too low to produce a significant effect, and unbalanced fertilization. Recent interventions in Sub-Saharan Africa, including fertility management, showed that nitrogen agronomic use efficiency could be doubled when good agronomic practices are adopted. The dilemma is that in SSA, including Tanzania, farming is mainly practiced by resource-disadvantaged smallholder farmers who cannot afford most of the inputs at actual market prices.
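The definition above translates directly into a formula. This is a minimal sketch; the plot yields and nitrogen dose below are hypothetical illustrations, not figures from any of the cited studies:

```python
def nitrogen_agronomic_efficiency(yield_with_n, yield_without_n, n_applied):
    """Yield gain (kg grain per ha) per kg of nitrogen applied (kg N per ha)."""
    return (yield_with_n - yield_without_n) / n_applied

# Hypothetical paired plots: 3400 kg/ha with 40 kg N/ha versus 2600 kg/ha unfertilized.
print(nitrogen_agronomic_efficiency(3400, 2600, 40))  # 20.0 kg grain per kg N
```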

In a study called Narrowing Maize Yield Gaps Under Rain-fed Conditions in Tanzania: Effect of Small Nitrogen Dose, the authors evaluated the potential of small amounts of nitrogen fertilizer as a measure to reduce the maize yield gap under rain-fed conditions. From the experiment, it was observed that grain yields were similar in all water-stressed treatments regardless of nitrogen dose, suggesting that water stress imposed after the critical growth stage has no significant effect on final grain yield. The explanation offered is that within 45-50 days after sowing, the plant should have accumulated the biomass required for grain formation and filling, so water stress occurring afterwards has no effect on yield. For resource-poor farmers, low doses of nitrogen fertilizer applied after crop establishment may make a substantial contribution to food security compared with non-fertilized crop production. This approach can work well in environments with low seasonal rains because the yield gain is higher than when high nitrogen quantities are applied in a water-scarce environment. In its conclusion, the study highlights a limitation: the yield-gap-narrowing strategy was evaluated at plot scale. Further study is needed to investigate the response to small nitrogen doses as a strategy for bridging maize yield gaps across multiple fields and many seasons, especially under farmers’ management.

Conclusion:

To increase agricultural productivity, there are many factors to consider; drought tolerance is just one of them. Semi-arid regions in Tanzania pose a serious problem for rain-fed agriculture. Drought-tolerant GM crops could be part of the solution, but a lot of work still needs to be done. Implementing these drought-tolerant seed varieties can only be a solution if:

The WEMA project for GM white maize is successful

Companies like Monsanto are willing either to allow farmers to save and exchange seeds without penalty OR, as the WEMA project claims, to continuously supply these seed varieties as requested by farmers. This ensures that farmers retain the flexibility to control their crop production.

Scientists perform studies whose results are transferable from one area to another, in terms of the different agronomic and environmental choices necessary to implement either an increase in fertilizer use or legume biological nitrogen fixation.

Farmers are educated about and receptive to GM technology, nitrogen fertilizer and legume biological nitrogen fixation, including the effectiveness and efficiency of all three systems.

Commercial banks, the government and donors are willing to sponsor the increase in fertilizer use or subsidize the costs.


Working with hazard group 2 organisms within a containment level 2 laboratory

There are many aspects that must be reviewed when entering the laboratory, and many regulations that need to be followed to ensure not just your own safety but the safety of those working around you. Inhalation is one hazard within the laboratory. Many procedures involve the breaking up of fluids containing organisms and the scattering of tiny droplets named aerosols. Some of these droplets fall, contaminating hands and benches, while others are very small and dry out immediately. The organisms contained within these dried aerosols are named droplet nuclei; they are airborne and move about in small air currents. If inhaled, there is a potential risk of infection, so it is important that nothing is inhaled within the laboratory.

Ingestion of organisms is another hazard within the laboratory. There are many ways in which organisms may be introduced into the mouth, such as mouth pipetting, fingers contaminated by handling spilled cultures, or aerosols, which can transfer micro-organisms to the mouth directly or indirectly through eating, nail biting, licking labels, etc. Injection is another hazard: infectious material may be injected through broken culture containers, glass Pasteur pipettes or other broken glass or sharp objects. Through the skin and eye, small abrasions or cuts that may not be visible to the naked eye can allow microbes to enter the body, and splashes of bacterial culture into the eye could result in infection.

This laboratory consisted of working with Hazard Group 2 organisms within a containment level 2 laboratory. The hazard group is the level given to an organism indicating how dangerous it could be. Hazard Group 2 organisms can cause human disease and may be a hazard to employees, although they are unlikely to spread to the community and there is usually effective prophylaxis or treatment available; examples include Salmonella typhimurium, Clostridium tetani and Escherichia coli.

Within containment level 2 laboratories there are many health and safety procedures to follow, below are examples of health and safety procedures set for containment level 2 laboratory:

• Protective eye equipment is necessary within the laboratory apart from when using microscopes

• There must be specified disinfection procedures in place

• Bench surfaces must be impervious to water, easy to clean and resistant to acids, alkalis, solvents and disinfectants.

• Laboratory procedures that give rise to infectious aerosols must be conducted in a microbiological safety cabinet, isolator or be otherwise suitably contained.

• When contamination is suspected, hands should immediately be decontaminated after handling infective materials and before leaving the laboratory.

• Laboratory coats, which should be side or back fastening, should be worn and removed when leaving the laboratory.

Within this laboratory, Glitter Bug was applied to the hands and analysed under the light box. Glitter Bug is a hand lotion with a UV fluorescent glow. When placed under UV light, it glows in the places where germs, invisible to the naked eye, are located.

Loffler’s Methylene blue is a simple stain that was used to stain Saccharomyces cerevisiae.

This is a simple stain which is used for the analysis and understanding of bacterial morphology. It is a cationic dye which stains the cell blue in colour and can be used for the staining of gram-negative bacteria.

Results

Below are the results gathered from the Glitter Bug test before washing our hands. The blue areas indicate where the Glitter Bug lotion was most fluorescent under the light.

Introduction

Gram Stain

In microbiology, one of the most common stains to carry out is the Gram stain, used to observe the differentiation between microbiological organisms. It is a differential stain which can differentiate between gram-positive and gram-negative bacteria. Gram-positive bacteria stain purple/blue in colour and gram-negative bacteria stain red/pink in colour. This differentiation reflects variation in cell wall structure, and the stained cells also reveal cell shape and arrangement.

The Gram stain has many advantages: it is very straightforward to perform, it is cost effective, and it is one of the quickest methods used to determine and classify bacteria.

The gram stain is used to provide essential information regarding the type of organisms present directly from growth on culture plates or from clinical specimens. The stain is also used within the screening of sputum specimens to investigate acceptability for bacterial culture and could reveal the causative organisms within bacterial pneumonia. Alternatively, the gram stain can be used for the identification of the existence of microorganisms in sterile bodily fluids such as synovial fluid, cerebrospinal fluid and pleural fluid.

Spore stain

An endospore stain is also a differential stain, used to visualize bacterial endospores. The production of endospores is an essential characteristic for some bacteria, enabling them to resist many detrimental environments such as extreme heat, radiation and chemical exposure. Spores contain storage materials and possess a relatively thick wall. Because this thick wall cannot be penetrated by normal stains, either heat must be applied to allow the stain to penetrate the spore, or the stain must be left on for a longer period to allow penetration. The identification of endospores is very important in clinical microbiology when analysing a patient’s body fluid or tissue, as there are very few spore-forming species. There are two extensively pathogenic spore-forming groups, Bacillus and Clostridium, which together cause a variety of lethal diseases such as tetanus, anthrax and botulism.

The Bacillus species, Geobacillus species and Clostridium species all form endospores, which develop within the vegetative cell. These spores are resistant to drying and serve to ensure survival. They develop in unfavourable conditions and are metabolically dormant and inactive until conditions are favourable for germination, when they return to their vegetative state.

The Schaeffer-Fulton method is a technique designed to isolate endospores through staining. The malachite green stain is soluble in water and has a low affinity for cellular material, so the vegetative cells can be decolourised with water. Safranin is then applied to counterstain any cells that have been decolourised. The result is vegetative cells stained pink and endospores stained green.

1. The bacteria used in this laboratory were Salmonella poona and Bacillus cereus; both are rod-shaped cells (Salmonella poona is gram negative, while Bacillus cereus is gram positive). Another bacterium with the same rod shape as Salmonella poona and Bacillus cereus is Klebsiella pneumoniae, which belongs to the genus Klebsiella and the species K. pneumoniae. A further example is Acinetobacter baumannii, which belongs to the genus Acinetobacter and the species A. baumannii.

2. The loop is sterilised in the Bunsen burner flame by placing the circular portion of the loop into the cooler (blue) part of the flame and moving it up into the hotter part of the flame until it is cherry red. If the loop is placed into the hot part of the flame first, the material on the loop (including bacteria) might spurt out as an aerosol and some bacteria may not be destroyed. Once the loop is cherry red, it has been sterilised by incineration through dry heat and is ready for immediate use. If the loop is then laid down or touched against anything it will need to be sterilised again; however, loops should never be laid on benches.

3. There are many possible problems that could affect a slide smear. For example, excessive heat during fixation can alter the cell morphology, making the cells much easier to decolourise. Another problem is a low concentration of crystal violet, which results in stained cells that are easily decolourised. A third possible problem is excessive washing between steps, as crystal violet can wash out with the addition of water when exposed for too long. The last possibility is excessive counterstaining: because the counterstain is a basic dye, over-exposure can replace the crystal violet-iodine complex within gram-positive cells.

4. Hand hygiene is a necessity within the laboratories. It is the first line of defence and is considered the most crucial procedure for preventing the spread of hospital-acquired infection.

The following steps describe the appropriate hand washing technique:

• Wet hands with warm running water

• Enough soap must be applied to cover all surfaces

• Thoroughly wash all parts of the hands and fingers up to the wrist, rubbing hands together for at least 15 seconds

• Hands should then be rinsed under running water and dried thoroughly with paper towels

• Paper towels should be used to turn off taps before discarding the towels in the waste bin.

1. An example of a gram-positive bacterium is Propionibacterium propionicus, which belongs to the genus Propionibacterium and the species P. propionicus.

An example of a gram-negative bacterium is Yersinia enterocolitica, which belongs to the genus Yersinia and the species Y. enterocolitica.

2. The Gram stain can differentiate between gram-positive and gram-negative bacteria. Gram-positive bacteria possess a thick layer of peptidoglycan in their cell walls, but the lipid content of the wall is low, resulting in small pores; these close as the cell wall proteins are dehydrated by the alcohol, so the CV-I complex is retained and the cells remain blue/purple. Gram-negative bacteria possess a thinner peptidoglycan wall and a high volume of lipid in their cell walls, resulting in large pores that remain open when acetone-alcohol is added. The CV-I complex is then lost through these large pores and the gram-negative bacteria appear colourless. Once the counterstain is applied, the cells turn pink, because the counterstain enters the cells through the large pores in the wall.

3. There are many problems which could arise during the production of a bacterial smear. These include having a dirty slide that is greasy or coated with dirt and dust; this produces unreliable results because the smear containing the desired microbes washes off the slide during staining, or the bacterial suspension does not spread out evenly when placed on the slide. Another possible problem is a smear that is too thick, which puts too many cells on the slide and leaves poor penetration of the microscope light through the smear. However, if the smear is too thin, locating the bacterial cells is time-consuming.

Germination is also a complex process and is normally triggered by the presence of nutrients (although high temperatures are also sometimes required to break the dormancy of the spore). The events during germination include:

• Swelling of the spore

• Rupture or absorption of spore coat(s)

• Loss of resistance to environmental stresses

• Release of the spore components

• Return to metabolically active state

Outgrowth of the spore occurs when the protoplast emerges from the remains of the spore coats and develops into a vegetative bacterial cell.

Introduction

The human body and the environment both contain a vast number and variety of bacteria in mixed populations, such as in the gut and soil. Bacteria mixed within such varied populations must be separated into pure culture to investigate and determine the identity of each bacterium. Obtaining a pure culture requires that the number of organisms present is decreased until single, isolated colonies are obtained. This can be accomplished through the streak plate technique or through liquid culture dilutions on a spread plate.

The streak plate technique is used to check the purity of cultures that must be maintained over long periods of time. Contamination by other microbes can be detected through regular sampling and streaking. The technique has several uses; for example, an expert practitioner can begin a new maintained culture by selecting an appropriate isolated colony of an identifiable species with a sterile loop and then growing those cells in a nutrient broth.

When bacteria in a mixed population are streaked onto a general-purpose medium, for example nutrient agar, single, isolated colonies are produced; however, colony morphology does not provide an immediate, reliable means of identification. In practice, microbiologists use differential and selective media in the early stages of separation and provisional identification of bacteria before subculturing the organisms to a suitable general-purpose medium. The identity of the subcultured organisms can then be confirmed using a range of suitable tests.

Selective and differential media are used for the isolation and identification of particular organisms. A variety of selective and differential media are used within medical diagnostics, water pollution laboratories, and food and dairy laboratories.

Differential media normally contain a substrate that can be broken down (metabolised) by bacterial enzymes. The effects of the enzyme can then be observed visually in the medium. Differential media may possibly contain a carbohydrate for example glucose or lactose as the substrate.

Selective media are media that contain one or more antimicrobial chemicals; these could be salts, dyes or antibiotics. The antimicrobial chemicals select for specific bacteria while inhibiting the growth and development of unwanted organisms.

Cysteine lactose electrolyte deficient agar (CLED) is a differential culture medium which is used in the isolation of gut and urinary pathogens including Salmonella, Escherichia coli and Proteus species. CLED Agar sustains the growth and development of a variety of different contaminants such as diphtheroids, lactobacilli, and micrococci.

CLED can be used to differentiate between naturally occurring gut organisms, e.g. E. coli, and gut pathogens, e.g. Salmonella poona, in a sample of faeces. There are many advantages of using CLED agar for urine culture: it gives good discrimination of gram-negative bacteria through lactose fermentation and colony appearance; it is very cost effective; and it inhibits the swarming of Proteus spp., which are frequently involved in urinary tract infections.

CLED also contains lactose as a substrate and a dye named Bromothymol Blue which demonstrates changes in pH. The pH of CLED plates is neutral, so the plates are pale green in colour. Bacteria such as E. coli that produce the enzyme β-galactosidase break down lactose by fermentation to produce a mixture of lactic and formic acid, making the pH acidic. The colonies and medium then turn yellow, indicating lactose positive. Lactose-negative bacteria cannot ferment lactose because they do not produce β-galactosidase, resulting in pale colonies on CLED.

MacConkey Agar (MAC) is a selective medium due to the presence of bile salts and crystal violet, which inhibit the growth of most gram-positive organisms, while lactose provides a source of fermentable carbohydrate. MacConkey is designed to isolate and differentiate enterics based on their ability to ferment lactose. Neutral red is a pH indicator that turns red at a pH below 6.8 and is colourless at any pH greater than 6.8.

Organisms that ferment lactose, and thereby produce an acidic environment, appear pink because the neutral red turns red. Bile salts may also precipitate out of the medium surrounding the growth of fermenters because of the change in pH. Non-fermenters produce colourless colonies. In MacConkey agar, the substrate is lactose, which is fermented by lactose-positive bacteria, e.g. E. coli, to lactic acid and formic acid, making the medium acidic. The dye neutral red then changes colour and colonies of E. coli appear violet red. Lactose-negative bacteria produce pale colonies, and therefore MAC can be used to select out and differentiate between naturally occurring gut organisms and gut pathogens.

1. Figure 13 shows that the majority of the colonies used in laboratory 3 had an entire colony edge and were flat in elevation. The streak plate method obtains single colonies by first streaking a portion of the agar plate with an inoculum and then streaking successive areas of the plate to dilute the original inoculum so that single colony-forming units (CFUs) give rise to isolated colonies.

2. Potential problems that could lead to unsuccessful plates or slants include placing the loop in the inner blue flame when sterilising it and not giving it time to cool down before placing it directly onto the plate, killing the bacteria being streaked. Another problem could be insufficient flaming between the quadrants, leaving the loop non-sterile and leading to contamination by other organisms.

3. A bacterial cell is a microscopic single-celled organism which thrives in diverse environments. A bacterial colony is a discrete accumulation of a significantly large number of bacteria, usually occurring as a clone of a single organism or of a small number of organisms.

4. Refer to Figures 15 and 16.

5. CLED is a solid medium used in the isolation of gut and urinary pathogens including Salmonella, Escherichia coli and Proteus species. CLED contains lactose as a substrate and a dye called Bromothymol Blue which indicates changes in pH. Prior to inoculation, plates of CLED are pale green in colour because the pH of the plates is neutral. Bacteria such as E. coli that produce the enzyme β-galactosidase break down lactose through fermentation to produce a mixture of lactic and formic acid, making the pH acidic, so colonies and medium turn yellow (lactose positive). Lactose-negative bacteria, e.g. Salmonella poona, are unable to ferment lactose because they cannot produce β-galactosidase, and usually produce pale colonies on CLED. CLED can therefore be used to differentiate between naturally occurring gut organisms and gut pathogens.

6. From the results, we can conclude that the bacterium that fermented lactose was Escherichia coli and the non-fermenting bacterium was Salmonella poona.

7. Mannitol Salt Agar (MSA) is used as a selective and differential medium for isolating and identifying Staphylococcus aureus from clinical and non-clinical specimens. Mannitol Salt Agar contains the carbohydrate mannitol, 7.5% sodium chloride and the pH indicator phenol red. Phenol red is yellow below pH 6.8, red at pH 7.4 to 8.4 and pink above 8.4. The sodium chloride makes this medium selective for staphylococci, since most bacteria cannot survive such levels of salinity.

The pathogenic species of Staphylococcus ferment mannitol and thus produce acid, which turns the pH indicator yellow. Non-pathogenic staphylococcal species grow, but produce no colour change.

The formation of yellow halos surrounding the bacterial growth is presumptive evidence that the organism is a pathogenic Staphylococcus. Significant growth that produces no colour change is presumptive evidence of a non-pathogenic Staphylococcus. Staphylococci that do not ferment mannitol produce a purple or red halo around the colonies.

A viable count

A viable count is a method for estimating the number of bacterial cells in a specific volume of a culture. The method relies on the bacteria growing into colonies on a nutrient medium; the colonies become visible to the naked eye and can then be counted. For accurate results the total number of colonies must be between 30 and 300. Fewer than 30 colonies indicate the results are not statistically valid and are unreliable; more than 300 colonies often indicate overlapping colonies and imprecision in the count. To ensure an appropriate final figure for the total colony count, several dilutions are normally cultured. The viable count method is used by microbiologists when examining bacterial contamination of food and water to ensure that they are suitable for human consumption.

Serial Dilution

A serial dilution is a series of consecutive dilutions used to reduce a dense culture of cells to a more workable concentration. With each dilution the concentration of bacteria is reduced by a known factor, so by calculating the total dilution over the entire series the number of bacteria in the initial sample can be calculated. After dilution of the sample, the number of bacteria is estimated using a surface plate count, either the spread plate technique or the pour plate technique. Once incubated, the colonies are counted and an average is calculated. The number of viable bacteria per ml or per gram of the original sample is then calculated on the principle that one visible colony is the direct result of the growth of one single organism. However, bacteria are capable of clumping together, and a colony may therefore arise from a clump rather than a single cell. For that reason counts are expressed as colony-forming units (cfu) per ml or per gram, which is also why counts are estimations.
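The cfu arithmetic described above can be sketched in a few lines; the function name and the example figures below are illustrative, not measured values.

```python
def cfu_per_ml(colonies, dilution_factor, volume_plated_ml):
    """Estimate colony-forming units per ml of the original sample.

    Counts are only considered statistically valid when the plate
    holds between 30 and 300 colonies.
    """
    if not 30 <= colonies <= 300:
        raise ValueError("colony count outside the valid 30-300 range")
    # Each colony is assumed to arise from a single cfu; scale the
    # count back up by the dilution and by the volume plated.
    return colonies / (dilution_factor * volume_plated_ml)

# Example: 150 colonies grown from 0.1 ml of a 10^-6 dilution
print(cfu_per_ml(150, 1e-6, 0.1))  # roughly 1.5e9 cfu/ml
```

Dividing by the volume plated converts the per-plate count to a per-ml figure before the dilution is undone, which is why both appear in the denominator.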

Spread plate technique

The spread plate technique is used for viable plate counts, in which the total number of colony-forming units on a single plate is counted. The technique has many uses in microbiology: for example, it can be used to calculate the concentration of cells in the tube from which the sample was initially plated, and it is routinely used in enrichment, selection and screening experiments. However, there are some disadvantages: crowding of the bacterial colonies can make enumeration much more challenging.

Pour plate technique

The pour plate method is used to count the colony-forming bacteria present in a liquid sample. The pour plate has many advantages: it allows the growth and quantification of microaerophiles, as there is little oxygen at the surface of the agar, and identification of anaerobes, aerobes or facultative aerobes is much easier, as they are able to grow within the medium. However, there are a few disadvantages: the temperature of the medium needs to be tightly regulated. If the medium is too warm the microorganisms will die, and if it is too cold the agar will clump together, which can sometimes be mistaken for colonies.

Introduction

In microbiology, understanding the characteristics that bacteria possess is critical. To enable a full understanding of these characteristics, bacteria undergo simple tests, named primary tests, which can be used to establish whether the cells are Gram-positive or Gram-negative, whether the cells are rod- or coccus-shaped, and whether the bacteria are catalase positive or catalase negative.

The catalase test is a primary test used to detect the catalase enzyme through the decomposition of hydrogen peroxide, releasing oxygen and water as demonstrated by the equation below:

2 H2O2→2 H2O + O2

Hydrogen peroxide is produced by various bacteria as an oxidative product of the aerobic breakdown of sugars. However, it is highly toxic to bacteria and can lead to cell death. The catalase test serves many purposes, such as differentiating between the morphologically similar Enterococcus or Streptococcus (catalase negative) and Staphylococcus (catalase positive). The test is also valuable in differentiating between aerobic and obligate anaerobic bacteria, and can be used as an aid in the identification of Enterobacteriaceae.

The oxidase test is another biochemical primary test; it is used to determine whether bacteria produce cytochrome c oxidase, an enzyme of the bacterial electron transport chain.

Oxidase-positive bacteria possess cytochrome oxidase or indophenol oxidase, both of which catalyse the transport of electrons from donor compounds such as NADH to electron acceptors, usually oxygen. If present, the cytochrome c oxidase oxidises the reagent (tetramethyl-p-phenylenediamine) to indophenol, producing a purple colour as the end product. When the enzyme is not present, the reagent remains reduced and is colourless.

Organisms which are oxidase positive include Pseudomonas, Vibrio, Brucella, Pasteurella and Kingella. Organisms which are oxidase negative include Acinetobacter, staphylococci, streptococci and all Enterobacteriaceae.

Primary tests are helpful for understanding the initial characteristics bacteria possess. However, more advanced methods may be used to finalise identification to the level of genus and species, to enable treatment for patients and appropriate action to prevent further transmission of infection. Laboratories today rely on rapid identification kits which analyse the biochemical characteristics of bacteria; this is known as biotyping.

Rapid identification kits are used for the identification and differentiation of different bacteria. The ID32E is commonly used to identify members of the Enterobacteriaceae. There are two further kits: IDSTAPH, which is used to identify members of the Staphylococcaceae, while the IDSTREP strip is used to identify the Streptococcaceae. The kits consist of wells containing dried substrates such as sugars or amino acids. These dried substrates are reconstituted by adding a saline suspension of bacteria. The results are then read into a computer profile linked to identification software, from which the genera and species can be analysed and differentiated from each other.


The genocide of Darfur

How would you feel to be without a home, family, and basic needs? What about having to struggle everyday just to live your life? If that is not bad enough, imagine being in a constant state of danger. The genocide of Darfur is rooted in decades of conflict and has lasting effects on the community that have resulted in an unstable environment. “The Sudanese armed forces and Sudanese government-backed militia known as Janjaweed have been fighting two rebel groups in Darfur, the Sudanese Liberation Army/Movement (SLA/SLM) and the Justice and Equality Movement (JEM),”(www.international.ucla).

The first civil war ended in 1972 but broke out again in 1983. This is what really initiated the genocide. However, the genocide escalated and is generally credited as starting in February of 2003, and it was considered to be the first genocide of the 21st century. “The terrible genocide began after rebels, led mainly by non-Arab Muslim sedentary tribes, including the Fur and Zaghawa, from the region, rose against the government” (www.jww). “This genocide is the current mass slaughter and rape of Darfuri men, women, and children in western Sudan” (www.worldwithoutgenocide). Unrest and violence continue today. The group that is carrying out the genocide is the Janjaweed. They have destroyed those in Darfur by “burning villages, looting economic resources, polluting water sources, and murdering, raping, and torturing civilians” (www.worldwithoutgenocide).

Believe it or not, this genocide is still going on today. As a result, Darfur is now facing very great long-term challenges and will never be the same. There are millions of displaced people who depend on refugee camps. However, at this point these camps are not so much a source of refuge as a danger themselves, the cause being severe overcrowding (3). It is often unsafe for anyone to leave the camps. Women who would normally go in search of firewood cannot anymore, because they may be attacked and raped by the Janjaweed militias (www.hmd). The statistics of this genocide show how bad it really is. Since it began in 2003, it has driven over 360,000 Darfuri refugees into Chad, been the cause of death for over 400,000 people, and has affected 3 million people in some way (www.jjw). On top of that, more than 2.8 million people have been displaced (www.worldwithoutgenocide). An interview suggests that 61% of respondents had witnessed the killing of one of their family members. In addition, 400 of Darfur’s villages have been wiped out and completely destroyed (www.borgenproject). To show that this is a real problem, here is a personal experience: “Agnes Oswaha grew up as part of the ethnic Christian minority in Sudan’s volatile capital of Khartoum. In 1998, Agnes immigrated to the United States, specifically to Seattle. She has now become an outspoken advocate for action against the atrocities occurring in Darfur” (www.holocaustcenterseattle). Agnes has used her struggles to inspire others. She is a prime example that you can make something good out of something so devastating and wrong.

There are many help groups that are working to inform people about this problem. The two that I am going to highlight are the Darfur Women Action Group (DWAG) and the Save Darfur Coalition. The first, the Darfur Women Action Group (DWAG), is an anti-atrocities nonprofit organization that is led by women. They envision a world with justice for all, equal rights, and respect for human dignity. They provide the people of Darfur with access to tools that will allow them to oppose violence. This group also addresses massive human rights abuses and works with others to prevent future atrocities, all while promoting global peace. Along with that, they ask us to speak out and spread the word. Their ultimate goal is to bring this horrific situation to the attention of the world and end it for good (www.darfurwomenaction). The next help group is the Save Darfur Coalition. They have helped develop strategies and advocated for diplomacy to encourage peace. They have also helped secure the deployment of peacekeeping forces in Darfur. Because of them, there have been billions of dollars in U.S. funding for humanitarian support. Violence against women has been used as a weapon of genocide, and because of them, awareness of this issue in Congress has grown (www.pbs).

As Americans, we can do many things to stop this issue. First, we must put aside domestic politics and help people even if they are not part of our country. The growing genocide in Darfur is not a partisan issue but one that stretches across a wide variety of constituencies, or bodies of voters and supporters. Some of these include the religious, human rights, humanitarian, medical, and legal communities. All of these, and others, are advocating a forceful worldwide response to the crisis (www.wagingpeace).

The genocide of Darfur is atrocious. It is rooted in decades of conflict and has lasting effects on the community of Darfur. This conflict has resulted in an unstable environment for all those who belong to the country of Sudan. It has made ordinary people live in fear every day. Millions of people are affected, 2.8 million have been displaced, and 400,000 innocent people have been killed, all because of the actions of the Janjaweed. This genocide is an overall horrendous thing that is actually going on in the world around us. There is much that can be done to help, but can we, being in the good situations that we are in, take time out of our own lives to think about those who really need our help? Do we care enough to spend time and money on people we don’t even know? If we choose to do so, we could be making a huge difference in the lives of people. Even though they might live across the world from us and live very different lives, they are very similar to us in many ways.


Status of income groups and housing indicators

1. Introduction

Buying a house is often the biggest transaction a family makes in its lifetime. Furthermore, the economic, social, and physical properties of a neighborhood have short-term and long-term impacts on residents’ physical and psychological status (Ellen et al., 1997). Accordingly, inappropriate housing brings many health risks and can inflict adults, as well as children, with a variety of mental and physical disorders (Bratt, 2000; Kreiger & Higgens, 2002). Unstable housing conditions, moreover, lead to stress and thus have manifold negative impacts on people’s education and professions (Rothstein, 2000). Despite the importance of housing in human life, the provision of adequate and affordable housing for all is one of the persistent problems of human society, since almost half of the world’s population lives in poverty and about 600 to 800 million people reside in sub-standard houses (Datta & Jones, 2001). Despite poor housing in developing countries, there are no organizations and institutions to supply services and organize institutional development so as to strengthen the different classes of society (Anzorena, 1993; Arrossi et al., 1994). For example, 15% of people in Lagos, 51% in Delhi, 75% in Nairobi, and 85% in Lahore live in substandard housing. It has been estimated that thousands of low-income residents do not use safe piped water and are thus pushed to use infected or substandard water (Hardoy, Mitlin & Satterthwaite, 2001). For instance, 33% of people in Bangkok and 5 million in Kolkata do not have access to safe water, and 95% of people in Khartoum live without a sewage system. According to a report by the World Health Organization, the probability of death for children living in substandard settlements is 40 to 50 percent higher than for children in Europe and North America (Benton-Short and Short, 2008).
That is because where they live lacks security and essential infrastructure and facilities such as water, electricity and sewage; in addition, they are vulnerable to numerous risks (Brunn, Williams and Ziegler, 2003). In 2005, about 30 environmental disasters led to a death toll of almost 90 thousand people, a majority of whom were from poor countries and low-income groups (Chafe, 2006).

Planning in the housing sector in Iran lacks an efficient statistical system. Given the paradoxes, gaps and inconsistencies in the data and statistics of the housing sector, reaching a comprehensive and clear plan to address the problems of this sector is almost impossible. Lack of integration among the organizations responsible for collecting and arranging housing index information (the Statistical Center of Iran, the Central Bank, the Ministry of Housing and Urban Development, municipalities, etc.) should be considered a serious problem. Aiming at evaluating the status of income groups and housing indicators (such as the average built area, the average income, etc.) in the existing deciles, the present study has therefore estimated housing demand and evaluated the financial power of low-income groups in the city of Isfahan, so as to apply the results to accurate housing planning for the low-income groups of Isfahan.

2. Theoretical framework

Housing is the smallest component of a settlement and is the concrete representation of development. According to Williams (2000), cities embracing social justice are those with a greater share of high-density housing that provide services and facilities. Rappaport (1969) maintains that culture and the human understanding of the universe, together with lifestyle, have played a crucial role in housing and its spatial divisions. In Le Corbusier’s view, a house must respond to both the physical and spiritual needs of people (Yagi, 1987). Housing is the basic environment of the family: a safe, private place to rest away from the routines of work and school. According to Fletcher, home is a paradoxical ground of both tenderness and violence. Gaston Bachelard, in The Poetics of Space, called home an “atmosphere of happiness”, wherein rest, self-discovery, relaxation and maternity become important. According to Short (2006), housing is the nodal point of all dualities and paradoxes. Housing and housing planning have been analyzed from different perspectives. Development and growth-pole theory treat acute housing problems as transitional and as part of development programs (Shefa’at, 2006). On the contrary, dependency and counter-urbanization theories recognize inequality and the one-sided distribution of products from the margin to the center as the main reason for housing deterioration (Athari, 2003). From the economic viewpoint of the market, housing issues should be left to the market mechanism (Dojkam, 1994), and the housing needs of the market system should be provided by the private sector (Seifaldini, 1994). The government should also avoid spending funds on low-income housing (Chadwick, 1987).
The urban management approach, a very important orientation from the point of view of political economy, holds that wider social and economic contexts play a role in the formation of urban residential patterns. One of the most important parts of urban planning is the planning of housing development; economic factors such as the cost of living, employment bases and instability of income play a very important role in housing planning. Beside the economic factors, architectural style is the most determining factor in housing planning. Regional indigenous languages, stylistic trends, weather, geography, local customs, and other factors influence the development of housing planning and housing design. The five characteristics of housing are: the type of building, style, density, the size of the project, and location (Sendich, 2006). Housing planning should be designed in such a way that, in addition to adequate housing, basic ecological variables are also included (Inanloo, 2001). Governments often do housing planning at the national, regional and municipal levels so as to employ it as a technique to solve the housing problems of their citizens (Ziari & Dehghan, 2003). The fundamental goal of housing planning at the national level is to balance housing supply and demand with regard to its position in the macroeconomy (Sadeghi, 2003). In regional housing planning, supply and demand are evaluated at the regional level and the aim is to balance them. The difference between housing planning at the regional and national levels is that at the regional level the relation between housing and the macroeconomy is not considered; instead, the emphasis is on the economic potential within the regions (Zebardast, 2003). Local housing planning is conducted at the three scales of town, city and urban area. Housing planning can be approached in two different ways.
The first approach is the distribution of the goals and credits of national and regional plans to smaller geographic units of region, city or town. The second approach is to investigate the housing status at the local level, estimate the land needed for future housing development, and differentiate the land appropriately (Tofigh, 2003). Another approach is concerned with low-income housing and presents three programs: 1. programs that provide subsidies for rental housing, either individually or in complexes; 2. tax credits that result in the production of low-rent housing units; 3. supportive programs for affordable housing for the lower classes (Mills et al., 2006). Such a policy comes with tools such as tax deductibility, long-term loans, insurance and so forth. The UN addresses the housing of those in need through the Commission on Human Settlements in the form of the Habitat Program. In 1986, the UN codified the Global Shelter Strategy for the Homeless to the year 2000 with an empowerment approach. In 1992, Habitat 2 addressed the security of the right to housing, particularly for low-income groups. In 2001, at the special session of the UN General Assembly in New York, the need to address urban poverty and homelessness received serious consideration.

3. Methodology

This study is a basic-applied piece of research adopting a descriptive-analytical methodology. The geographic area of the research is the political and administrative area of the city of Isfahan in 2014. The variables of the research are the income deciles, developments in housing quantity, land and housing prices, the system of housing finance, the status of housing in the expenditure basket of low-income households, the Gini coefficient of housing costs, the effective demand for housing in the income deciles considering built area, and the access-to-housing index. The Statistical Centre of Iran provided the statistics, and the city of Isfahan provided the cost/income scheme. The methods used include the statistical technique of population deciles; for the financial power of the groups, the indirect method function was used.

4. Results

4.1. Changes in the quantity of housing in Isfahan

According to the statistics, population and urban growth were substantial in the decade from 1996 to 2006. The average population growth rate over the years 1996 to 2006 was 1.37%, the growth of households was 52.3%, and the growth of housing was 7.3% (Table 1).

According to the 2011 census, the population was about 1,908,968 people residing in 602,198 households; hence the average household size was 3.17 individuals. However, the number of housing units available in the same year for the 602,198 households of the city of Isfahan was not sufficient, and there was a housing shortage of 0.193 percent. But given the growth in the number of housing units by 2011, it can be stated that the 215,000-unit policy of the Mehr Housing Project has had a large impact on the number of housing units in the city of Isfahan (see Table 1).
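As a quick arithmetic check, the household-size and shortage figures quoted above follow directly from the census numbers reported in Table 1:

```python
population = 1_908_968   # 2011 census population of Isfahan
households = 602_198     # number of households
housing_units = 601_035  # residential units (Table 1)

household_size = population / households       # persons per household
shortage = households - housing_units          # units short of one per household
shortage_pct = 100 * shortage / housing_units  # shortage as % of existing stock

print(round(household_size, 2))  # 3.17
print(shortage)                  # 1163
print(round(shortage_pct, 3))    # 0.193
```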

Table (1): Changes in the quantity of housing in the city of Isfahan compared to its population changes (period 1996-2011)

Period | Population | Number of households | Household size | Typical residential units | Proportion of households to residential units | Population growth rate (%) | Urbanization rate (%) | Housing shortage (units) | Shortage as % of existing stock
1996 | 1310659 | 326581 | 4.03 | 325225 | 1.239 | 1.51 | 74 | 1356 | 0.416
2006 | 1642996 | 466760 | 3.52 | 458852 | 7.541 | 1.37 | 83 | 7908 | 1.723
2011 | 1908968 | 602198 | 3.17 | 601035 | 5.264 | 1.3 | 85 | 1163 | 0.193

Source: the researcher’s calculations, 2014

4.2. Evaluation of the price changes of land and housing in the city of Isfahan

The value of property (land, housing, and rent prices) is one of the main factors determining the quality and quantity of people’s housing. When housing and land become the playground of capital (as in today’s Iran), the tendency to own private housing increases, and this leads to an increase in demand. Considering the unstable status and the risks of other investment areas (such as manufacturing and agriculture), investing in the housing sector has always been seen as safer, and this has increased prices and widened the gap between effective demand and potential demand.

Over the period under evaluation, we find huge fluctuations in the value of residential land, both dilapidated and new, and an increase in housing rents, as presented in detail in Table 2.

Table (2): changes in prices of land, housing, and rents, 2003-2013

Year | Dilapidated residential building, price per m² (thousand rials) | Annual growth (%) | Residential unit, price per m² (thousand rials) | Annual growth (%) | Rent per m² (thousand rials) | Annual growth (%)
2003 | 2632 | – | 3007 | – | 13034 | –
2004 | 3562 | 35.3 | 3373 | 10.8 | 13177 | 1.08
2005 | 4035 | 11.7 | 4251 | 20.6 | 14582 | 9.6
2006 | 3839 | -5.1 | 4702 | 9.5 | 16313 | 10.6
2007 | 5706 | 32.7 | 8181 | 42.5 | 20975 | 22.2
2008 | 7278 | 21.5 | 8485 | 3.5 | 24600 | 17.5
2009 | 4929 | -47.6 | 8211 | -3.3 | 25195 | 18.2
2010 | 4612 | -6.8 | 8676 | 5.3 | 28333 | 11.07
2011 | 4978 | 7.3 | 9549 | 9.1 | 30075 | 5.7
2012 | 6332 | 10.2 | 12385 | 22.8 | 35809 | 16.01
2013 | 8571 | 26.1 | 16624 | 25.4 | 45261 | 20.8

Source: Statistical Center of Iran, Statistical Yearbook of Isfahan Province, 2003-2004; and author’s calculations, 2014

Considering the price per square meter of housing units in the city of Isfahan, we see that the market fluctuated greatly between 2004 and 2006 and fell immediately in 2007. But the greatest fluctuation was in 2009, when the price of dilapidated units dropped by 47.6%; the decrease continued in the following year, after which prices began to rise again. The outset of the Mehr Housing Project, the economic downturn in the investing countries (foreign participation sharply declined), and political issues can be assumed to be the main reasons for the price drop in 2009 and 2010.
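The growth columns of Table 2 can be reproduced with a one-line formula. One quirk worth noting: most entries appear to use the current year's price as the denominator rather than the previous year's, so the sketch below adopts that reading (prices per square metre of dilapidated buildings, in thousand rials, taken from Table 2).

```python
def growth_pct(previous, current):
    """Year-on-year change in percent, expressed against the
    current year's price, which appears to be the convention
    used by the growth columns of Table 2."""
    return 100 * (current - previous) / current

# Dilapidated residential buildings, Table 2
print(round(growth_pct(3562, 4035), 1))  # 2005: 11.7
print(round(growth_pct(4035, 3839), 1))  # 2006: -5.1
```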

However, during the period the price of housing units per square meter was more stable: except for 2009, when a 3.3% negative growth was experienced, prices never declined. One characteristic of the housing market during the research period was the steady increase in prices. Evaluation of the inflation indexes, the prices of production factors, and housing prices shows that the housing market in Isfahan has not been free from speculative intervention, and that the bulk of the increase in housing prices resulted from the rising prices of production factors and overall inflation.

An important point is that rent prices, unlike housing prices, increased continuously throughout the years under study.

Figure 1: changes in the prices of housing, land and rent during the period 2003-2013

Source: Statistical Center of Iran and author’s calculations, 2014

4.3. Study of the changes in the production of housing in the city of Isfahan

The increase in land prices, as one of the most important components of housing production, has created the grounds for a decrease in housing production on the one hand, and an increase in the building density factor on the other. The construction cost per square meter in the city of Isfahan is another indicator that can be objectively applied to the analysis of access to adequate housing. Of course, the cost of housing increases over time, but the pace of that increase is what matters. At the beginning of the period under study the cost of building one square meter of housing in the city of Isfahan was 430,000 IRR, while at the end of the period it was 3,102,000 IRR. Although changes in housing investment in the short term are affected by changes in the factors affecting demand, such as housing prices and the granting of loans, long-term factors such as land prices, construction costs and inflation also affect these changes. The cost of housing production had an upward trend throughout the period, but the pace of this growth varied. Much of the increase in construction cost was due to the rising prices of land, materials, and labor; part was also due to declining productivity.

Table 3: Changes in the cost of housing construction per square meter, 2001-2012 (one thousand IRR)

Index | 2001 | 2006 | 2008 | 2009 | 2010 | 2011 | 2012
Cost of one square meter of construction | 430 | 1308 | 1542 | 2030 | 2441 | 2582 | 3102
Growth vs previous year (%) | – | 204 | 17.8 | 31.6 | 20.2 | 5.7 | 20.1

Source: Statistical Center of Iran, the Central Bank of the Islamic Republic of Iran, 2001-2012 and author’s calculations

Chart 2: Changes in the cost of housing construction per square meter 2001-2012 (percent)

4.4. Evaluation of housing finance system in the city of Isfahan

4.4.1. Household savings

In the area of private housing, the greatest and most reliable source of housing finance is a household’s savings. A household’s savings are the part of its disposable income that is not used for family consumption. In this study, the savings level is calculated as the difference between household income and household expenditure (Table 4).

Table (4): the average of income, cost, and savings of an urban household over the years 2006-2012 (IRR)

Year | Expenditure | Income | Savings
2006 | 69059825 | 57289929 | -11769896
2009 | 88508931 | 74529939 | -13978992
2009 | 101319582 | 87730581 | -13589001
2010 | 114495202 | 93390015 | -21105187
2011 | 137279114 | 109217181 | -28061933
2012 | 157761405 | 145924872 | -11836533

Source: Statistical Yearbook of Isfahan Province and author’s calculations, 2014

During the study period, household savings were negative for most of the period. Accordingly, assuming the consumption pattern of households to be constant, household savings cannot be a source of funding for housing. It is possible, however, that by changing consumption patterns and lifestyles and thereby increasing savings, they could become such a source.
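The savings column of Table 4 is simply income minus expenditure, as stated above; a quick check on three of the reported rows (values in IRR):

```python
# (year, expenditure, income) for three of the years reported in Table 4
rows = [
    (2006, 69_059_825, 57_289_929),
    (2010, 114_495_202, 93_390_015),
    (2012, 157_761_405, 145_924_872),
]

# Savings = income - expenditure; negative values mean households
# spend more than they earn, matching the table's savings column.
savings = {year: income - cost for year, cost, income in rows}
print(savings)  # {2006: -11769896, 2010: -21105187, 2012: -11836533}
```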

4.4.2. Bank credits

Credit and banking facilities finance or guarantee the obligations of applicants based on the interest-free banking law. The methods of granting banking facilities include: loan, civic management, legal management, direct investment, partnership, forwards, installment sales, hire-purchase, and contract of farm letting.

Table (5): the amount of the grants by the banks of Isfahan Province to private sector based upon major economic sectors, 2001-2012

Index | 2001 | 2006 | 2008 | 2009 | 2010 | 2011 | 2012
Number of facilities and bank credits | 170128 | 384418 | 262110 | 160060 | 308439 | 567426 | 328695
Percentage allocated to housing per year | 28.67 | 35.21 | 16.5 | 9.49 | 12.18 | 21.45 | 12

Source: Statistical Yearbook of Isfahan Province and author’s calculations, 2014

In total, an annual average of 20 percent of bank credits has been allocated to the housing sector, which is a substantial amount. Therefore, considering the financial resources and potential and the absorption rate of bank deposits by the banks of the province, 20% of them can be considered a source of investment in the housing sector.

4.4.3. Bank development credits

The government’s development credits are the budget allotted annually, based on annual budget rules, for implementing development plans and for expanding current expenditure on the government’s economic and social plans, nationally and provincially. This budget is divided into three categories: general, social and economic affairs.

Table (6): Government credits as divided by budget seasons (2006-2012)

Year Sum General affairs Social affairs Economic affairs

Credit percentage Credit percentage Credit percentage

2006 8928397 6947286 77.8 556603 6.2 1424508 15.9

2008 5461355 1175093 21.5 1250856 22.9 3035406 55.5

2009 3177898 1866020 58.7 544190 17.1 767688 24.1

2010 3996923 2074964 51.9 700944 17.5 1221015 30.5

2011 4031126 2338934 58.01 605371 15.01 1086821 26.9

2012 2888659 2304195 79.7 161633 5.5 422831 14.6

Source: Statistical Yearbook of Isfahan Province and the author’s calculations, 2014

Table (7): Share of credits of the housing sector in social affairs program 2006-2012 (million rials)

Year Sum of credits Share of credits of the economic sector in the total credits Share of credits of the housing sector from the economic credits Share of credits of the housing sector in the total credits

Credit Percentage Credit Percentage Percentage

2006 8928397 1424508 15.9 513921 36.07 5.7

2008 5461355 3035406 55.5 675548 22.2 12.3

2009 3177898 767688 24.1 207621 27.04 6.5

2010 3996923 1221015 30.5 419433 34.3 10.4

2011 4031126 1086821 26.9 419154 38.5 10.3

2012 2888659 422831 14.6 126148 29.8 3.4

Source: Statistical Yearbook of Isfahan Province and the author’s calculations, 2014

Considering the results, it can be argued that the credits of the housing sector have varied over the years, ranging from 5 to 12 percent of the entire budget, which is a significant figure in its own right.

This may, of course, be due to the respective roles of the government and the private sector in housing investment, although in recent years the government’s role has become more serious with the emergence of plans such as Mehr Housing, the retrofitting plan, and the renovation of distressed areas. (Note that, as of 2006, the economic sector has always received the highest credits, and within the economic sector the housing and urban and rural development sectors received the greatest amounts.)

4.4.4. Determining the position of housing in the expenditure basket of low-income households of Isfahan city

To investigate housing costs in each income group, the study households were first ordered by income for the years 2005-2011. Households were then divided into ten equal groups (income Deciles). Next, based on the data on housing costs and the total food and non-food expenditure of each household, the housing costs and total food and non-food expenditure of each income Decile were calculated for the different years. Finally, the mean income, housing costs, and total food and non-food costs of the households in each Decile were examined. To present authentic and realistic analyses, all variables were deflated using the price index of the city of Isfahan at the fixed prices of 2005.
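The decile procedure described above can be sketched in code. The household figures below are synthetic and purely illustrative, not survey data (real microdata would first be deflated by the Isfahan price index):

```python
def income_deciles(households):
    """Sort households by income and split them into 10 equal groups (Deciles)."""
    ordered = sorted(households, key=lambda h: h["income"])
    size = len(ordered) // 10
    return [ordered[i * size:(i + 1) * size] for i in range(10)]

def mean_housing_cost(decile):
    """Mean housing cost of the households in one Decile."""
    return sum(h["housing_cost"] for h in decile) / len(decile)

# Example with 20 synthetic households (2 per Decile)
households = [{"income": 1000 * (i + 1), "housing_cost": 400 * (i + 1)}
              for i in range(20)]
deciles = income_deciles(households)
print([mean_housing_cost(d) for d in deciles])
```

The same grouping is reused for every index in the following sections (housing-cost shares, effective demand, accessibility), so the Decile boundaries only need to be computed once per survey year.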

Table (8): Mean housing cost of low-income households of Isfahan City, 2005-2011.

Year Mean cost Growth compared to previous year (%)

2005 14124382 –

2006 17433765 18.9

2007 21248483 17.9

2008 26704381 20.4

2009 26390670 -1.18

2010 28886146 8.6

2011 33238345 13.09

Source: Statistical Center of Iran, Plan for household cost and income of Isfahan city, 2011.

As displayed in Table 8, housing costs in Isfahan city show an increasing trend (except for 2009). The highest increase in housing costs took place in 2008 (20.4%) and the lowest in 2010 (8.6%).

In this regard, it should be noted that the government’s policies to establish stability and regulate market prices and prevent unduly increase have been very effective, such that the rate of price increase in relation to the previous year has been usually in the same price range. However, to present precise results of estimation of housing costs of low-income groups in Isfahan city, the results were analyzed in income Deciles (Table 9).

Table (9): Variation in the mean housing cost of urban households of Isfahan city, 2007-2011

Year Decile 1 Decile 2 Decile 3 Decile 4 Decile 5 Decile 6 Decile 7 Decile 8 Decile 9 Decile 10

2007-2011 25.14 28.2 28.9 25.7 32.5 33.68 33.3 35.58 34.33 36.1

Source: Statistical Center of Iran, Plan for household cost and income of Isfahan city, 2011

As can be seen, the enforcement of various economic policies in the housing sector and the balance between housing supply and demand in Isfahan city during the study period were such that the average growth in housing costs for high-income households was higher than for low-income households. The highest increase belonged to the 10th Decile (36.1%) and the lowest to the 4th Decile (25.7%).

In the following sections, to present precise results, the share of housing costs in the total costs of urban households will be analyzed in income Deciles (Table 10).

Table (10): Share of housing costs in the total costs of urban households of Isfahan, 2007-2011.

Year Decile 1 Decile 2 Decile 3 Decile 4 Decile 5 Decile 6 Decile 7 Decile 8 Decile 9 Decile 10

2007-2011 47.4 46.49 44.15 42.2 40.38 38.6 35.56 34.33 33.9 31.5

Source: Statistical Center of Iran, Plan for household cost and income of Isfahan city, 2011

According to the results, the share of housing costs in total costs is lower for high-income households (31.5%) than for low-income households (47.4%). Put otherwise, Isfahan urban households in the lower Deciles of society spend a large proportion of their total (food and non-food) expenditure on housing, whereas households in the upper Deciles devote a smaller proportion. Hence, in the upper income Deciles, the share of housing costs in total household costs is lower than in the lower Deciles of Isfahan city.

Diagram (3): Share of housing costs in the total costs of urban households in Isfahan city within the framework of income Deciles 2007-2011

4.4.5. Estimation of Gini coefficient of housing costs of households of Isfahan city

This index is usually used to investigate class differences and income distribution among the Deciles of a society. The closer its value is to 1, the more unequal the distribution; the closer it is to zero, the more equal the distribution.

Here, the Abunoori equation is used to calculate Gini coefficient.

Where ‘y’ stands for the upper limit of expenditure groups, f(y) is the relative cumulative frequency of households with expenditure up to ‘y’, and ‘u’ stands for regression error. Table 11 presents the values of Gini coefficient for the housing costs of households of Isfahan city.

Table (11): Gini coefficient of housing costs of urban households in Isfahan city

Year 2003 2004 2005 2006 2007 2008 2009 2010 2011

Gini coefficient 0.323 0.306 0.294 0.309 0.317 0.343 0.370 0.400 0.405

Source: Author’s calculations based on Plan for costs and income of urban households of the Province 2003-2011.

During the first years of the study period, the Gini coefficient for housing costs of urban households decreased, indicating reduced inequality in housing costs; as of 2006, however, the coefficient has followed an increasing trend and the gap has been widening. A decrease in this coefficient is not, per se, cause for optimism about the housing costs of lower-income groups: unless it is accompanied by reduced inequality in household income, it indicates an increase in the share of housing in household budgets and hence greater pressure on them. The Lorenz curve, obtained in this study by plotting households’ cumulative frequency against the cumulative percentage of housing costs, was used to further illustrate the degree of inequality. The farther the Lorenz curve lies from the line of equal distribution, the greater the inequality.
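The Abunoori regression itself is not reproduced here. As an illustration of what the coefficient measures, the following is a minimal sketch of the standard discrete Lorenz-curve computation of the Gini coefficient (an alternative to the regression method used in the study, shown only to make the index concrete):

```python
def gini(costs):
    """Gini coefficient from the discrete Lorenz curve (trapezoidal rule)."""
    xs = sorted(costs)
    n = len(xs)
    total = sum(xs)
    cum = 0.0       # running cumulative cost
    area = 0.0      # area under the Lorenz curve
    prev = 0.0      # previous cumulative share
    for x in xs:
        cum += x
        cur = cum / total
        area += (prev + cur) / (2 * n)  # trapezoid over one population step
        prev = cur
    return 1 - 2 * area  # 0 = perfect equality, 1 = maximal inequality

print(round(gini([1, 1, 1, 1]), 3))  # perfectly equal distribution -> 0.0
```

Applied to the per-household housing costs of each survey year, this reproduces the kind of series shown in Table 11.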

Diagram (4): Lorenz curve of the gap between housing costs in income Deciles of urban households in 2011.

4.4.6. Effective housing demand in income Deciles in terms of substructure area in Isfahan city

The following equation was used to estimate the effective demand of housing units in income Deciles of Isfahan urban households.

In the above equation:

Q stands for the amount of effective demand in square meters

CH represents housing costs of household

Bu stands for the substructure of the housing unit of household
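The equation itself is not reproduced above. A plausible reading, given the definitions, is that effective demand in square meters is the household’s housing expenditure divided by the prevailing market price per square meter; the sketch below follows that reading, with hypothetical figures:

```python
def effective_demand_m2(housing_cost, price_per_m2):
    """Square meters of housing a household's spending can command.

    housing_cost: annual housing expenditure of the household (CH), rials
    price_per_m2: market price of one square meter of housing, rials
    """
    return housing_cost / price_per_m2

# Hypothetical figures: a household spending 14,000,000 rials a year
# against a market price of 3,000,000 rials per square meter.
print(round(effective_demand_m2(14_000_000, 3_000_000), 1))  # -> 4.7
```

Under this reading, the decline in Table 12 reflects housing prices rising faster than the housing budgets of the lower Deciles.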

Table 12 presents in detail the average amount of effective demand in different years in each of the income groups in Isfahan city.

Table 12. Amount of effective demand in different years in each income Decile of Isfahan city (square meter)

Year

Decile 2003 2004 2005 2006 2007 2008 2009 2010 2011

Decile 1 4.6 7 4.5 6 5 4 2 1.3 1.9

Decile 2 7.5 12 8 7.8 5.5 4.5 3.7 2.4 2.5

Decile 3 9.4 14 8.5 9 5.6 5.6 5.4 3 3.8

Decile 4 11 15.5 10 10.1 6.6 7 6 5 4.8

Decile 5 13 17 12.4 12 7.8 8 8 4.9 5.5

Decile 6 15 21 13 14 10.5 9 9 5.6 6.7

Decile 7 17 26.2 14 17 12 10.1 11 1.7 6.4

Decile 8 21 28 9 33 14.8 10.6 17 8 7.4

Decile 9 26 31 23 39 26.2 13 19 12 9

Decile 10 49 45 41 45 31 23 25.6 24 17

Source: Author’s measurements based on the Plan for Costs and Income of Urban Households of the province, 2003-2011

Based on the combination of variations of the two factors of income of households of Isfahan city and variation of housing price in urban areas, the outcome of changes in the effective demand among different income Deciles of Isfahan city were presented. As displayed by Table 12, investigation of effective demand among income Deciles of the city indicates a wide gap in terms of affording housing between high-income groups and low-income groups of households of Isfahan city.

A more important point is that based on the results, effective demand for housing units had experienced a decrease in all income Deciles in the end years of the study period. Besides, with respect to the ability of effective demand among low-income groups, while Deciles 1 to 4 could afford 4 to 11 square meters of housing in 2003, these numbers reduced to 1.9 to 5 square meters in 2011.

Diagram (5): Amount of effective demand in the 4 lowest-income Deciles of Isfahan city

Diagram (6): Sum of effective housing demand across income Deciles of Isfahan city, 2003-2011

4.4.7. Housing accessibility index in different income groups of Isfahan

The housing accessibility index is obtained by dividing the price of one unit of the good by the consumer’s income per unit of time. It shows how many time periods the consumer has to work to obtain one unit of the intended commodity. Given that annual consumer income is used, and assuming the income is distributed equally across all days of the year, the accessibility index shows how many days of a household’s income are needed to buy one square meter of a housing unit.

Accordingly, households in the upper income Deciles can own housing by saving their income for fewer days than households in the lower Deciles. Of course, given the annual increase in the price of each square meter of housing, the number of days of income that must be saved to buy one square meter increased in all income Deciles. The results show that in 2003 a household in the lowest income Decile could afford one square meter of housing by saving its complete income for 75 days, while this number rose to 206 days by late 2011.
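A minimal sketch of the accessibility-index calculation described above. The annual income figure is hypothetical; the price per square meter is the 2011 figure cited in the conclusion:

```python
def days_to_buy_one_m2(price_per_m2, annual_income):
    """Days of fully saved income needed to buy one square meter of housing."""
    daily_income = annual_income / 365
    return price_per_m2 / daily_income

# Hypothetical low-income household: annual income 5,500,000 rials,
# price 3,102,000 rials per square meter (2011 figure cited in the text).
print(round(days_to_buy_one_m2(3_102_000, 5_500_000)))  # -> 206
```

The same function applied to each Decile’s mean income and each year’s price yields the grid of Table 13.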

Table (13): Number of days of household income that must be set aside to purchase one square meter of housing in Isfahan city, by income Decile, in different years.

Year

Decile 2003 2004 2005 2006 2007 2008 2009 2010 2011 Mean of period

Decile 1 75 65 81 94 119 122 135 175 206 107.2

Decile 2 68 58 52 77 80 93 103 143 129 80.3

Decile 3 63 53 42 56 63 81 96 121 104 67.9

Decile 4 58 47 37 43 55 67 82 101 88 57.7

Decile 5 50 41 32 40 48 52 63 80 76 50.4

Decile 6 49 36 28 34 41 45 50 73 66 42.2

Decile 7 43 30 25 30 34 40 43 65 57 39.9

Decile 8 34 25 19 27 31 34 33 55 48 30.6

Decile 9 29 19 16 22 22 28 28 34 39 23.7

Decile 10 10 13 13 14 15 17 17 19 23 13.7

Source: Author’s calculations based on Plan for Costs and Income of Urban Households of Isfahan, 2003-2011.

Besides, while in 2003 the highest-income Decile of Isfahan’s urban society needed to save 10 days’ income to afford one square meter, this number rose to 23 days at the end of the period. The important point is the wide gap between the high-income and low-income Deciles in the days of waiting (saving) required to obtain one square meter, which is 10.5 times between the 1st and 10th Deciles. This trend indicates increasing hardship and inability in the provision of housing.

Diagram (7): Number of days whose income is set aside to buy one square meter of housing unit.

As can be seen, on average, households in the three upper income Deciles of Isfahan can buy one square meter by spending less than one month’s savings of their income, while households in the three lower Deciles have to spend the savings of 65 days’ income to purchase one square meter.

Besides, the housing unit accessibility index has also been determined by income group for each year; Table 13 shows the trend continuing from 2003 to 2011.

In this regard, assuming that one-third of the income of households in each income group is saved for obtaining housing, the number of years required to obtain an average housing unit (75 square meters) in Isfahan city for each income group is as follows:
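The waiting-period calculation behind Table 14 can be sketched as follows, using the stated one-third saving assumption (the income figure is hypothetical; the price per square meter is the 2011 figure cited in the conclusion):

```python
def years_to_afford_unit(price_per_m2, annual_income,
                         unit_m2=75, saving_rate=1 / 3):
    """Years of saving a fraction of income to buy a unit of unit_m2 meters."""
    annual_saving = annual_income * saving_rate
    return (price_per_m2 * unit_m2) / annual_saving

# Hypothetical low-income household: annual income 6,800,000 rials,
# price 3,102,000 rials per square meter (2011 figure cited in the text).
print(round(years_to_afford_unit(3_102_000, 6_800_000), 1))  # -> 102.6
```

The waiting period scales linearly with the price-to-income ratio, which is why the tripling of that ratio over the period roughly triples the figures in every Decile.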

Table 14. Number of years needed for household income (waiting period) to buy a housing unit (75 square meters in average) as divided by income Deciles 2003-2011.

Year

Decile 2003 2004 2005 2006 2007 2008 2009 2010 2011

Decile 1 48.4 33 36 58 70 75 84 91 103

Decile 2 30 21 30 38 48 50 71 85 97

Decile 3 24 18.6 26.2 29 43 42.3 63.6 64 87

Decile 4 20.2 16.5 21.8 23.4 36 35.5 50.5 58.6 68.6

Decile 5 17.2 15 17.1 20 29.6 30.7 35.4 48 53

Decile 6 15 12 16.5 18.5 26 27.8 31 40.7 47.4

Decile 7 14 10.7 15 16 22.3 23.6 25.6 34 35

Decile 8 11.1 10 14 13.3 18 21 22.2 27.8 29.6

Decile 9 9.8 8.6 10.1 9.3 15 16.1 17.5 21 24

Decile 10 5.5 5 6 5 7.5 9.5 9.2 9.2 13

Source: Author’s calculations based on Plan for Costs and Income of Urban Households of Isfahan, 2003-2011.

In sum, investigation of the housing accessibility index in Isfahan city shows a substantial difference among the city’s income groups in affording housing. Not only do the low-income groups have to wait longer than other income groups to obtain housing, but the housing accessibility index has even worsened for the low-income groups over time. For instance, whereas in 2003 households of the first Decile could afford housing by saving for about 48 years, this number rose to 103 years in 2011, which means this group can never afford housing under current circumstances. Moreover, comparison of the beginning and end of the period shows that the waiting period for obtaining housing roughly tripled for all Deciles, although there were fluctuations along the way.

4.5. Conclusion

Generally, the formation of economic and spatial duality and diversity in cities and regions commenced and was gradually consolidated after the Industrial Revolution in Europe and, later, with the arrival of modernity in the peripheral countries. In Iran, this trend began in the early 20th century, and the increased urbanization it produced led to the accumulation of various problems such as poverty, homelessness, and poor housing in Iran’s cities. Faced with restrictions on horizontal development, Isfahan city, as one of the province capitals, has concentrated a large population of low-income groups. The geographical distribution of low-income groups in Isfahan city is such that most of them reside in dilapidated areas, including old quarters with or without monuments as well as neighborhoods of informal settlement.

The findings demonstrate that at the beginning of the study period the cost of constructing one square meter in Isfahan city was 430,000 rials, whereas by the end of the period this amount had risen to 3,102,000 rials. The greatest increase in housing costs occurred in 2008 (20.4%), and the lowest in 2010 (8.6%). Mean growth of housing costs for households of the upper Deciles was higher than for households of the lower Deciles: the greatest increase belonged to the 10th Decile (36.1%) and the lowest to the 4th Decile (25.7%). Generally, the Gini coefficient of housing costs for urban households decreased until 2005, and the gap has been widening as of 2006. Investigation of effective demand among the city’s income Deciles demonstrated a wide gap in the ability to obtain housing between the upper and lower income groups of Isfahan city. With respect to effective demand among low-income groups, while in 2003 Deciles 1 to 4 could afford 4 to 11 square meters of housing, these numbers decreased to 1.9 to 5 square meters in 2011. Based on the housing accessibility index, the lowest income Decile could afford one square meter of housing in 2003 by saving 75 days’ income; by the end of the study period (2011), this number had risen to 206 days. Moreover, while in 2003 the households of the highest-income Decile of Isfahan’s urban society needed to save 10 days’ income to afford one square meter, this number reached 23 days. The important point is the gap between the high-income and low-income Deciles in the expected (saving) days for obtaining one square meter, which was 10.5 times between the 1st and 10th Deciles. Furthermore, while in 2003 the households of the 1st Decile could afford their required housing by saving for about 48 years, this number rose to 103 years in 2011, which meant it was practically impossible for these households to afford housing.


Fracture union

Management of trauma has always been one of the surgical areas in which oral and maxillofacial surgeons have been involved over the years. The mandibular body is a parabola-shaped curved bone composed of external and internal cortical layers surrounding a central core of cancellous bone. The goals of treatment are to restore proper function by ensuring union of the fractured segments and re-establishing pre-injury strength, to restore any contour defect that might arise as a result of the injury, and to prevent infection at the fracture site. Since the time of Hippocrates, it has been advocated that immobilisation of fractures, to one degree or another, is advantageous to their eventual union. The type and extent of immobility vary with the form of treatment and may play an essential part in the overall result. In common fractures, a certain amount of time is required before bone healing can be expected to occur. This reasonable time may vary according to age, species, breed, bone involved, level of the fracture, and associated soft tissue injury.

Delayed union, by definition, is present when an adequate period has elapsed since the initial injury without bone union being achieved, taking the above variables into account. The fact that a bone is delayed in its union does not mean that it will become a nonunion. Classically, the stated reasons for delayed union are problems such as poor reduction, inadequate immobilisation, distraction, loss of blood supply, and infection. Inadequate reduction of a fracture, regardless of its cause, may be a prime reason for delayed union or nonunion, as it usually leads to instability or poor immobilisation. A poor reduction may also be caused by interposition of soft tissues in the fracture area, which may delay healing.

Nonunion is defined as the cessation of all reparative processes of healing without bony union. Since all of the factors discussed under delayed union usually occur to a more severe degree in nonunion, the differentiation between delayed union and nonunion is often based on radiographic criteria and time. In humans, failure to show any progressive change in radiographic appearance for at least three months after the period during which regular fracture union would be expected to have occurred is evidence of nonunion. Malunion is defined as healing of the bones in an abnormal position; malunions can be classified as functional or nonfunctional. Functional malunions are usually those with small deviations from the normal axes that do not incapacitate the patient. A minimum of nine months has to elapse since the initial injury, with no signs of healing for the final three months, for the diagnosis of fracture nonunion. There are a few different classification systems for nonunions, but they are most commonly divided into two categories: hypervascular nonunion and avascular nonunion. In hypervascular nonunions, also known as hypertrophic nonunions, the fracture ends are vascular and capable of biological activity; there is evidence of callus formation around the fracture site, thought to be a response to excessive micromotion at the fracture site. Avascular nonunions, also known as atrophic nonunions, are caused by avascularity, or inadequate blood supply to the fracture ends. There is no or minimal callus formation, and the fracture line remains visible. This type of nonunion requires biological enhancement in addition to adequate immobilisation to heal.

Treatment of mandibular fractures aims at achieving bony union and correct occlusion, preserving inferior alveolar (IAN) and mental nerve function, preventing malunion, and attaining optimal cosmesis. Rigid plate and screw fixation has the advantage of allowing the patient to return to function without the need for 4–6 weeks of IMF, but the success of rigid fixation depends upon accurate reduction. When adapting plates along Champy’s lines of osteosynthesis in the symphysis region, even with an arch bar applied to the teeth for proper occlusion, the bone fragments may still overlap at bony prominences and gaps will be present. To achieve the bone contact needed for healing, various devices and methods have been described for holding the fracture segments together: towel clamps, modified towel clamps, Synthes reduction forceps, orthodontic brackets, Allis forceps, manual reduction, elastic internal traction reduction, bone-holding forceps, the tension wire method, and vacuum splints; without these there is often a gap and an inability to fix the fracture with a miniplate intraoperatively. Proper alignment and reduction are essential for mastication, speech, and a normal range of oral motion.

Compression during plate fixation has been shown to aid the stability and healing of a fracture site. The primary mechanism is thought to be increased contact of the bony surfaces. Reduction forceps can hold large segments of bone together to increase surface contact while plate fixation is performed. An additional benefit of using reduction forceps is that a single operating surgeon can plate body fractures, because the forceps hold the fracture in reduction while the plates and screws are placed. Reduction gaps of more than 1 mm between fracture segments result in secondary healing, which involves callus formation and increases the risk of nonunion irrespective of the fixation method. Direct bone contact between the fracture segments promotes primary bone healing, which leads to earlier bone regrowth and stability across the fracture site. Gap healing takes place in stable or “quiet” gaps with a width greater than the 200-μm osteonal diameter. Ingrowth of vessels and mesenchymal cells starts after surgery; osteoblasts deposit osteoid on the fragment ends without osteoclastic resorption, and the gaps are filled exclusively with primarily formed, transversely oriented lamellar bone. Replacement is usually completed within 4 to 6 weeks. In a second stage, the transversely oriented bone lamellae are replaced by axially oriented osteons, a process referred to as Haversian remodelling. Clinical experience shows that fractures that are not adequately reduced are at higher risk for malunion, delayed union, nonunion, and infection, leading to further patient morbidity.

Studies by Choi et al., using silicone mandibular models, established the optimum position of the modified towel clamp–type reduction forceps relative to symphyseal and parasymphyseal fractures. Fractured models were reduced at three different horizontal levels: midway bisecting the mandible, 5 mm above midway, and 5 mm below midway. In addition, engagement holes were tested at distances of 10, 12, 14, and 16 mm from the fracture line. The models were heated to 130°C for 100 minutes and then cooled to room temperature, and stress patterns were evaluated using a polariscope. Optimal stress patterns (defined as those distributed over the entire fracture site) were noted when the reduction forceps were placed at the midway or 5 mm below midway and at least 12 mm from the fracture line for symphyseal or parasymphyseal fractures, and at least 16 mm for mandibular body fractures.

Shinohara et al. in 2006 used two modified reduction forceps for symphyseal and parasymphyseal fractures: one was applied at the inferior border and the other in the subapical zone of the anterior mandible, in order to reduce the lingual cortical bone sufficiently. In other clinical studies, reduction was achieved by using one clamp or forceps in the anterior and posterior regions of the mandible.

One study describes two monocortical holes drilled, each 10 mm from the fracture line (Žerdoner and Žajdela, 1998). A second study describes monocortical holes at approximately 12 mm from the fracture line (Kluszynski et al., 2007), at midway down the vertical height of the mandible. A third study describes either monocortical or bicortical holes depending on difficulties, which are not described in detail; in that study, a distance of 5-8 mm from the fracture at the inferior margin of the mandible was chosen (Rogers and Sargent, 2000).

Taglialatela Scafati et al. (2004) used elastic rubber bands stretched between screws placed on both sides of the fractured parts to reduce mandibular and orbito-maxillary fractures. Orthodontic rubber bands and two self-tapping monocortical titanium screws of 2 mm diameter and 9-13 mm length were used. The heads of the screws protruded about 5 mm, and their axes had to be perpendicular to the fracture line. The technique is similar in concept to other intraoperative methods of reduction used in orthopaedic or maxillofacial surgery, such as the tension band technique or the Tension Wire Method (TWM); elastic internal traction (EIT) utilises rubber bands tightened between monocortical screws placed on the fracture fragments.

In a 2009 technical note on modification of the elastic internal traction method for temporary inter-fragment reduction prior to internal fixation, Vikas and Terrence Lowe described a simple and effective modification of the elastic internal traction method previously described by Scafati et al. The modification utilises 2 mm AO monocortical screws and elastomeric orthodontic chain (EOC) instead of elastic bands; monocortical screws of 9–12 mm length are strategically placed to a depth of 4–5 mm, approximately 7 mm on either side of the fracture.

Based on studies by Smith et al. in 1933, a series of 10 x 1 cm ‘turns’ of the elastic should resist a displacing force of approximately 30-40 Newtons.

Degala and Gupta (2010) used comparable techniques for symphyseal, parasymphyseal, and body fractures. Titanium screws of 2 mm diameter and 8 mm length were tightened at a distance of 10-20 mm from the fracture line, with around 2 mm of screw length left above the bone to engage a 24 G wire loop. However, before applying this technique, they used IMF.

Rogers and Sargent in 2000 modified a standard towel clamp by bending the two ends approximately 10 degrees outward to prevent disengagement from the bone. Kallela et al. in 1996 modified standard AO reduction forceps by shortening the teeth and making notches at the ends to grip tightly in the drill holes. Shinohara et al. in 2006 used two modified reduction forceps: one positioned at the inferior border and the other in the neutral subapical zone.

Choi et al. in 2005 included two treatment groups (reduction forceps and IMF group) and used a scale of 1 to 3 to assess the accuracy of anatomic reduction in the radiographic image. A score of 1 indicated a poorly reduced fracture which required a second operation, while a score of 2 indicated a slight displacement but an acceptable occlusion. A score of 3 indicated a precise reduction. The reduction forceps group had a higher number of accurate anatomic alignments of the fractures than the IMF group.

New reduction forceps were developed by Choi et al. (2001, 2005) for mandibular angle fractures based on the unique anatomy of the oblique line and body: one end of the forceps was designed for positioning in the fragment medial to the oblique line, and the other end was placed in the distal fragment below the oblique line. The reduction-compression forceps of Scolozzi and Jaques (2008) was designed similarly to standard orthopaedic atraumatic grasping forceps.

Zerdoner and Zajdela in 1998 used a combination of self-cutting screws and a repositioning forceps with butterfly-shaped prongs. First, two screws are fastened on each side of the fracture line, and then the repositioning forceps is placed over the heads of the screws.

The use of reduction forceps has been known for many years in general trauma surgery, orthopaedic surgery, and plastic surgery. In OMF surgery, the dental occlusion was traditionally used to perform and check reduction of mandibular fractures. Notwithstanding this historical background, reduction forceps can be used in mandibular fractures as in any other fracture, as long as there is sufficient space and the fracture surface permits stable placement and withstands the forces created by the forceps.

George concluded by saying that the use of IMF for the management of angle fractures of the mandible is unnecessary provided there is a skilled assistant present to help manually reduce the fracture site for plating.

Other fracture reduction methods, such as traction wire or elastic tension on screws, are simple to use in the area of anterior mandibular fractures. These methods may cause a gap on the lingual side of the fracture as a result of the force exerted on the protruding screws (Ellis & Tharanon 1992, Cillo & Ellis 2007). This lingual gap can also occur with reduction forceps, but because they grip inside the bone, positioning them at least 8-10 mm from the fracture site should prevent it (Žerdoner & Žajdela 1998, Rogers & Sargent 2000, Kluszynski et al. 2015). Choi et al. (2003) even suggested that the tips of repositioning forceps should be placed at least 12 mm from each side of the fracture line in symphyseal and parasymphyseal fractures; in mandibular body fractures, an adequate stress pattern at the lingual site was found at least 16 mm from the fracture line.

Traditional wiring is a potential source of ‘needlestick’-type injury in the contaminated environment of the oral cavity and represents a health risk to surgeons and assistants. Conventional elastic or rubber rings may be difficult to place, and large numbers often need to be applied to prevent displacement of the fragments from the wafer. Such elastic exerts a pull of approximately 250-500 g per ‘turn’, depending on its specification (De Genova et al., 1985), and multiple ‘turns’ around anchorage points increase the firmness of retention. It is resilient and, even if displaced by stretching, tends to return the segments to their correct location in the splint or wafer, whereas wire ties, once pulled or inadequately tightened, become passive and allow free movement. The chain is relatively expensive, but its ease of use and the rapidity and flexibility with which it can be applied and retrieved save valuable operating time. It can be cold-sterilised if desired and is designed to retain its physical properties within the oral environment. On removal, unlike wires and elastic rings, which easily break or tear and may be difficult to retrieve from the mouth or wound, it can be recovered in one strip. The force exerted by elastic modules is known to decrease over time (Wong, 1976), and the strength decays by 17-70% over the first 24 h (Hershey & Reynolds, 1975; Brooks & Hershey, 1976), depending on the precise material and format of the chain and whether it has been pre-stretched (Young & Sandrik, 1979; Brantley et al., 1979).

The symphysis, parasymphysis, and mandibular body can be differentiated from other regions of the mandible by a ridge of compact cortical bone (the alveolar ridge) located on the cranial aspect that allows for tooth-bearing. This horizontally oriented tooth-bearing portion then becomes vertically oriented to form its articulation with the cranium; the change in orientation occurs at the mandibular angle, beyond which the mandible continues as the ramus and condyle. Along the entire course of the mandible are muscle attachments that place dynamic internal forces on the mandible. These muscles can be divided into two primary groups: the muscles of mastication and the suprahyoid muscles. The muscles of mastication include the medial and lateral pterygoids, the temporalis, and the masseter. Together these muscles aid in chewing by generating forces along the posterior aspects of the mandible (angle, ramus, coronoid process).

Furthermore, two of the muscles of mastication, the medial pterygoid and masseter muscles, combine to form the pterygomasseteric sling, which attaches at the mandibular angle. Conversely, the suprahyoid group (digastric, stylohyoid, mylohyoid, and geniohyoid) functions, in part, to depress the anterior mandible by applying forces to the mandibular symphysis, parasymphysis, and a portion of the body. Together, these muscle attachments place dynamic vectors of force on the mandible that, when the bone is in continuity, allow for proper mandibular function but, when it is in discontinuity, as occurs with mandible fractures, can potentially disrupt adequate fracture healing. Literature examining the relationship between the timing of surgery and subsequent outcomes has demonstrated no difference in infectious or nonunion complications between treatment within or after three days of injury, but did find that complications due to technical errors increased after this time. As a result, the authors commented that if surgery is to commence three or more days after the injury, technically accurate surgery is necessary to overcome factors such as tissue oedema and inflammation. In cases where a delay in treatment is necessary, consideration should be given to temporary closed fixation to reduce fracture mobility and patient pain.

Treating mandibular fractures involves providing the optimal environment for bony healing to occur: adequate blood supply, immobilisation, and proper alignment of fracture segments. Plate length is generally chosen to allow the placement of more than one screw on either side of the fracture to nullify the dynamic forces that act on the mandible. In ideal conditions, three screws are placed on either side of the fracture segments to guard against inadequate stabilisation, with screws placed at least several millimetres from the fracture site. Proper plate thickness is determined by the forces required to stabilise the fractured bone segments. Options for stabilisation can be divided into either load-sharing fixation or load-bearing fixation. Champy identified regions of the mandible that require only monocortical plates for stable fixation along the symphysis, parasymphysis, and angle of the mandible. These regions have subsequently been called Champy’s lines of tension, with the superior portion of the lines also referred to as the tension band of the mandible.

A study by George Dimitroulis in 2002 proposed post-reduction orthopantomograph (OPG) scoring criteria. Radiographs were assessed using a score from 1 to 3. A score of 3 was given for radiologic evidence of an accurate anatomic reduction at the fracture site. A score of 2 was assigned to reduced fractures that were slightly displaced but had a satisfactory occlusion. The lowest score of 1 was for poorly reduced fractures that required a second operation to correct the poor alignment and unacceptable occlusion.
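The scoring scheme can be sketched as a simple lookup; the category labels below are paraphrases for illustration, not Dimitroulis's exact wording:

```python
# Minimal sketch of the 1-3 post-reduction OPG score described above.
# Category names are paraphrased, not quoted from the original study.

def opg_score(reduction: str) -> int:
    """Map a radiographic reduction category to the 1-3 score."""
    scores = {
        "accurate anatomic reduction": 3,
        "slightly displaced, satisfactory occlusion": 2,
        "poor reduction requiring reoperation": 1,
    }
    return scores[reduction]
```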

The assessment of fracture healing is becoming more and more critical because of new approaches used in traumatology. Fracture healing is a complex biological process that follows specific regenerative patterns and involves changes in the expression of several thousand genes. Although there is still much to be learned to fully comprehend the pathways of bone regeneration, the overall course of both the anatomical and biochemical events has been thoroughly investigated. These efforts have provided a general understanding of how fracture healing occurs. Following the initial trauma, bone heals by either direct intramembranous healing or indirect fracture healing, which consists of both intramembranous and endochondral bone formation. The most common pathway is indirect healing, since direct bone healing requires an anatomical reduction and rigidly stable conditions, commonly obtained only by open reduction and internal fixation. However, when such conditions are achieved, the direct healing cascade allows the bone structure to regenerate anatomical lamellar bone and the Haversian systems immediately, without any remodelling steps.

It is helpful to think of the bone healing process in a stepwise fashion, even though in reality there is considerable overlap among the different stages. In general, the process can be divided into an initial haematoma formation step, followed by inflammation, proliferation and differentiation, and eventually ossification and remodelling. Shortly after a fracture occurs, vascular injury to the periosteum, endosteum, and surrounding soft tissue causes hypoperfusion in the adjacent area. The coagulation cascade is activated, which leads to the formation of a haematoma rich in platelets and macrophages. Cytokines from these macrophages initiate an inflammatory response, including increased blood flow and vascular permeability at the fracture site. Mechanical and molecular signals dictate what happens subsequently.
Fracture healing can occur either through direct intramembranous healing or, more commonly, through indirect or secondary healing. The significant difference between these two pathways is that direct healing requires absolute stability and a lack of interfragmentary motion, whereas in secondary healing the presence of interfragmentary motion at the fracture site creates relative stability. In secondary healing, this mechanical stimulation, in addition to the activity of inflammatory molecules, leads to the formation of a fracture callus followed by woven bone, which is eventually remodelled to lamellar bone. At a molecular level, the secretion of numerous cytokines and proinflammatory factors coordinates these complex pathways. Tumour necrosis factor-𝛼 (TNF-𝛼), interleukin-1 (IL-1), IL-6, IL-11, and IL-18 are responsible for the initial inflammatory response. Revascularisation, an essential component of bone healing, is achieved through different molecular pathways requiring either angiopoietin or vascular endothelial growth factors (VEGF). VEGF’s importance in the process of bone repair has been shown in many studies involving animal models. As the collagen matrix is invaded by blood vessels, mineralisation of the soft callus occurs through the activity of osteoblasts, resulting in a hard callus, which is remodelled into lamellar bone. Inhibition of angiogenesis in rats with closed femoral fractures completely prevented healing and resulted in atrophic non-unions.

If the gap between bone ends is less than 0.01 mm and interfragmentary strain is less than 2%, the fracture unites by so-called contact healing. Under these conditions, cutting cones are formed at the ends of the osteons closest to the fracture site. The tips of the cutting cones consist of osteoclasts, which cross the fracture line, generating longitudinal cavities at a rate of 50–100 μm/day. The primary bone structure is then gradually replaced by longitudinal revascularised osteons carrying osteoprogenitor cells, which differentiate into osteoblasts and produce lamellar bone on each surface of the gap. This lamellar bone, however, is laid down perpendicular to the long axis and is mechanically weak. This initial process takes approximately three to eight weeks, after which a secondary remodelling resembling the contact healing cascade with cutting cones takes place. Although not as extensive as endochondral remodelling, this phase is necessary to fully restore the anatomical and biomechanical properties of the bone. Direct bone healing was first described in radiographs after complete anatomical repositioning and stable fixation. Its features are a lack of callus formation and the disappearance of the fracture lines. Danis (1949) described this as soudure autogène (autogenous welding). Callus-free, direct bone healing requires what is often called “stability by interfragmentary compression” (Steinemann, 1983).
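As a back-of-envelope illustration of the quoted resorption rate, the time for a cutting cone to traverse a given distance is simply distance divided by rate:

```python
# Illustrative only: days for a cutting cone advancing at 50-100 um/day
# (the rate quoted above) to traverse a given distance of bone.

def traversal_days(distance_um: float, rate_um_per_day: float) -> float:
    if rate_um_per_day <= 0:
        raise ValueError("rate must be positive")
    return distance_um / rate_um_per_day

# Crossing 1 mm (1000 um) of bone:
slow = traversal_days(1000, 50)   # 20.0 days at the slow end of the range
fast = traversal_days(1000, 100)  # 10.0 days at the fast end
```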

Contact healing of the bone means healing of the fracture line after stable anatomical repositioning, with perfect interfragmentary contact and without the possibility of any cellular or vascular ingrowth. Cutting cones can cross this interface from one fragment to the other by remodelling the Haversian canal. Haversian canal remodelling is the primary mechanism for restoration of the internal architecture of compact bone. Contact healing takes place over the whole fracture line after perfect anatomical reduction, osteosynthesis, and mechanical rest; it is seen only directly beneath the miniplate. Gap healing takes place in stable or “quiet” gaps with a width greater than the 200-μm osteonal diameter. Ingrowth of vessels and mesenchymal cells starts after surgery. Osteoblasts deposit osteoid on the fragment ends without osteoclastic resorption, and the gaps are filled exclusively with primarily formed, transversely oriented lamellar bone. Replacement is usually completed within 4 to 6 weeks. In the second stage, the transversely oriented bone lamellae are replaced by axially oriented osteons, a process referred to as Haversian remodelling. After ten weeks, the fracture is replaced by newly reconstructed cortical bone. Gap healing is seen, for example, on the inner side of the mandible after miniplate osteosynthesis, far from the plate, and it plays a vital role in direct bone healing. Gaps are far more extensive than contact areas. Contact areas, on the other hand, are essential for stabilisation by interfragmentary friction and protect the gaps against deformation.

Ultrasound is unable to penetrate cortical bone, but there is evidence that it can detect callus formation before radiographic changes are visible. Moed conducted a larger prospective study which showed that ultrasound findings at six and nine weeks have a 97% positive predictive value (95% CI: 0.9-1) and 100% sensitivity in determining fracture healing in patients with acute tibial fractures treated with locked intramedullary nailing [52]. Time to determination of healing was also shorter using ultrasound (6.5 weeks) compared with a nineteen-week average for radiographic data (𝑃 < 0.001). Ultrasound has additional advantages over other imaging modalities, including lower cost, no ionising radiation exposure, and being noninvasive. However, its use and the interpretation of findings are thought to be highly dependent on operator expertise. Furthermore, thick layers of soft tissue can obscure an adequate view of bones with ultrasound. CT scans have shown some advantages over radiographs in the early detection of fracture healing in radius fractures; a limitation of CT is beam-hardening artefact from internal and external fixation. The authors of one study concluded that, when used to evaluate hindfoot arthrodeses, plain radiographs may be misleading; CT provides a more accurate assessment of healing, and they devised a new system to quantitate the fusion mass. In seven cases, MDCT led to operative treatment while on X-ray the treatment plan was undecided. Bhattacharyya et al. examined the evaluation of tibial fracture union by CT scan and determined an ICC of 0.89, which indicates excellent agreement. These studies suggest that CT scanning has high inter-observer reliability, better than that of plain radiography.
According to the authors, the interobserver reliability of MDCT is not higher than that of conventional radiographs for determining non-union; however, MDCT did lead to a more invasive approach in equivocal cases. MDCT provides superior diagnostic accuracy to panoramic radiography and has been shown to characterise mandibular fracture locations with greater certainty. Because of its high soft-tissue contrast, MDCT may reveal the relation of a bone fragment to adjacent muscle and the existence of foreign bodies in traumatic injury; in cases of severe soft-tissue injury, an MDCT is therefore mandatory. A 33% CT fusion ratio threshold could accurately discriminate between clinical stability and instability. By 36 weeks, healing was essentially complete according to both modalities, although there were still small gaps in the callus detectable on computed tomography but not on plain films. The authors concluded that computed tomography may be of value in the evaluation of fractures of long bones in those cases in which clinical examination and plain radiographs fail to give adequate information on the status of healing. A study in 2007 used PET with fluoride ion in the assessment of bone healing in rats with femur fractures. Fluoride ion deposits in regions of bone with high osteoblastic activity and a high rate of turnover, such as endosteal and periosteal surfaces. The authors concluded that fluoride ion PET could play an essential part in the assessment of fracture healing, given its ability to quantitatively monitor metabolic activity and provide an objective evaluation of fracture repair. 18F-fluoride PET imaging, an indicator of osteoblastic activity in vivo, can identify fracture nonunions at an early time point and may have a role in the longitudinal assessment of fracture healing. PET scans using 18F-FDG were not helpful in differentiating metabolic activity between successful and delayed bone healing. Moghaddam et al.
conducted a prospective cohort study to assess changes in serum concentrations of several serologic markers in normal and delayed fracture healing. They were able to show significantly lower levels of tartrate-resistant acid phosphatase 5b (TRACP 5b) and C-terminal cross-linking telopeptide of type I collagen (CTX) in patients who developed non-unions compared with patients with normal healing. TRACP 5b is a direct marker of osteoclastic activity and bone resorption, while CTX is an indirect measure of osteoclastic activity reflecting collagen degradation. Secretion of many of these cytokines and biologic markers is also influenced by other factors; for example, systemic levels of TGF-𝛽 were found to vary with smoking status, age, gender, diabetes mellitus, and chronic alcohol abuse at different time points. On plain radiography, it is difficult to distinguish between desired callus formation and pseudoarthrosis; CT is therefore an essential objective diagnostic tool for determining healing status. Computed tomography (CT) is superior to plain radiography in the assessment of union and in visualising the fracture in the presence of abundant callus or an overlying cast. There have been studies testing the accuracy and efficacy of computed tomography in the assessment of fracture union in clinical settings. Bhattacharyya et al. showed that computed tomography has 100% sensitivity for detecting nonunion; however, it is limited by a low specificity of 62%. Three of the 35 patients in the study were misdiagnosed with tibial nonunion based on CT findings but were found to be healed when the fracture was visualised during surgical intervention. Seventy-seven studies involved the use of clinical criteria to define fracture union. The most common clinical standards were the absence of pain or tenderness on weight-bearing (49%), the absence of pain or tenderness on palpation or physical examination (39%), and the ability to bear weight.
The most common radiographic definitions of fracture healing in studies using plain radiographs were bridging of the fracture site by callus, trabeculae, or bone (53%); bridging of the fracture site at three cortices (27%); and obliteration of the fracture line or cortical continuity (18%). The most commonly reported criteria for radiographic assessment of fracture union vary according to the location of the fracture. Two studies did not involve the use of plain radiographs to assess fracture healing. In the study in which computed tomography was used, union was defined as bridging of >25% of the cross-sectional area at the fracture site. In the study in which ultrasound was used, union was defined as the complete disappearance of the intramedullary nail on ultrasound imaging at six weeks, or progressive disappearance of the intramedullary nail with the formation of periosteal callus between six and nine weeks following treatment.
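Threshold definitions of this kind (the 33% CT fusion ratio quoted earlier, or bridging of >25% of the cross-sectional area) amount to a fraction-and-cutoff rule. A minimal sketch, with hypothetical names and units:

```python
# Illustrative only: classify union/stability from the fused fraction of the
# fracture cross-section. The threshold is a parameter because different
# studies quote different cutoffs (e.g. 0.25 for union, 0.33 for stability).

def fused_fraction(fused_area_mm2: float, total_area_mm2: float) -> float:
    """Fraction of the cross-sectional area bridged by bone."""
    if total_area_mm2 <= 0:
        raise ValueError("total area must be positive")
    return fused_area_mm2 / total_area_mm2

def meets_threshold(fused_area_mm2: float, total_area_mm2: float,
                    threshold: float = 0.33) -> bool:
    return fused_fraction(fused_area_mm2, total_area_mm2) >= threshold

meets_threshold(40, 100)  # True: 40% of the cross-section is fused
meets_threshold(20, 100)  # False: only 20% is fused
```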

Plain radiography is the most common way in which fracture union is assessed, and a substantial number of studies defined fracture union by radiographic parameters alone. Hammer et al. combined cortical continuity, loss of a visible fracture line, and callus size in a scale to assess fracture healing radiographically, but found conventional radiographic examination difficult to correlate with fracture stability and could not conclusively determine the state of union. In animal models, cortical continuity is a good predictor of fracture torsional strength, whereas callus area is not. Also, clinicians cannot reliably determine the strength of a healing fracture from a single set of radiographs and are unable to rank radiographs of healing fractures in order of strength. Therefore, we rely heavily on a radiographic method without proven validity for predicting bone strength in the assessment of fracture union.

Computed tomography eliminates the problem of overlapping structures, and axial sections allow bone bridging to be imaged directly. In fractures treated with external fixators, CT can determine the increasing amount of callus formation, which indicates favourable fracture healing. In this study, CT was correlated with fractionmetry in the assessment of fracture healing of tibial shaft fractures. The amount of callus was serially quantified and correlated with fractionmetry. After axial imaging, two equal slices at two points of the fracture were analysed 1, 6, 12, and 18 weeks after stabilisation. The principal fracture line was selected for longitudinal measurement because maximum callus formation was expected at that level. A rectangular region of interest was set within 200-2000 and 700-2000 HU, and the callus was measured automatically after marking the area of interest. Multiple measurements after repositioning the limb were performed to evaluate the short-term precision of the method. New callus formation on CT indicated stability of fracture healing after 12 weeks. Although the amount of callus is only an indirect indicator of fracture union, CT was able to assess fracture stability. ROC analysis showed that an increase of >50% in callus formation after 12 weeks indicated stability with a sensitivity of 100% and a specificity of 83%.
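The serial-callus criterion described above (an increase of more than 50% in measured callus by week 12) can be expressed as a percentage change plus a cutoff. A sketch under assumed names and units:

```python
# Illustrative only: percentage increase in callus between a baseline and a
# follow-up CT measurement, with the >50% cutoff from the ROC analysis above.
# Function names and mm^3 units are assumptions for this sketch.

def callus_increase_pct(baseline_mm3: float, followup_mm3: float) -> float:
    """Percentage change in callus volume relative to baseline."""
    if baseline_mm3 <= 0:
        raise ValueError("baseline must be positive")
    return 100.0 * (followup_mm3 - baseline_mm3) / baseline_mm3

def indicates_stability(baseline_mm3: float, followup_mm3: float) -> bool:
    """Apply the >50% increase criterion."""
    return callus_increase_pct(baseline_mm3, followup_mm3) > 50.0
```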

One hundred and twenty-three studies proved to be eligible. Union was defined by a combination of clinical and radiographic criteria in 62% of the reviews, by radiographic criteria only in 37%, and by clinical criteria only in 1%. Twelve different approaches were used to define fracture union clinically, and the most common rule was the absence of pain or tenderness at the fracture site during weight-bearing. In studies involving the use of plain radiographs, eleven different approaches were used to define fracture union, and the most common criterion was bridging of the fracture site.

Several factors predispose a patient to nonunion, including mechanical instability, loss of blood supply, and infection. Bone production has been estimated to occur within 15 weeks after osteotomy; complete bone healing may take 3–6 months or even longer. The reliability of conventional radiographs for determining fracture healing has been questioned in previous studies. CT has been used for monitoring bone production and fracture healing, and its advantages over conventional radiography in early fracture healing have been reported. To avoid stairstep artefacts in CT, isotropic or near-isotropic resolution is necessary and has become attainable with the introduction of MDCT scanners. Experimental studies have shown that MDCT reduces stairstep artefacts in multiplanar reconstruction compared with single-detector CT. From these data, the authors reconstructed thin axial slices with 50% overlap to yield near-isotropic voxels (almost identical voxel lengths in the x, y, and z axes) for further processing. This allows 2D and 3D reconstructions with a resolution similar to the source images, which form the basis of good-quality multiplanar reconstructions (MPRs). MPRs were reconstructed from contiguous axial slices ranging from 1.5 to 3 mm thick, depending on the anatomic region, orthogonal to the fracture or arthrodesis plane. Fusion of osseous structures was scored with a semiquantitative approach for both techniques (MDCT, digital radiography) as complete (c), partial (p), or no bone bridging (0). Definitions of fusion were as follows: complete, bone bridges with no gap; partial, some bone bridges with gaps between; and no bridging, no osseous bridges. Two musculoskeletal radiologists assessed all MDCT examinations and digital radiographs in a consensus interpretation.

Conventional tomography has been used for many years for the evaluation of the postoperative spine after posterior spinal arthrodesis. Thin-section tomography had a good correlation with surgery in the diagnosis of pseudarthrosis after fusions for scoliosis and was superior to anteroposterior, lateral, and oblique radiography. However, conventional tomography also suffers from certain disadvantages. The standard linear movement is mechanically easy to produce but gives rise to rather thick tomographic sections and a short blurring path (the length of the tomographic section); if thinner sections are required, more complex movements are needed. Because conventional tomography does not entirely blur out all distracting structures, the inherent lack of sharpness of the traditional tomographic image can make the assessment of bone bridges problematic. Thinner sections of conventional tomography, in particular, suffer from greater background blur. In dental radiology, the technique is called orthopantomography and is still widely used, although for practical reasons other conventional tomographic methods have mostly been replaced by CT, and the commercial availability of traditional tomography scanners has decreased substantially.

CT eliminates the blurring problem of conventional tomography and increases the perceptibility of fracture healing. MDCT has the advantage that the X-ray beam passes through the whole volume of the object in a short time and, when using isotropic or near-isotropic resolution, volumetric imaging with reconstruction of arbitrary MPRs is possible. The CT technique also has an essential impact on the severity of artefacts, with high milliampere-second and high peak-kilovoltage settings leading to a reduction of artefacts. With MDCT and low pitches, a high tube current is achieved, which is the basis for good-quality MPRs. With 16-MDCT scanners, the trend is first to reconstruct an overlapping secondary raw data set and then to obtain MPRs of axial, coronal, or arbitrarily angulated sections with a predefined section width. Bone bridges are high-contrast objects and are reliably detected on 1.5- to 3-mm-thick MPRs, depending on the anatomic region, with thicker MPRs preferable for the lumbar spine and somewhat thinner MPRs superior for the hand region.

The use of computed tomography (CT) scanning technology improves anatomical visualisation by offering three-dimensional reconstructions of bony architecture and has contributed to the assessment of healing in certain fractures. However, CT scans and plain radiographs detect mineralised bone formation, which is a late manifestation of the fracture healing process.

Moreover, CT scans demonstrate low specificity in the diagnosis of fracture nonunions in long bones.

MRI has not been useful in evaluating delayed fracture healing in the long bones. Scintigraphic studies with 99mTc-labelled compounds have also been used to assess carpal bones; however, multiple studies have demonstrated no significant differences in tracer uptake between tibia fractures that heal normally and those that form nonunions.

In our study, 48 patients were divided equally into two groups, group A (study group) and group B (control group), based on the reduction method, to compare the accuracy of reduction and bone healing of mandible fractures using elastic-guided reduction versus bone reduction forceps. Both groups were evaluated based on sex, type of mandible fracture (confined or non-confined), intermaxillary fixation method, type of reduction method used, postoperative OPG scores, CT scan assessment scores after six weeks for the lingual and buccal cortices and medullary bone, calculation of fusion percentage using CT scan, and the development of any late postoperative complications.

Based on sex, fracture type, intermaxillary fixation method, late postoperative complications, postoperative OPG assessment scores, CT scan assessment scores, and fusion percentage, the results were non-significant (P > 0.05). However, based on confined versus non-confined fracture type, the results were significant (P = 0.011), indicating that bone-holding forceps should be used for the non-confined type of fracture.


Biological Development

Biological Beginnings:

Each human cell has a nucleus containing chromosomes made up of deoxyribonucleic acid, or DNA. DNA contains the genetic information, or genes, used to make a human being. All typical cells in the human body have 46 chromosomes arranged in 23 pairs, with the exception of the egg and sperm. During cell reproduction, or mitosis, the cell’s nucleus duplicates itself and the cell divides, forming two new cells. Meiosis is a different type of cell division in which eggs and sperm, or gametes, are formed. During meiosis, a cell duplicates its chromosomes but then divides twice, resulting in cells with 23 unpaired chromosomes. During fertilization, an egg and sperm combine to form a single cell, the zygote, with information from both the mother and the father.

The combination of unpaired chromosomes leads to variability in the population, because no two people are exactly alike. A person’s genetic make-up is called their genotype; this is the basis for who you are at a cellular level. A person’s phenotype comprises their observable characteristics, and each genotype can lead to a variety of phenotypes. The genetic material we acquire contains dominant and recessive genes. For example, brown eyes are dominant over blue eyes, so if the genetic code for both is present, brown eyes will prevail.

Abnormalities can also be linked to the chromosomes and genes inherited from one’s parents; examples are Down syndrome, cystic fibrosis, and spina bifida. These occur when chromosomes or genes are missing, mutated, or damaged.

Genetically, I received my height and brown eyes from my mother, and my brown hair from both my parents. As far as I know, I don’t have any abnormalities linked to my chromosomes or genes that were passed down during my conception.

Prenatal/Post-partum:

The prenatal stage starts at conception, lasts approximately 266 days, and consists of three different periods: germinal, embryonic and fetal. This is an amazingly complex time that allows a single cell composed of information from both the mother and the father to create a new human being.

The first period of the prenatal stage occurs in the first two weeks after conception and is called the germinal period. During this time the zygote (or fertilized egg) begins its cell divisions, through mitosis, from a single cell to a blastocyst, which will eventually develop into the embryo and placenta. The germinal period ends when the blastocyst implants into the uterine wall.

The second period of prenatal development, occurring in weeks two through eight after conception, is called the embryonic period. During this time, the blastocyst from the first stage develops into the embryo. Within the embryo, three layers of cells form: the endoderm, which will develop into the digestive and respiratory systems; the ectoderm, which will become the nervous system, sensory receptors, and skin parts; and the mesoderm, which will become the circulatory system, bones, muscles, excretory system, and reproductive system. Organs also begin to form in this stage. During this stage, embryonic development is very susceptible to outside influences from the mother, such as alcohol consumption and cigarette usage.

The fetal period is the final and longest period of the prenatal stage, lasting from two months post conception until birth. During this period, continued growth and development occur. At approximately 26 weeks post conception, the fetus is considered viable, or able to survive outside the mother’s womb. If birth were to occur at 26 weeks, the baby would most likely need help breathing because the lungs are not fully mature, but all organ systems are developed and can function outside the mother.

Brain development during the prenatal period is also very complex and, if you think about it, an amazing thing. When a baby is born, it has approximately 100 billion neurons that handle information processing. There are four phases of brain development during the prenatal period: formation of the neural tube, neurogenesis, neural migration, and neural connectivity.

During the prenatal period, a wide variety of tests can be performed to monitor the development of the fetus. The extent to which testing is used depends on the doctor’s recommendations as well as the mother’s age, health, and potential genetic risk factors. One common test is the ultrasound, a non-invasive test used to monitor the growth of the fetus, look at structural development, and determine the sex of the baby. Other available tests that are more invasive and riskier for both the fetus and the mother include chorionic villus sampling, amniocentesis, fetal MRI, and maternal blood screening.

The mother’s womb is designed to protect the fetus during development. However, if a mother doesn’t take care of herself, it can have a negative impact on the developing fetus. A woman should avoid alcohol, nicotine, caffeine, drugs, and other teratogens, as well as x-rays and certain environmental pollutants, during the pregnancy. She should also have good nutrition during the pregnancy, as the fetus relies solely on the mother for its nutrients during development. Along with good nutrition, extra vitamins are recommended during pregnancy, the main one being folic acid. Emotional health is also very important: higher degrees of anxiety and stress can be harmful to the fetus and have long-term effects on the child.

The birth of a child marks the transition from the prenatal to the post-partum stage, which lasts approximately six weeks, or until a mother's body is back to her pre-pregnancy state. During this time a woman may be sleep deprived due to the demands of the baby and of caring for other family members. There are also hormonal changes a woman experiences, as well as the uterus returning to its normal size. Emotional adjustments occur during this stage as well. It is common for women to experience the post-partum blues, in which they feel depressed. These feelings can come and go, and usually disappear within a couple of weeks. If major depression persists beyond this time, it is referred to as postpartum depression, and it is important for a woman to get treatment to protect herself and her baby.

My prenatal development and delivery were fairly uneventful for my mother. The only complication during her pregnancy was low iron levels, which would cause her to pass out. Once she started on iron pills, the problem was eliminated. Since her pregnancy was in the early 1970s, it wasn't common for any testing or ultrasounds to occur unless there were major complications. As my mom said, you get pregnant and have a baby. After I was born, my mom said that she had no complications from post-partum depression or the baby blues.

Infancy:

Infancy is the period of time between birth and two years of age. During this time, extraordinary growth and development occur following a cephalocaudal pattern (top down) and a proximodistal pattern (center of body to extremities). A baby can see before it speaks and move its arms before its fingers. An infant's height increases by approximately 40 percent by the age of 1. By age 2, a child is nearly one-fifth of its adult weight and half its adult height. Infants require a great deal of sleep, averaging 12.8 hours a day during this period. The sleep an infant gets can have an impact on their cognitive functions later in life, such as improved executive function (good sleep) or language delays (poor sleep).

Proper nutrition during this period is also imperative for infant development. Breastfeeding an infant exclusively during the first six months of life provides many benefits to both the infant and the mother, including appropriate weight gain for the infant and a reduced risk of ovarian cancer for the mother. However, both breastfeeding and bottle feeding are appropriate options for the baby. As the infant gets older, appropriate amounts of fruits and vegetables are important for development, as is limiting junk food.

Motor skill development is thought to follow the dynamic systems theory, in which the infant assembles skills based on perceptions and actions. For example, if an infant wants a toy, he needs to learn how to reach for that toy and grasp it. An infant is born with reflexes, which allow them to adapt to their environment before they have learned anything, such as the rooting and sucking reflexes for eating. Some of these reflexes are specific to this age; others, such as blinking of the eyes, are permanent throughout life. Gross motor skills are the next major skills an infant develops. These involve the large muscle groups and include holding their head up, sitting, standing, and pulling themselves up on furniture. During the first year of life, motor skills give the infant increasing independence, while the second year is key to honing the skills they have learned. Fine motor skills develop after gross motor skills. These include activities such as grasping a spoon and picking up food off of a high-chair tray.

Infant senses are not fully developed at birth. Visual acuity comparable to an adult's occurs by about 6 months of age. A fetus can hear in the womb, but the ability to distinguish loudness and pitch develops during infancy. Other senses, such as taste and smell, are present at birth, but preferences develop throughout infancy.

Jean Piaget's theory of cognitive development is one that is widely used. This theory stresses that children construct their own understanding of their surroundings, instead of information simply being given to them. The first stage of Piaget's theory is the sensorimotor stage, in which infants coordinate their senses with the motor skills they are developing. Some research suggests that Piaget's theories may need to be modified. For example, Elizabeth Spelke endorses a core knowledge approach, in which she believes that infants are born with some innate knowledge systems that allow them to navigate the world into which they are born.

Language development also begins during this stage, and all infants follow a similar pattern. The first sounds from birth are crying, cooing, and babbling, which are all forms of language. First words are usually spoken by about 13 months, with children usually speaking two-word sentences by about two years. Language skills can be influenced by both biological and environmental factors.

An infant displays emotion very early in life. During the first six months you can see surprise, joy, anger, sadness, and fear. Later in infancy you will also see jealousy, empathy, embarrassment, pride, shame, and guilt. These later emotions require thought, which is why they don't develop until after the age of 1. Crying can indicate three different things in an infant: the basic cry (typically related to hunger), the anger cry, and the pain cry. A baby's smile can also mean different things, such as a reflexive smile or a social smile. Fear is an emotion seen early in a baby's life; often-discussed examples are stranger anxiety and separation protest.

There are three classifications of child temperament proposed by Chess and Thomas: the easy child, the difficult child, and the slow-to-warm-up child. These temperaments can be influenced by biology, gender, culture, and parenting styles. Other personality traits developed in this period include trust, a developing sense of self, and independence. Erik Erikson's first stage of development, trust vs. mistrust, occurs within the first year of life. The concept of trust vs. mistrust is seen throughout the development of a person and is not limited to this age group. The second year of life corresponds to Erikson's stage of autonomy vs. shame and doubt. As infants develop their skills, they need to be able to exercise them independently, or feelings of shame and doubt develop. The development of autonomy during infancy and the toddler years can lead to greater autonomy during the adolescent years.

Social interactions occur with infants as early as 2 months of age, when they learn to recognize the facial expressions of their caregivers. They show interest in other infants as early as 6 months of age, but this interest increases greatly as they reach their second birthday. Locomotion plays a big part in this interaction, allowing the child to independently explore their surroundings and the people around them. There are many theories of attachment. Freud believed attachment is based on oral fulfillment, forming with the person, typically the mother, who feeds the infant. Harlow, based on his experiments with wire and cloth surrogate monkeys, concluded that attachment is based on contact comfort. Erikson's view goes back to the trust vs. mistrust stage discussed earlier.

As a new baby is brought into a family, the dynamic of the household changes. There is a rebalancing of social, parental, and career responsibilities, and the freedom that existed before the baby is no longer there. Parents need to decide whether a parent stays home to take care of the child or the child is placed into a daycare setting. Parental leave allows a parent to stay home with their child for a period of time after the birth, but then requires the child to be placed in some type of child care setting. Unfortunately, the quality of child care varies greatly; typically, the higher the quality, the higher the price tag. Parents need to be advocates for their child and monitor the quality of care they are receiving, regardless of the setting. Research has shown little difference in outcomes between children placed in child care and those cared for by a full-time parent.

As an infant, I was a bottle-fed baby. My mother was able to be home with me full time, so I was not exposed to outside childcare settings. Unfortunately for my parents, I was very colicky until I was about 6 weeks old. This was very stressful for my parents as they were adjusting to life as a family with a new baby. After the colic ended, I was a very happy, easy baby when I wasn't sick. I developed febrile seizures at about 7 months of age, and they lasted until about age 2, when I was put on phenobarbital to control them. I talked and walked at a very young age (~9 months). I was very trusting of everyone and had no attachment issues. I was happy to play by myself if no one was around, but if company was over, my parents said I always wanted to be in the middle of the action; I was especially fond of adult interactions.

Early childhood:

The next developmental stage is early childhood, which lasts from around ages 3 to 5. During this stage, height and weight gains slow from the infancy stage, but a child still grows about 2 1/2 inches and gains 5-7 pounds per year. The brain continues to develop by combining maturation with external experiences. The size of the brain doesn't increase dramatically during this or subsequent periods, but the local patterns within the brain do. The most rapid growth occurs in the prefrontal cortex, which is key in planning and organization as well as paying attention to new tasks. The growth during this phase is caused by an increase in the number and size of dendrites as well as continuing myelination.

Gross motor skills continue to increase, with children being able to walk easily as well as beginning to hop, skip, and climb. Fine motor skills continue to improve as well, with children being able to build towers of blocks, complete puzzles, and write their name.

Nutrition is an important aspect of early childhood. Obesity is a growing health problem at this age: children are being fed diets that are high in fat and low in nutritional value, and they are eating out more than they have historically. Parents need to focus on better nutrition and more exercise for their children, as childhood obesity has a strong correlation with obesity later in life.

Piaget's preoperational stage, which lasts from ages 2 to 7, is the second stage in his theory of development. During this stage, children begin to represent things with words, images, and drawings. They are egocentric and hold magical beliefs. The stage is divided into the symbolic function substage (ages 2-4) and the intuitive thought substage (ages 4-7). In the symbolic function substage, the child is able to scribble designs that represent objects and can engage in pretend play; they are limited in this substage by egocentrism and animism. In the intuitive thought substage, the child begins to use primitive reasoning and is curious. During this time, memory increases, as does the child's attention span.

Language development during this phase is substantial. A child goes from two-word utterances, to multiple-word combinations, to complex sentences. They begin to understand the phonology and morphology of language, and start to apply the rules of syntax and semantics. The foundation for literacy also begins during this stage; using books with preschoolers provides a solid foundation for later success.

There are many early childhood education options available to parents. One option is the child centered kindergarten which focuses on the whole child. The Montessori approach allows the children more freedom to explore and the teacher is a facilitator rather than an instructor. There are also government funded programs such as Project Head Start available for low-income families to give their children the experience they need before starting elementary school.

Erikson's stage of development for early childhood is initiative vs. guilt. In this stage, the child has begun to develop an understanding of who they are, but also begins to discover who they will become. Children of this age usually describe themselves in concrete terms, but some begin to use logical and emotional descriptors. Children also begin to perceive others in terms of psychological traits. During this stage, children become more aware of their own emotions, begin to understand others' emotions and how they relate to them, and begin to regulate their emotions.

Moral development also begins during this stage. Freud describes the child developing the superego, the moral element of personality, during this period. Piaget said children go through two distinct stages of moral reasoning: 1) heteronomous morality and 2) autonomous morality. In the first, the child thinks that rules are unchangeable and judges an action by its consequence, not the intention behind it. The autonomous thinker considers the intention as well as the consequence.

Gender identity and roles begin to play a factor during this stage. Social influences on gender roles provide a basis for how children think. This can come through imitation of what they see their parents doing or through observation of what they see around them. Parental and peer influences on modeled behavior are apparent. Group size, age, interaction in same-sex groups, and gender composition are all important aspects of peer relations and influences.

Parenting styles vary widely. Diana Baumrind describes four parenting styles in our book: authoritarian, authoritative, neglectful, and indulgent. She shows a correlation between the different parenting styles and children's behaviors.

Play is important in a child's cognitive and socioemotional development. Play has been considered the child's work by both Piaget and Vygotsky, as it allows a child to learn new skills in a relaxed way. Make-believe play is an excellent way for children to increase their cognitive ability, including creative thought. There are many ways a child can play, including sensorimotor and practice play, pretense/symbolic play, constructive play, and games. Screen time is becoming more of a concern in today's world: screens can be good for teaching, but can also be distracting and disruptive if screen time is not limited.

As a young child, I was very curious about things and loved to play pretend. I attended preschool for two years, which aided in my cognitive development. My parents said I was able to read and do age-advanced puzzles by the time I was 3. I was able to regulate my emotions and understand the emotions of others. My parents utilized an authoritarian style of discipline when I was younger; being the first child, they wanted their kids to be perfect. This relaxed as my siblings came along and as we got older.

Middle/late childhood:

During this period, children maintain slow, consistent physical growth. They grow 2-3 inches per year until about age 11, and gain about 5-7 pounds per year. Growth of the skeletal and muscular systems is the main contributor to this weight gain.

The brain volume stabilizes by the end of this stage, but changes in its structures continue to occur. During this stage there is synaptic pruning, in which areas of the brain that are used less frequently lose connections, while other areas increase their number of connections. This increase is seen in the prefrontal cortex, which orchestrates the function of many other brain regions.

Both gross and fine motor skills continue to be refined. Children are able to ride a bike, swim, and skip rope; they can tie their shoes, hammer a nail, use a pencil, and reverse numbers less often. Boys usually outperform girls in gross motor skills, while girls outperform boys in fine motor skills. Exercise continues to be an area of concern at this age, as children are not getting the exercise they need. Studies have shown that aerobic exercise helps not only with weight, but also with attention, memory, thinking and behavior, and creativity.

Obesity is a continued health concern for this age group, as it can lead to medical problems such as hypertension, diabetes, and elevated cholesterol levels. Cancer is the second leading cause of death for children in this age group; the most common childhood cancer is leukemia.

Disabilities are often discovered during this time as many don’t show up until a child is in a school setting. There are learning disabilities, such as dyslexia, dysgraphia and dyscalculia; attention deficit hyperactivity disorder (ADHD), and autism spectrum disorders, such as autistic disorder and Asperger syndrome. Schools today are better equipped to handle children with these disabilities to help them receive the education they need.

This stage of development, as described by Piaget's cognitive development theory, is the concrete operational stage. The child in this stage can reason logically, as long as the reasoning can be applied to concrete examples. In addition, they can utilize conservation, classification, seriation, and transitivity.

Long-term memory increases during this stage, in part due to the child's growing knowledge of particular subjects. Children are able to think more critically and creatively during this period, and their metacognition increases. Along with these abilities, self-control, working memory, and flexibility are all indicators of school readiness and success.

Changes occur during this stage in how a child's mental vocabulary is organized. Children improve their logical reasoning and analytical abilities, and develop greater metalinguistic awareness, or knowledge about language. Reading foundations are important during this stage. Two approaches currently in use are the whole-language approach and the phonics approach: the whole-language approach teaches children to recognize words or whole sentences, while the phonics approach teaches children to translate written symbols into sounds.

During this stage, children begin to better understand themselves and are able to describe themselves using psychological characteristics and in reference to social groups. High self-esteem and a positive self-concept are important for this age group; low self-esteem has been correlated with obesity, depression, anxiety, and other problems.

Erikson's fourth stage of development, industry vs. inferiority, appears in this stage. Industry refers to work: children want to know how things are made and how they work. Parents who dismiss this interest can create a sense of inferiority in their children.

Emotional development during this stage involves the child becoming more self-regulated in their reactions. They understand what led up to an emotional reaction, can hide negative reactions, and can demonstrate genuine empathy. They are also learning coping strategies to deal with stress. Moral development also continues during this stage, as proposed by Kohlberg's six stages of moral development.

Gender stereotypes are prevalent in this developmental phase. They revolve around the physical, cognitive, and socioemotional development of a child.

At this stage of life, parents are usually less involved with their children, although they continue to remain an important part of their development. They become more of a manager, helping the child learn the rights and wrongs of their behaviors. If there is a secure attachment between parent and child, the stress and anxiety involved in this phase are lessened.

Friendships are important during this stage of a child's life. Friends are typically similar to the child in terms of age, sex, and attitudes toward school. School brings a new set of obligations to children. As with the younger age group, there are different approaches to schooling at this stage. A constructivist approach focuses on the learner and has individuals construct their own knowledge, while a direct instruction approach is more structured and teacher centered. Accountability in the schools is enforced through standardized testing. Poverty plays a role in children's learning, often creating barriers for the student, including parents with low expectations, inability to help with homework, or inability to pay for educational materials.

My parents said that by this age I was able to reason logically with them and in my day-to-day life. I remained curious about what things were and how they worked. My mom told me about a test I took for an accelerated learning program (ULE) in my elementary school. I missed one question: I couldn't answer what a wheelbarrow was. After that, my mom said I was interested in learning what wheelbarrows were and what they were used for. The ULE program helped me satisfy my curiosity above and beyond what was taught in school by providing additional learning opportunities.

Adolescence:

Adolescence lasts from about 12 to 18 years of age. The primary physical change during adolescence is the start of puberty, a brain-neuroendocrine process that provides stimulation for the rapid physical changes that take place. This is when a child takes on adult physical characteristics, such as voice changes and height/weight growth for males, and breast development and the onset of menstruation for females. Females typically enter puberty two years before males. The process is hormonally driven and includes actions of the hypothalamus and pituitary gland. During this time, adolescents are preoccupied with their body image, as their bodies are rapidly changing. Females are typically more dissatisfied with their bodies than males; however, body image becomes more positive for both genders as they end the adolescent period.

Brain development during this time includes significant structural changes. The corpus callosum thickens, improving the adolescent's ability to process information. The prefrontal lobes continue to develop, increasing reasoning, decision making, and self-control. The limbic system, specifically the amygdala, is completely developed by this stage.

This stage also marks a time of sexual exploration: forming a sense of sexual identity, managing sexual feelings, and developing intimate relationships. Most adolescents are not emotionally prepared for sexual experiences, which can lead to high-risk sexual behavior. Contraceptive use is not prevalent in this age group, even though it can lessen or eliminate the risk of sexually transmitted diseases and unwanted pregnancy. Teen pregnancy, while reduced from years past, is still too high. Sex education continues to be a topic of discussion as to what is most appropriate for schools: abstinence-only instruction or education that emphasizes contraceptive knowledge.

Health during this stage of development is a concern because bad health habits learned here can lead to death in early adult life. Obesity due to poor nutrition and lack of exercise remains a consistent theme. Sleep is also important for this age group, as most adolescents report getting less than 8 hours of sleep per night. Substance use is also seen in this age group. Another health concern is eating disorders, including both anorexia and bulimia; these disorders can take over a person's life due to distorted body image.

Piaget’s final stage of cognitive development occurs during this stage – the formal operational stage. Adolescents are not bound by concrete thoughts or experiences during this stage. They can think abstractly, idealistically, and logically.

Executive function is one of the most important cognitive changes that occurs in this stage. It involves an adolescent's ability to engage in goal-directed behavior and to exercise self-control.

The transition from elementary school to junior high school can be very stressful for adolescents, as it occurs during a period when many physical changes (puberty) are happening at the same time. This can create stress and worry for the child.

Erikson's fifth developmental stage, corresponding to this period in life, is identity vs. identity confusion. This stage is aided by a psychosocial moratorium, the gap between adolescence and adulthood during which a person is relatively free of responsibility and can determine their true identity. This is the path one takes toward adult maturity. Crisis during this stage is a period in which a person is exploring alternatives, while commitment is a personal investment in an identity. It is believed that while identity is explored during this stage, finalization does not occur until early adulthood, after continued review.

Parents take on a managerial role during this stage, monitoring the choices that are made regarding friends, activities, and academic efforts. Higher rates of parental monitoring lead to lower rates of alcohol and drug use. The adolescent's need for autonomy can be hard for a parent to accept; parents feel like the child is "slipping away" from them. There are also gender differences in how much autonomy is granted, with males receiving more autonomy than females. Conflict escalates during early adolescence, but then lessens toward the end of the stage.

Friendships during this stage are often fewer, but more intimate, than in younger years and take on an important role in meeting social needs. Positive friendships are associated with positive outcomes, including lower rates of substance abuse, risky sexual behavior, bullying, and victimization. Peer pressure at this stage in life is high, with adolescents conforming more to peer pressure when they are uncertain about their social identity. Cliques and crowds emerge and play a more important role during this stage of development. Dating and romantic relationships begin to evolve. Juvenile delinquency, marked by illegal behaviors, also emerges as a problem; it can be due to several factors, including lower socioeconomic status, sibling relationships, peer relationships, and parental monitoring. Depression and suicide also increase during this stage of life.

During this stage of my life, I was very goal oriented, more so academically than socially. I chose to take higher level classes that weren’t required and continued to work with a program that allowed me to do projects outside of school. During this time, I began to think about what direction my life would take. I decided that I would attend college to major in pharmacy, a decision that would later be reviewed and changed.

Early adulthood:

Becoming an adult involves a lengthy transition. Early adulthood occurs from 18 to 25 years of age. During this time, individuals are still trying to figure out "who" they are, exploring career paths, determining their identity, and understanding what kind of lifestyle they want to live. Early adulthood is characterized by five key features, as explained by Jeffrey Arnett: identity exploration, instability, self-focus, feeling in-between, and the age of possibilities, a time when people can transform their lives. In the US, entry into adulthood is primarily marked by holding a permanent, full-time job; other countries consider marriage the marker for adulthood. Just as going from elementary school to middle school causes stress in adolescents, the transition from high school to college can evoke the same emotions.

Peak physical performance is often reached between the ages of 19 and 26. As physical performance begins to decline, body fat increases and hearing begins to decline in the latter part of early adulthood. Health during early adulthood is subpar: although most people know what is required to be healthy, many fail to apply this information to themselves. The bad habits started during adolescence increase in early adulthood, including inactivity, poor diet, obesity, sleep deprivation, and substance abuse. These lifestyles, along with poor health, also have an impact on life satisfaction. Obesity continues to be a problem in this developmental stage; losing weight is best achieved with a combined diet and exercise program rather than diet alone. Exercise can help prevent diseases such as heart disease and diabetes, and can also improve mental health, having been shown effective in reducing depression. Alcohol use peaks around 21-22 years of age and appears to decline by the mid-twenties. Binge drinking and extreme binge drinking are concerns on college campuses, and can lead to missed classes, physical injuries, police interactions, and unprotected sex.

Sexual activity increases in emerging adulthood, with most people having experienced sexual intercourse by the time they are 25. Casual sex is common during this development stage involving “hook-ups” or “friends with benefits”.

Piaget's stages of development ended with the formal operational thought discussed in the adolescent stage; he believed this stage covers adults as well. Some theorists believe that formal operational thought is not fully achieved until adulthood. An additional stage has been proposed for young adults: post-formal thought, which is reflective, relativistic, contextual, provisional, realistic, and influenced by emotion.

Careers and work are an important theme in early adulthood. During this time, individuals work to determine what career they want to pursue, often by choosing a college major. By the end of this developmental stage, most people have completed their training and are entering the work force to begin their career. Determining one's purpose can help ensure that the correct field of study and career choice is made. Work defines a person through their financial standing, housing, how they spend their time, friendships, and health. Early jobs can sometimes be considered "survival jobs" that are in place just until the "career job" is obtained.

Erikson's sixth stage of development, which occurs during early adulthood, is intimacy vs. isolation. Intimacy, as described by Erikson, is finding oneself while losing oneself in another person, and it requires a commitment to that person. Balancing intimacy and independence is challenging. Love can take on multiple forms in adulthood. Romantic love, or passionate love, is the type of love seen early in a relationship; sexual desire is its most important ingredient. Affectionate love, or compassionate love, is when someone desires to have the other person near and has a deep, caring affection for them, typically a more mature love relationship. Consummate love involves passion, intimacy, and commitment, and is the strongest of all types of love.

Adult lifestyles today are anything but conventional. Many adults choose to live alone, cohabitate, or live with a partner of the same sex, in addition to the conventional married lifestyle. Divorce rates remain high in the US, with most divorces occurring early in the course of the marriage. Divorced adults have higher rates of depression, anxiety, suicide, alcoholism, and mortality. Adults who remarry usually do so within three years of their divorce, with men remarrying sooner than women. Making a marriage work takes a great deal of commitment from both parties. John Gottman identified several principles that help make a marriage successful: establishing love maps, nurturing fondness and admiration, turning toward each other instead of away, letting the partner influence you, and creating shared meaning. In addition, a deep friendship, respect for each other, and embracing the commitment that has been made will help a marriage last.

During early adulthood, many become parents for the first time. Sometimes this is well planned; other times it is a complete surprise. Parenting is often a hybrid of techniques that one's own parents used and one's own interpretation of what is useful. The average age at which individuals have their first child is increasing, and the number of children they choose to have is declining, partly because many women want to establish their careers before becoming mothers. As a result, parents are often more mature and able to handle situations more appropriately, may have more income, and fathers are more involved in child rearing; however, children also spend more time in supplemental care than when mothers stayed home to provide child care.

During early adulthood, I went to college, decided that a pharmacy major wasn't for me, and ended up obtaining a degree in microbiology with a minor in chemistry. I met my first husband during college, and we married a couple of months before I graduated. After graduation we had a child, and eventually we divorced. I think the stress of going straight from college to marriage to having a family took a toll on us. We were able to remain civil and co-parent our son even though we could not make our marriage work. The first few years after our divorce were very hard, as a single mom trying to establish a career and provide for our child. Thankfully I had a huge support system in my parents and siblings, who got us through the tough times. About 10 years later I met my now-husband and was able to find again the intimacy that was needed in my life. We have both brought children from previous relationships into our marriage and have also had two children together. This has created some conflict of its own, but we work through it all together. I feel that we are much better equipped and more mature as parents of our younger children than we were when our older ones were little.


Human identification using palm print images

CHAPTER ONE

INTRODUCTION

1.1. Background

The term "identification" refers to the act or process of establishing or recognising the identity of a person or thing: treating a thing as identical with another, recognising or establishing someone as a particular person, or representing, regarding, or treating things as the same or identical.

Automated human identification is one of the most essential and challenging tasks in meeting the growing demand for stringent security. The use of physiological and/or behavioural characteristics of people, i.e., biometrics, has been widely employed in the identification of criminals and has matured into an essential tool for law-enforcement departments. Biometrics-based automated human identification is now highly popular in a wide range of civilian applications and has become an effective alternative to traditional (password- or token-based) identification systems. Human hands are easy to present for imaging and can reveal a variety of information; palmprint research has therefore attracted considerable attention for civilian and forensic use. However, like most popular biometrics (e.g., fingerprint, iris, face), the palmprint is also vulnerable to sensor-level spoof attacks. Remote imaging with a high-resolution camera can be used to reveal critical palmprint detail for possible spoofing and impersonation; extrinsic biometric features are therefore expected to be more vulnerable to spoofing with modest effort. In summary, the easy accessibility of these extrinsic biometric traits also raises concerns about privacy and security. On the other hand, intrinsic biometric characteristics (e.g., DNA, vessel structures) require much greater effort to capture without a person's knowledge and are consequently harder to forge. In civilian applications, however, it is also important for a biometric trait to ensure high collectability while the user interacts with the device. In this context, palm-vein recognition has emerged as a promising option for personal identification, with the advantage of easy, contactless collection.

Biometrics is authentication using biological data, and it is a powerful method of authentication. The general purpose of biometrics is to distinguish people from one another using features that cannot be copied or imitated. There is less risk than with other methods because people cannot change, lose, or forget their physical properties. The use of these features, defined as biometric measures, in authentication is based on an international standard established by INCITS (the InterNational Committee for Information Technology Standards).

In recent years, a considerable amount of work has been done on automatically distinguishing patterns. Some of the patterns studied are characters, symbols, pictures, sound waves, and electrocardiograms. Computerized identification is usually applied to problems whose complexity makes them difficult to handle through manual calculation or human evaluation. In the template approach, a template is defined for each pattern class, and the set of templates is stored in memory in the form of a database. An unknown pattern is compared against each class template, and classification is based on a previously determined mapping criterion or similarity criterion. Comparing a few extracted features is faster than comparing the complete pattern, and most of the time it yields the more accurate result. For this reason, the pattern recognition process is examined in two separate phases: feature extraction and classification.

As shown in Picture 1.2, feature extraction makes measurements on the pattern and turns the results into a feature vector. These features may vary considerably depending on the nature of the problem, and their importance and costs may differ. For this reason, features should be selected both to distinguish the classes from each other and to keep costs low.
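As a concrete sketch of this two-phase view, the toy Python below turns a small binary pattern into a feature vector. The three features used (ink density, top-half share, left-half share) are illustrative choices for this sketch, not features taken from the text.

```python
def extract_features(pattern):
    # Turn a 2-D binary pattern (list of rows of 0/1) into a short feature
    # vector.  The three features -- ink density, share of ink in the top
    # half, share of ink in the left half -- are illustrative only.
    rows, cols = len(pattern), len(pattern[0])
    total = sum(sum(r) for r in pattern)
    density = total / (rows * cols)
    top = sum(sum(r) for r in pattern[: rows // 2])
    left = sum(r[c] for r in pattern for c in range(cols // 2))
    return [density, top / max(total, 1), left / max(total, 1)]
```

The classification phase then operates on such vectors rather than on the raw pattern, which is exactly why the two phases are studied separately.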

Features are different for every pattern recognition problem.

Based on the properties extracted in the feature-extraction stage, the classification stage decides which class the given object belongs to. Although feature extraction differs from one pattern recognition problem to another, classifiers can be grouped into specific categories [6].

Template matching is the most common classification method. In this method, each pixel of the image is used as a feature. Classification is done by comparing the input image to all the class templates. The comparison yields a similarity measure between the input and each template: pixels of the input image that agree with the template increase the degree of similarity, while pixels that disagree reduce it. After all templates have been compared, the class of the template giving the highest similarity grade is selected. Structural classification techniques use structural features and decision rules to classify patterns; for example, the line types, holes, and slopes in characters are structural properties, and rule-based classification is performed using these extracted features. Many pattern recognition systems are based on mathematical foundations to reduce misclassification. These systems are pixel-based and use structural features; examples include Gabor features, contour properties, gradient properties, and histograms. As classifiers, discriminant-function classifiers, Bayesian classifiers, and artificial neural networks can be used [1].
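The pixel-based similarity grade described above can be sketched in a few lines of Python. The +1/-1 scoring is one simple way to realize "agreeing pixels raise the grade, disagreeing pixels lower it" and is an assumption of this sketch, not an exact formula from the text.

```python
def template_similarity(image, template):
    # Agreeing pixels add +1 to the similarity grade, disagreeing pixels
    # subtract 1, following the pixel-based comparison described above.
    return sum(
        1 if a == b else -1
        for row_i, row_t in zip(image, template)
        for a, b in zip(row_i, row_t)
    )

def match_template(image, class_templates):
    # Select the class whose stored template yields the highest grade.
    return max(class_templates,
               key=lambda c: template_similarity(image, class_templates[c]))
```

With a dictionary mapping class names to templates, `match_template` returns the winning class after all templates have been compared.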

In its simplest terms, image processing requires tools to digitize and manipulate images, and two important input-output devices: the image digitizer and the image display. Due to the inherent nature of these devices, images do not form a direct source for computer analysis. Since computers work with numeric values rather than with image data, an image is transformed into numeric format before processing begins. Picture 1.1 shows how a numeric array can represent a physical image: the image is divided into small regions called "picture elements," or "pixels." The rectangular sampling grid, the most common subdivision scheme, is also shown in Picture 1.1. In the digital image, the value assigned to each pixel gives the brightness of that point.

This conversion process is called digitization, and it is illustrated in Picture 1.2. The brightness of each pixel is sampled and quantized, producing a number that represents the brightness or darkness of the image at that point. When this has been done for all pixels, the image is represented by a rectangular array of integers. Each pixel has a location or address (row and column number) and an integer value called its gray level. This array of numeric data is now ready for processing on a computer. Picture 1.3 shows the digitized form of a continuous image.
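A minimal illustration of this representation: a digital image as a rectangular array of gray levels addressed by row and column, with a simple threshold applied. The values and the 0-255 scale here are illustrative.

```python
# A digitized image is a rectangular array of gray levels: each pixel is
# addressed by (row, column) and holds an integer brightness value, here
# on the common 0 (black) to 255 (white) scale.
image = [
    [  0,  40,  80, 120],
    [ 40,  80, 120, 160],
    [ 80, 120, 160, 200],
]
n_rows, n_cols = len(image), len(image[0])
gray_level = image[1][2]   # gray level of the pixel at row 1, column 2
# A thresholded (binary) view, as used later for hand-shape extraction:
binary = [[1 if g >= 100 else 0 for g in row] for row in image]
```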

1.2. Human Identity

Human beings cannot live without systems of meaning. Our primary impulse is the impulse to find and create meaning. But just as important, human beings cannot exist without an identity and often human identity is tied closely to the systems of meaning that people create. The systems of meaning are how people express their identity.

There are many elements that shape identity: family, community, ethnicity, nationality, religion, philosophy, science, and occupation. For much of history, human identity was oriented to small bands of extended families with belief systems that validated that lifestyle. With the movement toward domestication and state formation, along with the larger communities of such states, the boundaries of human identity widened. But the small-band mentality has persisted over subsequent millennia and is still evident even within modern states, in the form of ethnic divisions, religious differences, occupation, social status, and even organizational membership [1].

The presence and origin of the small band mentality can be explained in terms of the inherited animal brain and its primitive drives. Animal life from the earliest time developed an existence of small groups of extended family members. This existence was shaped by the base drives to separate from others, to exclude outsiders, and to dominate or destroy them as competitors for resources. This in-group thinking and response was hardwired into the animal brain which has continued to influence the human brain. Unfortunately, small band mentality has long had a powerful influence on the creation of systems of meaning and the creation of human identity. People have long identified themselves in terms of some localized ethnic group, religion or nation in opposition to others who are not members of their group. This has led to the exclusion of outsiders and crusades to dominate or destroy them as enemies.

More recent discoveries about human origins and development confirm the early Greek intuition. We now know that we are all descended from a common hominid ancestor (the East African origin hypothesis). Race is now viewed as a human construct with little if any basis in biology; so-called racial differences amount to nothing of real biological distinction. One scientist has even said that, genetically, racial features are of no more importance than sunburn.

This information points to the fact that we are all descended from Africans. In the great migrations out of Africa, some early hominids moved to Europe and endured millennia of reduced sunlight, which led to a redistribution of melanin in their skin. They still possessed the same amount of melanin as darker-skinned people, but it was not as visible.

All this is to say that the human race is indeed one family. And modern human identity and meaning must be widened to include this fact. The small band mentality of our past which focuses human identity on some limited subset of the human race has always led to the creation of division, barriers, opposition and conflict between people. It is an animal view of human identity.

But we are no longer animals. We are now human and we need to overcome the animal tendency to separate from others, to exclude them, and to view them as outsiders or enemies to be dominated or destroyed.

It is also useful to note here how tightly many people tie their identity to the system of meaning that they adopt (their belief system or viewpoint). Consequently, any challenge to their system of meaning will produce an aggressive defensive reaction. The system may contain outdated ideas that ought to be challenged and discarded but because it comprises the identity of those who hold it, they will view any challenge as an attack on their very selves and this produces the survival response or reaction. Attacks on the self (self-identity) are viewed as attacks on personal survival and will evoke the aggressive animal defense. In this reaction we see the amygdala overruling the cortex.

This defensive reaction as an attempt to protect the self helps explain in part why people continue to hold on to outdated ideas and systems of belief/meaning. The ideas may not make rational sense to more objective outside viewers but to those who hold them, they make sense in terms of the dominant themes of their overall system.

It is true that we can't live without meaning or identity, and our identity is often defined by our systems of meaning. This tendency to tie our identity too tightly to our systems of meaning calls for a caution: human meaning and identity should not be placed in an object such as a system of meaning, an ideology, an occupation, a state, a movement, an ethnicity, or an organization. Our identity and our search for meaning should be focused on the process of becoming human. This orients us to ongoing development and advance; we then remain open to making changes as new information comes along. It is about the human self as a dynamic process, not a rigid and unchanging object.

So from our point of view, identity is used to mean the condition of being a specified person or the condition of being oneself and not another. It clusters with the terms personality and individualism, and less fashionably, “soul”.

Figure 1.1: Human Identity by face

1.3. Palm Print

Palm print recognition resembles fingerprint matching in nature: both biometric systems are based on personal information represented by the patterns of friction-ridge lines. Statistical analyses by FBI officials indicate that palm print identification is a biometric system complementary to the more popular fingerprint recognition systems: roughly 70% of the latent prints left behind at crime scenes are from fingers and 30% are from palms. Because of limited processing capability and the absence of live-scan technologies, automated palm print recognition algorithms have historically run more slowly than fingerprint recognition algorithms. Since 1994 there has been growing interest in systems that use fingerprint and palm print identification together. Palm print identification is based on the massive amount of information found in the friction ridges, as in fingerprints. A palm print, like a fingerprint, consists of dark lines representing the high, ridged portions of the friction skin and white lines representing the valleys between these ridges. Palm print recognition technology uses some of these characteristics.

The algorithms used for palm print detection and verification are similar to those used in fingerprint recognition. They are basically correlation-based, feature-point (minutiae) based, or ridge-based. Correlation-based matching overlays two palm print images to find corresponding lines in both; feature-based matching determines the location and orientation of specific feature points in the palm image and compares this information. Ridge-based matching uses the geometric characteristics of the ridges, together with texture analysis, in addition to feature-point analysis, when classifying the palm print.

Correlation-based algorithms work faster than the other techniques but are less tolerant of distortions and rotational variance in the image. Feature-point algorithms require high-quality images and do not exploit the textural or visual qualities of the palm. Finally, ridge-based algorithms require a high-resolution sensor to produce good-quality images, and the distinctive information in ridge characteristics is significantly less than in feature points. The positive and negative aspects of these techniques also apply to fingerprinting.

William James Herschel was the son of the astronomer John Herschel. His father asked him to choose a career other than astronomy, so he joined the East India Company, and in 1853 he was posted to Bengal. Following the Indian Mutiny of 1857, Herschel became a member of the Indian Civil Service and was posted to Jungipoor.

In 1858 he made a contract with Mr. Konai, a local man, for the supply of road-building materials. To prevent Konai from later repudiating his signature, Herschel had him impress his handprint on the document (Figure 2.2 shows Mr. Konai's palm prints). Herschel continued to experiment with hand prints and soon realized that the fingers alone were sufficient. He collected prints from friends and family, and found that a person's fingerprints did not change over time. He suggested to the Governor of Bengal that fingerprints be used on legal documents to prevent impersonation and repudiation of contracts, but the proposal was not taken up. [1]

Today palm prints and fingerprints are used to investigate criminal cases: when prints are found on an object at a crime scene, they are collected and compared with the prints of people who have previously committed crimes. Palm prints are also used in government documents as a form of personal signature, and in health applications, so there are many areas in which palm print images are used.

Figure 1.2: Palm Print

1.3.1. Palm Print Features

The palm print has stable and rich line features; three types of line patterns are visible on the palm: principal lines, wrinkles, and ridges. Principal lines are the longest and widest lines on the palm and indicate its most distinguishing directional features. Most people have three principal lines, named the heart line, head line, and life line. Wrinkles are the thinner and more irregular line patterns; the pronounced wrinkles around the principal lines in particular can also contribute to the discriminability of the palm print. Ridges, on the other hand, are the fine line texture distributed throughout the palmar surface. The ridge feature is less useful for discriminating individuals, as ridges cannot be perceived with a poor imaging source. Figure 4 shows the palm lines.

1.3.2. The importance of palm print identification

Every person's palm print is unique, so palm print identification is a highly reliable form of authentication.
The palm print recognition system offers a high level of security because a palm print is very difficult to steal or forge.
Palm print recognition is used in many industries, such as healthcare, aviation, education, construction, and banking, and it is a user-friendly system.
The size of the palm print recognition system is small and portable.
Palm print recognition system is hygienic due to contactless use.

1.4. Biometric Features

Physiological features include DNA, the iris, fingerprints, palm prints, and facial features, while behavioural features include facial expressions, signature, and voice. When measuring physiological or behavioural characteristics, factors such as the person's age, health, or mental state should be eliminated from the measurement. Existing identification systems are not sufficient: conventional methods based on a personal identification number (PIN) together with a user name, or on plastic cards, are both inconvenient and unsafe. An ideal biometric person-recognition system should identify or verify an individual within the database uniquely, accurately, reliably, and efficiently. The system should therefore cope with problems such as input degradation, environmental factors, and signal noise, should not change over time, and should be easy to use. The most commonly used biometric feature is the fingerprint, while the most reliable is the iris scan.

In this project we will work on the palm print recognition system, which uses one of the physical features. The palm print has advantages over other biometric features: the required images are collected with a low-cost procedure, the image does not suffer any degradation, and the False Accept Rate and False Reject Rate take reasonable values. The false acceptance and false rejection rates of a system are the fractions of the total number of identification attempts that are falsely accepted or falsely rejected, respectively.
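The FAR/FRR definitions above can be made concrete with a small helper. The assumption in this sketch is that matching scores are similarities and that an attempt is accepted when its score reaches the decision threshold.

```python
def far_frr(genuine_scores, impostor_scores, threshold):
    # FAR = impostor attempts wrongly accepted / all impostor attempts.
    # FRR = genuine attempts wrongly rejected / all genuine attempts.
    # Scores are similarities: accept when score >= threshold.
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr
```

Sweeping the threshold trades the two rates against each other; the point where they are equal is the usual equal-error-rate summary of a verification system.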

CHAPTER TWO

LITERATURE REVIEW

2.1. Background

In this chapter I will discuss the main approaches to palm print recognition. Personal authentication using palmprint images has received considerable attention over the last five years, and numerous approaches have been proposed in the literature. The available approaches to palmprint authentication can be divided into three categories on the basis of the extracted features: (i) texture-based approaches, (ii) line-based approaches, and (iii) appearance-based approaches. A full description of these approaches is beyond the scope of this chapter, but a summary with the typical references can be found in Table 1. Researchers have shown promising results on inked images, images acquired directly from a scanner, and images acquired from a digital camera using a constrained, pegged setup. However, effort is still required to improve performance on unconstrained images acquired with a peg-free setup; this work therefore uses such images to investigate the achievable improvement. The summary of prior work in Table 1 shows that there has not been any attempt to investigate palmprint verification using multiple representations of the palmprint [2].

Several matching-score-level fusion strategies for combining different biometric modalities have been presented in the literature. It has been shown that different fusion strategies perform differently, but there has been no attempt to combine the decisions of multiple score-level fusion schemes to achieve a further performance improvement. The rest of this chapter is organized as follows: Section 2 describes the block diagram of the proposed system and details the feature-extraction methods used in the experiments; Section 3 details the matching criterion and the proposed fusion strategy; experimental results and their discussion appear in Section 4; finally, the conclusions of this work are summarized in the final section.

2.2. Proposed Systems

Unlike previous work, we propose an alternative approach to palmprint authentication: the simultaneous use of different palmprint representations with the best pair of fixed combination rules. The block diagram of the proposed method for palmprint authentication using the combination of multiple features is shown in Fig. 1. The hand image of every user is acquired with a digital camera. These images are used to extract the region of interest, i.e. the palmprint, using the method detailed in Ref. [5]. Each of these images is further used to extract texture-, line- and appearance-based features using Gabor filters, line detectors, and principal component analysis (PCA) respectively. These features are matched against the respective template features stored during the training stage. The three matching scores from the three classifiers are combined by a fusion mechanism into a combined matching score, which is used to generate a class label, i.e. genuine or imposter, for each user. Experiments were also performed to investigate the performance of decision-level fusion using the individual decisions of the three classifiers; however, the best experimental results were obtained with the proposed fusion strategy, which is detailed in Section 4.

Figure 2.1: Block diagram for personal authentication using palmprint

2.2.1. Gabor Features

Texture features extracted using Gabor filters have been successfully used in fingerprint classification and handwriting recognition, and more recently for palmprints. In the spatial domain, an even-symmetric Gabor filter is a Gaussian function modulated by an oriented cosine function [3]. The impulse response of an even-symmetric Gabor filter in the 2-D plane has the following general form:

In this work, the parameters of the Gabor filters were empirically determined for the acquired palmprint images. If we filter the image with the Gabor filter, we get:

where '∗' denotes discrete convolution and the Gabor filter mask is of size W × W. Each palmprint image is filtered with a bank of six Gabor filters to generate six filtered images. Each filtered image emphasizes the distinct palmprint lines and wrinkles in the corresponding direction while reducing background noise and structures in the other directions; the components of the palmprint wrinkles and lines in six different directions are thus captured by the six filters. Each filtered image is divided into several overlapping blocks of equal size, and the feature vector is formed by computing the standard deviation within each of these overlapping blocks across all six filtered images. This feature vector is used to uniquely represent the palmprint image and to evaluate performance [3].
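A sketch of this pipeline in Python/NumPy, assuming SciPy's `ndimage.convolve` for the filtering. The kernel size, frequency and sigma below are placeholder values, since the text only states that the Gabor parameters were determined empirically.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(size, theta, freq, sigma):
    # Even-symmetric Gabor kernel: a Gaussian envelope modulated by a
    # cosine oriented at angle theta (the general form referred to above).
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    rotated = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) * np.cos(2.0 * np.pi * freq * rotated)

def gabor_features(palm, block=50, step=25):
    # Filter with a bank of six orientations, then take the standard
    # deviation of every overlapping block of every filtered image as the
    # feature vector.  Kernel size (9), frequency (0.1) and sigma (3.0)
    # are illustrative placeholder parameters.
    feats = []
    for k in range(6):
        filtered = convolve(palm.astype(float),
                            gabor_kernel(9, k * np.pi / 6.0, 0.1, 3.0))
        for r in range(0, palm.shape[0] - block + 1, step):
            for c in range(0, palm.shape[1] - block + 1, step):
                feats.append(filtered[r:r + block, c:c + block].std())
    return np.array(feats)
```

For a 100 × 100 input with 50-pixel blocks at a 25-pixel step, each orientation contributes a 3 × 3 grid of blocks, giving a 54-dimensional feature vector.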

Figure 2.2: Spatial-domain representation

2.2.2. Extraction of line features

Palmprint identification using line features has been reported to be effective and to offer high accuracy. The line-feature extraction used in our experiments is the same as that detailed in [4]. Four directional line detectors are used to probe the palmprint wrinkles and lines oriented in each of four directions, i.e. 0°, 45°, 90°, and 135°. The spatial extent of these masks was empirically fixed at 9 × 9. The four resulting images are combined by voting on the gray-level magnitude at each corresponding pixel position; the combined image represents the combined directional map of the palm lines and wrinkles in the palmprint image. This image is then divided into several overlapping square blocks, and the standard deviation of the gray levels in each overlapping block is used to form the feature vector for the palmprint image [2].
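The same idea in code, with one simplification flagged up front: classic 3 × 3 directional line-detector masks stand in for the empirically fixed 9 × 9 detectors. The fusion by gray-level magnitude and the block-wise standard deviations follow the description above.

```python
import numpy as np
from scipy.ndimage import convolve

def line_features(palm, block=50, step=25):
    # Classic 3x3 directional line-detector masks (a stand-in for the
    # 9x9 masks used in the text), one per direction.
    masks = [
        np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]], float),   # 0 deg
        np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]], float),   # 45 deg
        np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]], float),   # 90 deg
        np.array([[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]], float),   # 135 deg
    ]
    responses = np.stack([convolve(palm.astype(float), m) for m in masks])
    combined = np.abs(responses).max(axis=0)   # vote by gray-level magnitude
    feats = []
    for r in range(0, palm.shape[0] - block + 1, step):
        for c in range(0, palm.shape[1] - block + 1, step):
            feats.append(combined[r:r + block, c:c + block].std())
    return np.array(feats)
```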

The proposed palmprint verification technique was evaluated on a dataset of 100 users. This dataset comprises 1000 images, 10 per user, which were acquired with a digital camera using an unconstrained, peg-free setup under indoor conditions. Fig. 5 shows a typical acquisition of a hand image using the digital camera with live feedback. The hand images were collected over a period of 3 months from users in the age group 16-50 years, in two sessions, from volunteers who were not always very cooperative. During image acquisition the users were only requested to make sure that (i) their fingers do not touch each other and (ii) most of the back of their hand touches the imaging table. The automated segmentation of the region of interest, i.e. the palmprint, was achieved by the method detailed in Ref. [5]. Palmprint images of 300 × 300 pixels were thus obtained and used in our experiments, and each acquired image was then histogram-equalized.

2.2.3. Extraction of PCA features

The information content of a palmprint image also comprises certain local and global features that can be used for identification. This information can be extracted by registering the variations in an ensemble of palmprint images, independent of any judgment about palmprint lines or wrinkles. Each N × N pixel palmprint image is represented by a vector of dimension 1 × N² using row ordering. The available set of K training vectors is subjected to PCA, which generates a set of orthonormal vectors that can optimally represent the information in the training dataset. The covariance matrix of the normalized vectors can be obtained as follows [2]:
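Leaving the covariance equation itself aside, the PCA feature extraction described here can be sketched as below. Note that the direct N² × N² covariance used in this sketch only suits tiny images; for real 300 × 300 palmprints one would use the standard K × K eigendecomposition trick instead.

```python
import numpy as np

def train_pca_projector(train_images, n_components):
    # Row-order each N x N image into a 1 x N^2 vector, centre the K
    # training vectors on their mean, and eigendecompose the covariance
    # matrix of the normalised vectors.
    X = np.array([img.ravel() for img in train_images], dtype=float)
    mean = X.mean(axis=0)
    centred = X - mean
    cov = centred.T @ centred / len(X)       # covariance of normalised vectors
    vals, vecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    basis = vecs[:, np.argsort(vals)[::-1][:n_components]]  # top eigenvectors

    def project(img):
        # Appearance-based feature vector: projection onto the eigenbasis.
        return (img.ravel() - mean) @ basis

    return project
```

Projecting a probe palmprint onto the retained eigenvectors yields the appearance-based feature vector that the matching stage compares against stored templates.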

2.3. Matching criterion

The classification of the feature vectors extracted by each of the three methods is achieved with a nearest-neighbour (NN) classifier. The NN classifier is a simple nonparametric classifier which computes the minimum distance between the feature vector of an unknown sample g and that of gm in the mth class [5]:

where the two quantities are the nth components of the feature vectors of the unknown sample and of the mth class, respectively. Each of the three feature sets obtained from the three palmprint representations was tested with each of the three distance measures (8)-(10). The distance measure that achieved the best performance was finally selected for the classification of feature sets from the corresponding palmprint representation.
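A minimal nearest-neighbour classifier over class template vectors, with Euclidean distance as one example of the kind of distance measure compared here:

```python
def euclidean(a, b):
    # One example distance measure; the text compares three such measures.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def nearest_neighbour(unknown, class_templates, distance):
    # Assign the unknown feature vector to the class whose stored template
    # lies at minimum distance under the chosen measure.
    return min(class_templates,
               key=lambda m: distance(unknown, class_templates[m]))
```

Any of the candidate distance measures can be passed in as `distance`, which mirrors how the best-performing measure was selected per representation.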

The fusion strategy aims at achieving better combined classification performance than any single palmprint representation alone. There are three general approaches to combining classifiers: at feature level, at score level, and at decision level. Because of the large and varying dimensions of the feature vectors, feature-level fusion has not been considered in this work. A survey [20] of approaches to multimodal fusion suggests that score-level fusion has been the most common approach and has been shown to offer significant performance improvement. The goal of evaluating several score-level fusion strategies is to obtain the best possible palmprint-verification performance from the given set of images. Let LGabor(g, gm), LLine(g, gm) and LPCA(g, gm) denote the matching distances produced by the Gabor, Line and PCA classifiers respectively. The combined matching score LC(g, gm) using the well-known fixed rules can be obtained

Figure 2.3: Combination of Gabor, Line, and PCA

Here I is the chosen combining rule, i.e. I represents the maximum, sum, product, or minimum rule (abbreviated MAX, SUM, PROD, and MIN respectively), evaluated in this work. One of the shortcomings of fixed rules is the assumption that the individual classifiers are independent. This assumption may be poor, especially for the Gabor- and Line-based features. The SUM rule can therefore be a better choice for consolidating matching scores when combining Gabor and Line features. These consolidated matching scores can be further combined with the PCA matching scores using the PROD rule (Figure 2.3), since the PROD rule is expected to perform better under the assumption of independent data representations [17]. The individual decisions from the three palmprint representations were also combined (majority voting) to examine the performance improvement. The performances of the various score-level fusion strategies differ, so the performance of a simple hybrid fusion strategy that combines the decisions of several fixed score-level fusion schemes, as shown in Figure 2.4, was also examined in this work. Instead of using fixed combination rules, the matching scores from the training set can also be used to train a classifier for two-class, i.e. genuine versus impostor, classification. The combined set of three matching scores has therefore additionally been investigated using feed-forward neural network (FFN) and support vector machine (SVM) classifiers [5].
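Decision-level fusion by majority voting, as mentioned above, can be sketched in a few lines; the classifier names and votes below are illustrative, not taken from the experiments:

```python
from collections import Counter

def majority_vote(decisions):
    """Decision-level fusion: each classifier casts a 'genuine' or
    'impostor' vote and the most common label wins."""
    counts = Counter(decisions)
    return counts.most_common(1)[0][0]

# Gabor, Line, and PCA classifiers each make a binary accept/reject
# decision (values here are hypothetical):
votes = {"gabor": "genuine", "line": "genuine", "pca": "impostor"}
final = majority_vote(votes.values())  # → "genuine"
```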

Figure 2.4: Hybrid fusion scheme

2.4. Image Acquisition & Alignment

Our image acquisition setup is inherently simple and uses neither special illumination (as in [3]) nor any pegs that would cause inconvenience to users (as in [20]). An Olympus C-3020 digital camera (1280 × 960 pixels) was used to acquire the hand images, as shown in figure 2. The users were only requested to make sure that (i) their …

2.4.1. Extraction of hand geometry images

All of the acquired images need to be aligned in a preferred direction so as to capture the same features for matching. An image thresholding operation is used to obtain a binary hand-shape image. The threshold value is computed automatically using Otsu's method [25]. Since the image background is static (black), the threshold value can be computed once and used subsequently for the other images. The binarized shape of the hand can be approximated by an ellipse. The parameters of the best-fitting ellipse, for a given binary hand shape, are computed using moments [26]. The orientation of the binarized hand image is approximated by the major axis of the ellipse, and the required angle of rotation is the difference between the normal and the orientation of the image [6]. As shown in figure 3, the binarized image is rotated and used for computing the hand geometry features. The estimated orientation of the binarized image is also used to rotate the grey-level hand image, from which the palmprint image is extracted as detailed in the following subsection.

Figure 2.5: Extraction of two biometric modalities from the hand image

2.4.2. Extraction of palmprint images

Each binarized hand-shape image is subjected to morphological erosion, with a known binary structuring element, to compute the region of interest, i.e., the palmprint. Let R be the set of non-zero pixels in a given binary image and SE be the set of non-zero pixels of the structuring element. The morphological erosion is defined as

R ⊖ SE = { g : SEg ⊆ R }

where SEg denotes the structuring element with its reference point shifted by g pixels. A square structuring element (SE) is used to probe the composite binarized image. The centre of the binary hand image after erosion, i.e., the centre of the rectangle that can enclose the residue, is determined. These centre coordinates are used to extract a square palmprint region of fixed size, as shown in figure 3.
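The erosion definition above translates directly into code. The sketch below is a plain-NumPy illustration restricted to odd-sized square structuring elements, which is a simplification; a production system would use a library routine instead.

```python
import numpy as np

def binary_erosion(image, se):
    """Morphological erosion: a pixel survives only if the structuring
    element, centred there, fits entirely inside the foreground,
    i.e. out = { g : SE_g is a subset of R }."""
    ih, iw = image.shape
    sh, sw = se.shape
    ph, pw = sh // 2, sw // 2
    out = np.zeros_like(image, dtype=bool)
    for y in range(ph, ih - ph):
        for x in range(pw, iw - pw):
            window = image[y - ph:y + ph + 1, x - pw:x + pw + 1]
            out[y, x] = np.all(window[se.astype(bool)])
    return out

# Eroding a 6x6 solid square with a 3x3 SE leaves its 4x4 interior:
img = np.zeros((10, 10), dtype=bool)
img[2:8, 2:8] = True
eroded = binary_erosion(img, np.ones((3, 3)))
```

The centre of the surviving residue then gives the coordinates from which the fixed-size palmprint square is cropped.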

2.4.3. Extraction of hand geometry features

The binary image, as shown in figure 3(c), is used to compute significant hand geometry features. A total of 16 hand geometry features were used (figure 5): 4 finger lengths, 8 finger widths (2 widths per finger), palm width, palm length, hand area, and hand length. The hand geometry of each hand image is thus described by a feature vector of length 1 × 16. The multiple pieces of evidence can be combined by the various information fusion strategies that have been proposed in the literature. In the context of biometrics, three levels of information fusion schemes have been suggested: (i) fusion at the representation level, where the feature vectors of multiple biometrics are concatenated to form a combined feature vector; (ii) fusion at the decision level, where the decision scores of multiple biometric systems are combined to generate a final decision score; and (iii) fusion at the dynamic level, where multiple decisions from multiple biometric systems are combined. The first two fusion schemes are the more relevant for a bimodal biometric system and were considered in this work.
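Representation-level fusion of the two modalities amounts to vector concatenation, sketched below. Only the 1 × 16 hand geometry vector comes from the text; the palmprint vector length and the random values are assumptions for illustration.

```python
import numpy as np

def representation_level_fusion(hand_geometry, palmprint):
    """Representation-level fusion: concatenate the per-modality
    feature vectors into one combined feature vector."""
    return np.concatenate([np.ravel(hand_geometry), np.ravel(palmprint)])

hand = np.random.rand(16)   # 16 hand geometry features, as in the text
palm = np.random.rand(64)   # palmprint feature vector (length assumed)
fused = representation_level_fusion(hand, palm)
```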

Figure 2.6: Hand geometry feature extraction

CHAPTER FOUR

IMPLEMENTATION

4.1. Background

In this project …

CHAPTER FIVE

CONCLUSION

5.1. Background

References

A. Kumar, D. C. Wong, H. C. Shen, and A. K. Jain, “Personal verification using palmprint and hand geometry biometric,” in International Conference on Audio-and Video-Based Biometric Person Authentication, 2013, pp. 668-678.
A. Kumar and D. Zhang, “Personal authentication using multiple palmprint representation,” Pattern Recognition, vol. 38, pp. 1695-1704, 2010.
S. Pathania, “Palm Print: A Biometric for Human Identification,” 2016.
J. Kodl and M. Lokay, “Human Identity, Human Identification and Human Security,” in Proceedings of the Conference on Security and Protection of Information, Idet Brno, Czech Republic, 2010, pp. 129-138.
S. Sumathi and R. R. Hemamalini, “Person identification using palm print features with an efficient method of DWT,” in Global Trends in Information Systems and Software Applications, ed: Springer, 2012, pp. 337-346.
H. J. Asghar, J. Pieprzyk, and H. Wang, “A new human identification protocol and Coppersmith’s baby-step giant-step algorithm,” in International Conference on Applied Cryptography and Network Security, 2010, pp. 349-366.


Dental implications of eating disorders

Eating disorders are a type of psychological disorder characterised by abnormal or unhealthy eating habits, usually linked with restrictive food intake. The cause of onset cannot be attributed to one reason alone, as it is believed that there are multiple contributing factors, including biological, sociocultural and psychological influences. The sociocultural influences are linked with the western beauty ideals that have become ingrained in modern society, due to the increasing importance of social media and its dictation of the ideal body type. Studies show that eating disorders are significantly less common within cultures that have yet to be exposed to these ideals. The most commonly diagnosed disorders are anorexia nervosa (AN) and bulimia nervosa (BN), affecting more women than men (both AN and BN occur in ratios of 3:1, females to males). Anorexia is a persistent restriction of energy intake and can be linked with obsessive behaviours that stem from severe body dysmorphia and fear of gaining weight. Bulimia is defined by repeated episodes of binge eating followed by measures to prevent weight gain, such as forced regurgitation of stomach contents. The effects on an individual's mental and physical health are commonly recognised; however, dental implications may often be overlooked. The unknown cause of eating disorders increases the extent of their significance for dental health because, without a definite root cause, it is difficult and sometimes even impossible to 'cure' an eating disorder; thus, preventing the dental implications is more difficult. The association between eating disorders and oral health problems was first reported in the late 1970s, so the established link is relatively recent. Oral complications may be the first and sometimes only clue to an underlying eating disorder.
In the US, 28% of all bulimic patients were first diagnosed with bulimia during a dental appointment and this highlights the visibly clear and distinct impact that eating disorders can have on teeth . This report will investigate the main dental implications that may be caused by eating disorders. The significance will be analysed by looking at what causes the dental problems and how greatly these can be linked directly to eating disorders. The extent of significance will be analysed through looking at the extent of impact and whether these impacts are permanent or reversible.

Oral manifestations of nutritional deficiencies

Anorexia nervosa is characterised by restriction of food intake and an extreme fear of weight gain; it is therefore common for sufferers to be malnourished and vitamin deficient. Aside from the obvious health risks, these factors also lead to several oral manifestations. However, dietary patterns show great variability and will usually differ depending on the individual. Dietary patterns include calorie restricting, eating healthily but at irregular intervals, binge eating, vomiting and fasting for prolonged periods. Therefore, there are limitations to the conclusions we can draw as to the significance of the effects on oral health, since there is an inconsistency in the contents and habits of daily food consumption. When calorie restriction is involved, the body will attempt to salvage protein, vitamins and other nutrients to keep major bodily functions running steadily, and consequently oral maintenance will be neglected. Studies show that patients with anorexia presented with diets containing significantly lower values of all major nutrients compared with controls; specifically, intakes of vitamin A, vitamin C and calcium below RDA levels (recommended dietary allowances) were present in the majority of patients. However, low intakes (below RDA values) of vitamins B1, B2 and B3 were only reported in a few cases. In contrast to these findings, another source states that there is a clear reduced intake of B vitamins in anorexic and bulimic patients. A possible explanation for these results may be the previously discussed inconsistency in the daily intake of individuals with eating disorders, but overall we can assume that nutrient deficiencies of varying severity are present in the majority of the anorexic population. The common deficiencies (vitamin D, vitamin C, vitamin B and vitamin A) are associated with certain disturbances in the oral structure because these vitamins are essential for maintaining good oral health.
A lack of vitamin A is related to enamel hypoplasia, which consists of horizontal or linear hypoplastic grooves in the enamel. Vitamin B deficiencies cause complications such as a painful, burning sensation of the tongue, aphthous stomatitis (benign mouth ulcers) and atrophic glossitis (a smooth, glossy and often tender appearance of the tongue). A lack of vitamin A is also responsible for infections in the oral cavity, as the deficiency can lead to the loss of salivary gland function (salivary gland atrophy), which reduces the defence capacity of the oral cavity as well as inhibiting its ability to buffer the plaque acids. Inability to buffer these plaque acids could lead to an increased risk of dental caries. Additionally, vitamin B deficiencies can induce angular cheilitis, a condition that can last from days to years and consists of inflammation focused in the corners of the mouth, causing irritated, red and itchy skin, often accompanied by a painful sensation. There is a consistency in the evaluation of calcium deficiencies among sufferers of eating disorders, and this has a clearly significant impact on oral health. There is an established relationship between calcium intake and periodontal diseases; therefore, having an eating disorder increases a person's susceptibility. The process of building density in the alveolar bone that surrounds and supports the teeth is primarily reliant on calcium. Alveolar bone cannot grow back, so calcium is needed to stimulate its repair. This is important because the loss of alveolar bone can expose sensitive root surfaces of teeth, which can progress to further oral complications. If patients are not absorbing enough vitamin C, after an extended period there is a chance that they will develop osteoporosis.
Although this is rare and most common amongst individuals with anorexia, it can lead to serious consequences because, alongside the loss of density in the alveolar bone, it can progress to the loosening and eventually the loss of teeth: a permanent defect. With anorexic and bulimic patients there is an increased likelihood of halitosis (bad breath) because, in the absence of the necessary vitamins and minerals, the body is unable to maintain the health of the oral cavity. If the vitamin C deficiency that most patients with eating disorders suffer from is prolonged and sufficiently severe, then there is a risk of developing scurvy. In general, therefore, it seems that the nutritional deficiencies caused by anorexia and bulimia significantly impact oral health, in ways ranging from unpleasant breath and physical defects to permanent loss of oral structures that must be addressed with medical and cosmetic interventions.

Periodontal disease

As explained earlier when discussing calcium deficiency, the risk of periodontal disease may increase if an individual suffers from an eating disorder. General malnourishment is another factor that causes a quicker onset of periodontal disease, which always begins with gingivitis and only occurs in the presence of dental plaque. As discussed above, the relationship between calcium intake and periodontal disease is potentially controversial, except in rare cases of severe nutritional deficiency. Patients dealing with extreme cases of anorexia nervosa may fall under this category. Due to the intense psychological nature of this disorder, the extremity of food restriction is likely to progress further as the need to lose weight quickly transforms into an addiction. After studying nutritionally deficient animals, the conclusions drawn suggest that nutritional factors alone are not capable of initiating periodontal diseases but are able to affect their progression. This would suggest that having an eating disorder does not place an individual at greater risk of initiating periodontal diseases compared to an average person, despite their malnourished condition. However, catalysing the progression of gingivitis into periodontal disease does suggest that having an eating disorder places patients at a significantly greater risk, because their untreated gingivitis will evolve into periodontitis at a greater rate. This effect is significant because periodontitis is an irreversible condition that causes permanent damage. The evidence is limited, however, as it is based on animal research and may only correspond to humans to a limited degree.

Turning now to the experimental evidence on the idea that dental plaque is an essential aetiological agent in chronic periodontal diseases: experiments involving the isolation of human plaque and the introduction of the plaque bacteria into the mouths of gnotobiotic animals have shown that a link exists between the bacteria in dental plaque and periodontal disease. Supporting this idea, epidemiological studies produced evidence of a strong positive correlation between dental plaque and the severity of periodontal disease. Unlike some of the evidence mentioned previously, different clinical experiments on both animals and humans show the major finding that the accumulation of dental plaque is a result of withdrawing oral hygiene in initially healthy mouths. There is evidence to suggest that bulimics manifest a significantly higher retention of dental plaque, so this disorder consequently puts patients at a greater risk not only of advancement into periodontal disease, but of severe periodontal disease. As mentioned earlier, periodontal disease only occurs after the development of gingivitis, which consists of three stages: initial lesion, early lesion and established lesion. When an advanced lesion is present, it corresponds to chronic periodontitis: "a disease characterized by destruction of the connective tissue attachment of the root of the tooth, loss of alveolar bone, and pocket formation". Given the increased likelihood of dental plaque being present in the mouths of bulimics, the strong association between dental plaque and periodontal disease can be linked directly to demonstrate the significance of bulimia's effects on oral health.
Although the evidence is not as conclusive, anorexic patients are liable to malnourishment, and since nutritional factors aid the development of gingivitis into periodontal disease, there is a significantly increased chance of anorexic patients' oral condition transitioning from gingivitis to periodontal disease. This is extremely significant because, unlike gingivitis, the oral damage of periodontal disease will be irreversible.

Eating disorders and caries

This increased likelihood of periodontal disease means that an individual is more likely to retain dental plaque, a significant factor that contributes to dental caries. Tooth decay (also known as dental caries) is defined as "the demineralisation of the inorganic part of the tooth structure with the dissolution of the organic substance". It involves the anaerobic respiration of consumed dietary sugars, where the organic acids formed in the dental plaque can demineralise the enamel and dentine. A possible contributing factor to dental caries is a common unhealthy habit adopted by people with eating disorders: the consumption of acidic zero-calorie drinks, an example being Coke Zero. According to Professor Colon, certain patients will drink as much as 6 litres a day in an attempt to reduce hunger and help with the process of SIV (self-induced vomiting). During episodes of "binge eating" (more common with bulimia), an individual will consume large amounts of food, usually high in sugar or fat, within a short timeframe, usually with the intention of regurgitating the contents shortly afterwards. Increased amounts of sugary foods are ingested during this period, leading to an increased risk of dental caries. A study shows that prolonged periods of dietary restraint in anorexic patients did not result in changes to the bacteria associated with dental caries, which suggests that malnourishment is not a significant factor when it comes to the risk of dental caries. Due to the obsessive personality traits seen in anorexic patients, it is likely that these individuals are more fastidious in their oral hygiene, which reduces dental caries as a risk compared to other complications such as dental erosion, which is explored later on. Although dental caries does not seem to arise as a direct issue, studies show that patients with anorexia had greater DMFS scores (decayed, missing and filled surfaces) than controls. This is likely a result of factors discussed previously, such as the consumption of low-calorie acidic drinks, rather than of the restricted dietary intake itself.

Bulimia seems to place individuals at a significantly greater risk of dental caries than anorexia. A study of 33 females showed that bulimics had more intense caries when compared to healthy age- and sex-matched controls. Another more recently recognised habit is CHSP (chewing and spitting), where an individual can seemingly "enjoy" the taste of certain foods by chewing the food for some time before proceeding to spit it out to avoid consuming any calories. A study shows that 34% of hospitalised eating disorder patients admitted to at least one episode of chewing and spitting in the month prior to admission. This habit can significantly increase dental problems by leading to cavities and tooth decay, presumably due to the high probability of excess residual carbohydrates. This assumption derives from the aetiology of dental caries, which involves the action of acids on the enamel surface. When dietary carbohydrates react with bacteria present in the dental plaque, the acid formed initiates the process of decalcifying tooth substance and subsequently causes disintegration of the oral matrix. Abundant extracellular polysaccharides can increase the bulk of plaque inside the mouth, which interferes with the outward diffusion of acids and the inward diffusion of saliva. Since saliva has buffering properties and acts as a defence against caries by maintaining pH, interference with the diffusion of saliva reduces the defence against tooth decay. Dietary sugars diffuse rapidly through plaque and are converted to acids by bacterial metabolism. Acid is generated within the substance of plaque to such an extent that enamel may dissolve, and enamel caries leads to cavity formation. Binge eating or CHSP increases the acidity of plaque, since ten minutes after ingesting sugar the pH of plaque may fall by as much as two units. Supporting this scientific explanation, there is evidence of the association between carbohydrate intake and dental caries.
For example, the prevalence of dental caries decreased during WWII due to sucrose shortages, followed by a rise to previous levels during the post-war period as the availability of sucrose increased. Hopewood House (a children's home) excluded sucrose and white bread from the diet: the children had low caries rates, which increased dramatically when they moved out. Alongside this, intrinsic factors such as tooth position, tooth morphology and enamel structure also affect the risk of caries development, and these do not link directly to eating disorders because the variables differ throughout the whole population. However, an extrinsic factor that may reduce the incidence of caries is a greater proportion of fat in the diet, because phosphates can reduce the cariogenic effect of sugar. Since individuals with anorexia generally avoid foods with high fat content, they are unlikely to ingest the amount of phosphates necessary to reduce their risk of caries. All of this evidence relates to the significance of eating disorders (specifically bulimia) and the role they play in increasing the likelihood of caries through incidences of binge eating, CHSP, low fat intake and consumption of acidic drinks high in sugar.

Oral consequences of medication

After discussing dental caries, it is evident that saliva plays an important role in the maintenance of a healthy oral cavity. Twenty women with bulimia and 20 age- and gender-matched controls were studied, and the results showed that the unstimulated whole saliva flow rate (UWS) was reduced in the bulimic group, mainly due to medication. Although the UWS was affected, no major compositional salivary changes were found. This information is contrasted by another study, which found that bulimic patients did not present evidence of lower salivary flow rates but did have more acidic saliva. Another study was consistent with the first and found that stimulated and resting salivary flow was poor amongst bulimic individuals compared to healthy controls. It also found that the pH levels of saliva were lower than in the control group but were still within the normal range. Due to the range of findings and the limited sample sizes in the studies, these results are inconclusive in places and need to be interpreted with caution. However, it would make sense that habits that accompany eating disorders, such as fasting or vomiting, would potentially cause dehydration and result in a lower UWS.

Although we are unable to determine a strong link between eating disorders and their effect on saliva, there is conclusive evidence of oral reactions to medication. If an eating disorder has been diagnosed, selective serotonin reuptake inhibitors such as fluoxetine (a common antidepressant), anti-psychotics and anti-cholinergic medication may be prescribed. Smith and Burtner (1994) found that 80.5% of the time, xerostomia (dry mouth) was a side effect of medications. Direct oral effects of xerostomia include diminishment or absence of saliva as well as alterations in saliva composition. These medications also have indirect effects on oral health by causing lethargy, fatigue and lack of motor control, which can impair an individual's ability to practise a good oral hygiene technique. The medications have anticholinergic or antimuscarinic effects, which block the actions of the parasympathetic system by inhibiting its neurotransmitter, acetylcholine: it can no longer bind to the salivary gland receptors, and consequently the salivary glands cannot secrete saliva. The reason this has such an immense impact on oral health is the importance of the functions of saliva in the mouth. These include protection of the oral mucosa, chemical buffering (as mentioned previously when discussing dental caries), digestion, taste, antimicrobial action and maintenance of tooth integrity. Saliva contains glycoproteins that increase its viscosity and help form a protective barrier against microbial toxins and minor trauma, protecting oral health both chemically and physically. However, a study by Nagler (2004) found that in up to one third of cases xerostomia does not lead to a real reduction in salivary flow rate, and this is a limitation to consider. Patients with xerostomia may experience difficulty chewing, swallowing or speaking, and the salivary glands may swell intermittently or chronically.
Physical defects include cracked, peeling lips, a smoothed, reddened tongue and a thinner, reddened oral mucosa (the membrane lining the inside of the mouth). There are links between xerostomia and the previously discussed oral complications, as there was often a marked increase in caries among patients experiencing dry mouth, where tooth decay could be rapid and progressive even in the presence of excellent hygiene. Overall, the extent of the impact caused by eating disorders with respect to xerostomia and a decreased salivary flow rate is fairly minimal, for a few reasons. First, the evidence related to salivary flow rate is inconclusive and there are several contrasting studies; therefore, a confident assumption linking eating disorders to salivary flow rate cannot be made. On the other hand, there is strong evidence to suggest that xerostomia can be caused by medication, which can then affect the flow of saliva; however, in terms of eating disorders the link is weak and not exclusive. This is due to the simple fact that medication is taken by a large proportion of the population for different conditions, ranging from depression to heart disease. Therefore, eating disorders are not uniquely responsible for causing xerostomia. As well as this, xerostomia is a secondary effect, because it is the medication that is responsible for the oral complication, not the psychological disorder. This gives reason to infer that eating disorders do not have a highly significant impact on this aspect of oral health.

Self-induced vomiting:

The most common symptom associated with bulimia is the binge-purge cycle. This involves an individual consuming large quantities of food in a short time period (binging), followed by an attempt to avoid gaining weight by self-induced vomiting or taking laxatives (purging). Linked with the previously discussed issue of xerostomia, since laxatives are medication, frequent use will significantly increase a patient's likelihood of alterations in saliva content and flow rate, which can lead to more significant dental issues. A case study evaluates a 25-year-old female patient who had suffered from bulimia for five years. It was found that this particular individual vomited 5-7 times per day and suffered from swelling on both sides of her face and mandible (