Integration Of Nuclear Energy

Over many decades, humanity has tried several approaches to satisfying its demand for energy. As the population continues to grow, the ways we obtain energy, from our houses to our workplaces, have changed. Among the energy sources that exist today, one has been the most controversial of all, owing to the instinctive fear of the worst that can happen during its production and use. This is nuclear energy, the energy released during nuclear fission. It delivers electricity to cities all over the U.S., where it accounts for about 20% of electricity generation. Nuclear energy is often seen as a danger to society because of the 57 accidents that have occurred since the Chernobyl disaster, such as those at Marcoule (2011) and Fukushima (2011). Furthermore, radioactive material can be disseminated, and radiation is harmful; even small quantities can be lethal. Contamination with radioactive material can make an entire region uninhabitable for thousands of years. Yet despite nuclear energy’s negative reputation from the devastation it has caused, it is ultimately the better alternative, because humanity must face the reality that it cannot depend indefinitely on the combustion of coal, gas, and oil for most of its energy needs. Nuclear energy therefore has to play a huge role in the necessary transformation of the 21st-century energy-supply system.

The Nuclear Revolution

In the mid-20th century, society speculated that nuclear energy could be a viable energy source that would produce enormous benefits for the world. The development of nuclear energy in the second half of the 20th century progressed through several stages: theoretical development by physicists, military application in atomic weapons during World War II, commercialization by the electrical industry in several industrialized nations, proliferation (for military and non-military uses) among less developed nations, and crises spawned by power plant accidents, cost overruns, and public protests. Through the 1960s and 1970s, many nuclear reactors were built to produce electricity, using designs similar to those used in submarines. They worked well and produced cheap, emission-free electricity with a very low mining and transportation footprint. Partly because of that low environmental footprint, President Ronald Reagan raised the budget for nuclear energy 36%, to $1.6 billion, in the 1980s while cutting every other Department of Energy program. Moreover, given the demand nuclear energy enjoyed, the U.S. population envisioned its great expansion throughout the nation. For instance, Barry Brook, ARC Australian Laureate Professor and Chair of Environmental Sustainability at the University of Tasmania, asserts, “From these developments of the 20th century, an entire industry emerged that has led to 435 operating nuclear power reactors (as of late 2014), 72 under construction, and 174 more on order or planned, as well as numerous research reactors around the world…” After WWII, nuclear energy expanded greatly throughout the U.S. and has been a resource for humans to rely on for 80 years and counting.
It has promoted major advances in fields such as medicine and weaponry. In addition, nuclear energy has played an important role in keeping humanity out of an energy crisis, as William Parker, a member of the Institute of Physics, notes: “It has been shown energy released from uranium per gram is much more than that of fuels such as oil or coal; approximately 8,000 times more efficient.” Clearly, the expansion and development of nuclear energy has greatly influenced not only our nation and cities, but also our daily lives.

The Power of Nuclear Energy

Over time, nuclear energy has expanded to meet the U.S. population’s demand for energy, delivering clean electricity and a large number of products and services for human activities, including medical diagnosis and therapy, industry, and agriculture. Nuclear energy is not limited to generating electricity: it can equally well be used for such important tasks as desalination, hydrogen production, space heating, process heat in industry, and extracting carbon from CO2 to combine with hydrogen into synthetic liquid fuels. Moreover, after the Chernobyl accident, the United Nations strengthened the role of the IAEA as an auditor of world nuclear safety. The World Nuclear Association, an international organization that promotes nuclear power, asserts, “The IAEA prescribes safety procedures and the reporting of minor incidents. Its role has been strengthened since 1996. Every country which operates nuclear power plants has a nuclear safety inspectorate and all of these work closely with the IAEA.” Policymakers strictly enforced these safety regulations in response to public fears of nuclear energy. Thanks to such strict regulation, nuclear electricity generation can be considered extremely safe compared with coal mining, which kills several thousand people every year in the course of providing fuel for electricity. Indeed, to date, no deaths have been attributed to radiation exposure from the Fukushima meltdown.

Limitations

However, although nuclear energy is an effective resource for the U.S. in the battle against greenhouse gas emissions, it comes with many limitations. One problem is the limited uranium reserves we currently hold. Daniel B. Botkin, Professor Emeritus in the Department of Ecology and Evolution and president of the Center for the Study of the Environment, states, “Best estimates put the amount of uranium that can be mined economically at about 5.5 million metric tons…today’s nuclear plants use 70,000 metric tons a year of uranium…at this rate uranium would last about 80 years.” This limited horizon for long-term use of nuclear energy makes society prone to falling back on fossil fuels, which are cheap and abundant. Consequently, nuclear energy cannot yet meet U.S. energy demand: fossil fuels accounted for 80% of the energy consumed in the U.S. in 2017, while nuclear energy accounted for only 9%. Furthermore, nuclear energy is not only an unreliable long-term resource but has also caused severe devastation, such as the Chernobyl disaster, which resulted in widespread radioactive contamination in regions of Belarus, Russia, and Ukraine inhabited by several million people. Devastation like this has made it a challenge to convince an often reluctant public that, with new waste-disposal techniques, nuclear energy is worth a second look in the interests of sustainable development.
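Botkin's 80-year figure follows directly from the two numbers he cites, and the arithmetic can be checked in a few lines (figures are those quoted above; this is only a back-of-the-envelope check, ignoring growth in consumption or new discoveries):

```python
# Rough check of Botkin's uranium-lifespan estimate, using the figures quoted above.
reserves_tonnes = 5_500_000   # uranium minable economically, metric tons
annual_use_tonnes = 70_000    # consumption by today's nuclear plants, tons/year

lifespan_years = reserves_tonnes / annual_use_tonnes
print(f"At current rates, reserves last about {lifespan_years:.0f} years")
```

The quotient is roughly 79 years, consistent with the "about 80 years" in the quotation.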

Regulation on Nuclear Power

Nuclear energy has proven effective at serving the growing U.S. population, and as a result it could be acknowledged as a “stepping stone” from non-renewable to renewable energy. One way to advance nuclear energy in the U.S. is to implement more safety regulations for nuclear reactors. Nuclear energy needs to be safe if people are to trust it and move from fossil fuels to a low-emission energy source such as nuclear. Furthermore, nuclear energy can ease the transition to renewables by buying time to refine sources, such as solar and wind, that are not yet efficient enough to meet demand. To implement nuclear energy globally and in the U.S., the world’s industrial nations should take the lead in converting the major part of their stationary electricity-generating capacity from fossil fuels to nuclear fission. With a long-term energy policy and proper incentives, this could be achieved within a few decades, as France has already demonstrated. Such a transformation could drastically reduce global greenhouse-gas emissions of both carbon dioxide and methane.


Is Nuclear Power a viable source of energy?

6th Form Economics project:

Nuclear power, the energy of the future of the 1950s, is now starting to feel like the past. Around 450 nuclear reactors worldwide currently generate 11% of the world’s electricity, or approximately 2,500 TWh a year, just under the total nuclear power generated globally in 2001 and only 500 TWh more than in 1991. The number of operating reactors worldwide has seen the same stagnation, increasing by only 31 since 1989, an annual growth rate of only 0.23% compared with 12.9% from 1959 to 1989. Most reactors, especially in Europe and North America, were built before the 90s, and the average age of reactors worldwide is just over 28 years. Large-scale nuclear accidents such as Chernobyl in 1986 or, much more recently, Fukushima in 2011 have damaged public support for nuclear power and helped cause this decline. But the weight of evidence increasingly suggests that nuclear is safer than most other energy sources and has an extremely low carbon footprint, shifting the argument against nuclear from concerns about safety and the environment to questions about its economic viability. The crucial question that remains is therefore how well nuclear power can compete against renewables to produce the low-carbon energy required to tackle global warming.

The costs of most renewable energy sources have been falling rapidly, making them increasingly able to outcompete nuclear power as a low-carbon option, and even fossil fuels in some places; photovoltaic panels, for example, halved in price from 2008 to 2014. Worse still for nuclear power, while renewable costs have been falling, plans for new nuclear plants have been plagued by delays and cost overruns. In the UK, Hinkley Point C is set to cost £20.3bn, making it the world’s most expensive power station, and significant design issues have raised questions as to whether the plant will be completed by 2025, its current goal. In France, the Flamanville 3 reactor is now predicted to cost three times its original budget, and repeated delays have pushed the start-up date, originally set for 2012, to 2020. The story is the same in the US, where delays and extra costs have plagued the construction of the Vogtle 3 and 4 reactors, now due for completion in 2020-21, four years past the original target. Nuclear power seemingly cannot deliver the cheap, carbon-free energy it promised and is being outperformed by renewable sources such as solar and wind.

The crucial and recurring issue with nuclear power is that it requires huge upfront costs, especially when plants are built individually, and provides revenue only years after the start of construction. Investment in nuclear is therefore risky, long-term, and hard to do well on a small scale, making it a much bigger gamble, though new technologies such as SMRs (Small Modular Reactors) may change this in the coming decades. Because other technologies improve over the years it takes to build a nuclear plant, it is often better for private firms, which are less likely to be able to afford the large-scale programs that enable significant cost reductions or a lower debt-to-equity ratio in their capital structure, to invest in more easily scalable and shorter-term energy sources, especially where subsidies favour renewables, as in many developed countries. All of this points to the fundamental flaw of nuclear: it requires going all the way. Small-scale nuclear programs funded mostly with debt, with high discount rates and low capacity factors because the plants are switched off frequently, will invariably have a very high Levelised Cost of Energy (LCOE), because nuclear is so capital-intensive.

That said, the reverse is true as well. Nuclear plants have very low operating costs, almost no external costs, and decommissioning costs that are only a small portion of the initial capital cost, even at a low discount rate such as 3%, because of a nuclear plant’s long lifespan and the fact that many can be extended. Operating costs include fuel, which is extremely cheap for nuclear at only 0.0049 USD per kWh, and non-fuel operation and maintenance, barely higher at 0.0137 USD per kWh. This includes waste disposal, a frequently cited political issue that has not been a technical obstacle for decades: waste can be reused relatively well and stored on site safely at very low cost, simply because the quantity of fuel used, and therefore of waste produced, is so small. The fuel, uranium, is abundant, and technology enabling uranium to be extracted from seawater would give access to a 60,000-year supply at present rates of consumption, so costs from ‘resource depletion’ are also small. Finally, external costs represent a very small proportion of running costs: the highest estimates for health costs and potential accidents are 5€/MWh and 4€/MWh respectively, and some accident estimates fall to only 0.3€/MWh when past records are adjusted for improvements in safety standards, though these figures vary significantly because the number of reactor accidents on record is so small.

In the right circumstances, nuclear power therefore remains one of the cheapest ways to produce electricity. Many LCOE (Levelised Cost of Energy) estimates, which are designed to factor in all costs over the lifetime of a unit to give a more accurate representation of the costs of different energy types (though they usually omit system costs), point to nuclear as cheaper than almost all renewables and most fossil fuels at low discount rates.
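The LCOE idea can be sketched in a few lines: discount every year's costs and every year's output to present value, and take the ratio. The inputs below are purely illustrative assumptions (a capital-heavy plant with a 60-year life), not figures from the studies discussed in this essay, but they show why the discount rate dominates the result for nuclear:

```python
# Minimal LCOE sketch: discounted lifetime costs divided by discounted lifetime
# output. All inputs are illustrative assumptions, not figures from the text.
def lcoe(capital, annual_costs, annual_mwh, lifetime, rate):
    """Levelised cost in $/MWh; capital is paid up front in year 0."""
    disc_costs = capital + sum(annual_costs / (1 + rate) ** t
                               for t in range(1, lifetime + 1))
    disc_output = sum(annual_mwh / (1 + rate) ** t
                      for t in range(1, lifetime + 1))
    return disc_costs / disc_output

# Hypothetical capital-intensive plant: $5bn upfront, $150m/yr running costs,
# 8 TWh/yr output, 60-year life. Only the discount rate varies.
for r in (0.03, 0.07, 0.10):
    print(f"discount rate {r:.0%}: ${lcoe(5e9, 1.5e8, 8e6, 60, r):.0f}/MWh")
```

Because most of the cost sits in year 0 while the output is spread over decades, raising the discount rate roughly doubles the levelised cost between 3% and 10% in this sketch, which is why the same plant can look cheap or uncompetitive depending on financing assumptions.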

LCOE figures taken from ‘Projected Costs of Generating Electricity 2015 Edition’ and system costs taken from ‘Nuclear Energy and Renewables (NEA, 2012)’ have been combined by the World Nuclear Association to give LCOE for four countries, comparing the costs of nuclear to other energy sources. A discount rate of 7% is used; the study applies a $30/t CO2 price on fossil fuel use and uses 2013 US$ values and exchange rates. It is important to bear in mind that LCOE estimates vary widely, as they assume different circumstances and are very difficult to calculate, but it is clear from the graph that nuclear power is more than viable: it is the cheapest source in three of the four countries and third cheapest in the fourth, behind onshore wind and gas.


Decision making during the Fukushima disaster

Introduction

On March 11, 2011, a tsunami struck the east coast of Japan and caused a disaster at the Fukushima Daiichi nuclear power plant. In the days following the natural disaster, many decisions were made to manage the crisis. This paper will examine those decisions. The Governmental Politics Model, designed by Allison and Zelikow (1999), will be adopted to analyse the events. The research question of this paper is therefore: to what extent does the Governmental Politics Model explain the decisions made during the Fukushima disaster?

First, this paper will lay the theoretical basis for the analysis: the Governmental Politics Model and all crucial concepts within it are discussed. A description of the Fukushima case then follows. Since the reader is expected to have general knowledge of the Fukushima nuclear disaster, the case description will be very brief. Together, the theoretical framework and case study lay the basis for the analysis, which will look into the decisions government and Tokyo Electric Power Company (TEPCO) officials made during the crisis.

Theory

Allison and Zelikow designed three models to explain the outcomes of bureaucracies and decision making in the aftermath of the Cuban Missile Crisis of 1962. The first was the Rational Actor Model, which focuses on the ‘logic of consequences’ and rests on the basic assumption of rational action by a unitary actor. The second was the Organizational Behavioural Model, which focuses on the ‘logic of appropriateness’ and assumes loosely connected allied organizations (Broekema, 2019).

The third model developed by Allison and Zelikow is the Governmental Politics Model (GPM), which reviews the importance of power in decision making. According to the GPM, decision making is not a matter of rational, unitary actors or organizational output but of a bargaining game: governments make decisions in other ways. The GPM identifies four aspects of this: the choices of one, the results of minor games, the results of central games, and foul-ups (Allison & Zelikow, 1999).

The following concepts are essential to the GPM. First, power in government is shared: different institutions have independent bases, and therefore power is divided among them. Second, persuasion is an important factor; the power to persuade differentiates power from authority. Third, bargaining according to the process is identified, meaning there is structure in the bargaining processes. Fourth, ‘power equals impact on outcome’ is mentioned in Essence of Decision: there is a difference between what can be done and what is actually done, and what is actually done depends on the power involved in the process. Lastly, intranational and international relations are of great importance to the GPM; these relations are intertwined and involve a vast set of international and domestic actors (Allison & Zelikow, 1999).

These five concepts are not the only ones relevant to the GPM. The model is inherently based on group decisions, in which Allison and Zelikow identify seven factors. The first is positive: group decisions, when certain requirements are met, produce better decisions. The second is the agency problem, which includes information asymmetry and the fact that actors compete over different goals. Third, it is important to identify the actors in the ‘game’, that is, to find out who participates in the bargaining process. Fourth, problems with different types of decisions are outlined. Fifth, framing issues and agenda setting are important factors in the GPM. Sixth, group decisions are not necessarily positive: they can easily lead to groupthink, the negative consequence that no other opinions are considered. Last, Allison and Zelikow outline the difficulties of collective action, which stem from the fact that the GPM considers not unitary actors but different organizations (Allison & Zelikow, 1999).

Besides the concepts mentioned above, the GPM comprises a concise paradigm, which is essential for the analysis of the Fukushima case. The paradigm consists of six main points. The first is that decisions are the result of politics; this is the core of the GPM and again stresses that decisions result from bargaining. Second, as said before, it is important to identify the players of the political ‘game’, their preferences and goals, and the impact they can have on the final decision. Once this is analysed, one can look at the actual game being played and determine the action channels and rules of the game. Third, the ‘dominant inference pattern’ again goes back to decisions being the result of bargaining, but makes clear that differences and misunderstandings must be taken into account. Fourth, Allison and Zelikow identify ‘general propositions’, a term covering all the concepts examined in the second paragraph of this theory section. Fifth, specific propositions are considered; these concern decisions on the use of force and military action. Last is the importance of evidence: when examining crisis decision making, documented timelines and, for example, minutes or other accounts are of great importance (Allison & Zelikow, 1999).

Case

In the definition of Prins and Van den Berg (2018), the Fukushima Daiichi disaster can be regarded as a safety case, because it was an unintentional event that caused harm to humans.

The crisis was initiated by an earthquake of magnitude 9.0 on the Richter scale, followed by a tsunami whose waves reached a height of 10 meters. The earthquake disconnected all external power lines, which are needed to cool the fuel rods. Countermeasures for this were in place, but the seawalls were unable to protect the plant from flooding, which rendered the backup diesel generators inadequate (Kushida, 2016).

Due to the lack of electricity, the nuclear fuel rods were not being cooled, and a ‘race for electricity’ began. Meanwhile, the situation inside the reactors was unknown; meltdowns had already occurred in reactors 1 and 2. Because of explosion risks, the decision was made to vent the reactors. Even so, hydrogen explosions materialized in reactors 1, 2, and 4, which in turn exposed the environment to radiation. To counter the dispersal of radiation, the essential decision was made to inject seawater into the reactors (Kushida, 2016).

Analysis

This analysis will look into the decision or decisions to inject seawater in the damaged reactors. First, a timeline of the decisions will be outlined to further build on the case study above. Then the events and decisions made will be paralleled to the GPM paradigm with the six main points as described in the theory.

The need to inject seawater arose after the first stages described in the case study had passed. According to Kushida, government officials and political leaders began voicing the necessity of injecting the water at 6:00 p.m. on March 12, the day after the earthquake. According to these officials it would have one very positive outcome, the cooling of the reactors and the fuel pool, but the use of seawater might have negative consequences too: it would ruin the reactors because of the salt, and it would produce vast amounts of contaminated water that would be hard to contain (Kushida, 2016). TEPCO experienced many difficulties cooling the reactors, as described in the case study, because of the lack of electricity. Nevertheless, they were averse to injecting seawater into the reactors, since this would ruin them. Still, after the first hydrogen explosion occurred in reactor 1, TEPCO plant workers started injecting seawater into that reactor (Holt et al., 2012). A day later, on March 13, seawater injection started in reactor 3, and on March 14 in reactor 2 (Holt et al., 2012).

When looking at the decisions made by the government or TEPCO plant workers, it is crucial to consider the chain of decision making by TEPCO leadership too. TEPCO leadership was initially not very positive about injecting seawater because of the disadvantages mentioned earlier: the plant would become unusable, and vast amounts of contaminated water would be created. The government therefore had to order TEPCO to start injecting seawater, which it did at 8:00 p.m. on March 12. However, Yoshida, the Fukushima Daiichi plant manager, had already started injecting seawater at 7:00 p.m. (Kushida, 2016).

As one can already see, different interests were at play, and the eventual decision can well be regarded as a political resultant. It is therefore crucial to examine the chain of decisions through the GPM paradigm. The first point of the paradigm concerns decisions as the result of bargaining, which is clearly visible in the decision to inject seawater: TEPCO leadership was initially not a proponent of the method, but after government officials ordered the injection they had no choice. Second, according to the theory, it is important to identify the players of the ‘game’ and their goals. Here the divisions are easily identifiable: the three players are the government, TEPCO leadership, and Yoshida, the plant manager. The government’s goal was to keep its citizens safe during the crisis; TEPCO wanted to preserve the reactor as long as possible; Yoshida wanted to contain the crisis. In that sense there were clearly conflicting goals.

To apply the GPM further to the decision to inject seawater, one can review the comprehensive ‘general propositions’, among which miscommunication is a very relevant factor. Miscommunication was certainly a big issue in this decision: as said before, Yoshida started injecting seawater before he received approval from his superiors. One might even wonder whether TEPCO leadership misunderstood the crisis, given that they hesitated to inject the seawater necessary to cool the reactors. It can be argued that this hesitation reflects a great deal of misunderstanding, since there was no plant left to be saved by the time the decision was made.

The fifth and sixth aspects of the GPM paradigm are less relevant to these decisions. The ‘specific propositions’ refer to the use of force, which was not an option in dealing with the Fukushima crisis; the Japanese Self-Defence Forces were dispatched to the plant, but only to provide electricity (Kushida, 2016). The sixth aspect, evidence, is less important in this case because scholars, researchers, and investigators have written at great length about what happened during the Fukushima crisis, so more than sufficient information is available.

The political bargaining game in the decision to inject seawater into the reactors is clearly visible. The different actors in the game had different goals; eventually the government won the game and the decision to inject seawater was made, although even before that the plant manager had already begun injecting seawater because the situation was too dire.

Conclusion

This essay reviewed decision making during the Fukushima Daiichi nuclear power plant disaster of March 11, 2011. More specifically, it scrutinized the decision to inject seawater into the reactors to cool them, using the Governmental Politics Model. The decision to inject seawater was the result of a bargaining game, played by different actors with different objectives.


Tackling misinformation on social media

As the world of social media expands, the rate of misinformation rises as more organisations hop on the bandwagon of using the digital realm to their advantage. Twitter, Facebook, Instagram, online forums, and other websites have become the pinnacle of news gathering for many individuals. Information is easily accessible to people from all walks of life, meaning they are becoming more engaged with real-life issues. Consumers absorb information more easily than ever before, which proves equally advantageous and disadvantageous, because there is an evident boundary between misleading and truthful information that is hard to cross without research on the topic. The accuracy of public information is highly questionable, which can easily lead to problems. Although source credibility is debated on every platform, the issue can be tackled through “expertise/competence (i.e., the degree to which a perceiver believes a sender to know the truth), trustworthiness (i.e., the degree to which a perceiver believes a sender will tell the truth as he or she knows it), and goodwill” (Cronkhite & Liska, 1976). This is why it has become critical for information to be accurate, ethical, and reliable for consumers, and verifying it is important regardless of the type of social media outlet. This essay will highlight why information needs to fit these criteria.

Putting out credible information prevents and reduces misconceptions, convoluted meanings, and inconsistent facts, making problems less likely to surface and saving time for both consumer and producer. The presence of risk raises the issue of how much of this information should be consumed by the public. The perception of source credibility becomes an important concept to analyse on social media, especially in a crisis, when rationality declines and people often accept the first thing they see. With the increasing amount of information available through newer channels, the release of information is devolving away from professionals and onto consumers (Haas & Wearden, 2003). Much of the public is unaware that this information is prone to bias and selective sharing, which can portray the actual facts very differently. One such example is the 2011 incident at Tokyo Electric Power Co.’s Fukushima No. 1 nuclear power plant, where the plant experienced triple meltdowns. A misconception circulates that food exported from Fukushima is too contaminated with radioactive substances to be fit to eat, but strict screening shows the contamination is below the government standard required to pose a threat. (arkansa.gov.au) Since then, products shipped from Fukushima have dropped considerably in price and have not recovered, forcing retailers into bankruptcy. (japantimes.co.jp) But thanks to social media and organisations releasing information to the public, Fukushima was able to raise funds and receive help from other countries, for example the U.S. sending $100,000 and China sending emergency supplies as assistance. (theguardian.com) This would have been impossible without the sharing of credible, reliable, and ethical information about the country, and without social media support spotlighting the incident.

Accurate, ethical, and reliable information opens a pathway for producers to secure a relationship with consumers, which can strengthen their businesses and expand their industries while gaining public support. The idea is a healthy relationship, without an air of uneasiness, in which monetary gains and social earnings increase, with social media playing a pivotal role in deciding which route the relationship takes. When this is done incorrectly, organisations can fail because they know little about the changed dynamics of consumers and their behaviour in the digital landscape. Consumer informedness means that consumers are well informed about available products or services with a precision that influences their willingness to make decisions, and this increase in informedness can change consumer behaviour. (uni-osnabrueck.de) In the absence of accurate, ethical, and reliable information, people and organisations will make terrible decisions without hesitation, leading to losses and steps backwards. As Saul Eslake (Saul-Eslake.com) says, “they will be unable to help or persuade others to make better decisions; and no-one will be able to ascertain whether the decisions made by particular individuals or organisations were the best ones that could have been made at the time”. Recently, the YouTuber Shane Dawson made a video that sparked controversy for the company Chuck E. Cheese over pizza slices that do not look like they belong to the same pizza, theorising that parts of the pizzas may have been reheated or recycled from other tables. Chuck E. Cheese responded in multiple media outlets to debunk the theory: “These claims are unequivocally false. We prep the dough daily for our made to order pizzas, which means they’re not always perfectly round, but they are still great tasting.” (https://twitter.com/chuckecheeses) It is worth noting that nothing other than pictures backed up the claim that the pizza was reused. The company also went so far as to release a video showing its pizza preparation, and ex-employees spoke up to share their side of the story and debunk the theory further. These quick responses prevented what could have been a small downturn in sales for Chuck E. Cheese. (washingtonpost.com) This event highlights how the release of information can work in favour of whoever uses it correctly, and how effective credible information can be, especially when it has the support of others online or in real life. The assumption or guess made when no information is available is called a ‘heuristic’, and it is associated with information that has no credibility.

Mass media have been a dominant source of information (Murch, 1971). They are generally assumed to provide credible, valuable, and ethical information open to the public (Heath, Liao, & Douglas, 1995). However, alongside traditional forms of media, newer media are increasingly available for information seeking and reporting. According to PNAS (www.pnas.org), “The emergence of social media as a key source of news content has created a new ecosystem for the spreading of misinformation. This is illustrated by the recent rise of an old form of misinformation: blatantly false news stories that are presented as if they are legitimate. So-called “fake news” rose to prominence as a major issue during the 2016 US presidential election and continues to draw significant attention.” This affects how we as social beings perceive and analyse the information we see online compared to real life. Failing to distinguish real stories from false ones does more than blunt any intervention’s effectiveness; it increases belief in false content, leading to biased and misleading material that fools the audience. One such incident is Michael Jackson’s death in June 2009, when he died from acute propofol and benzodiazepine intoxication administered by his doctor, Dr. Murray. (nytimes.com) Much of the public concluded that Michael Jackson had been murdered deliberately, but the court convicted Dr. Murray of involuntary manslaughter, as the doctor maintained that Jackson had begged him for more. That fact was overlooked by the general public due to bias, underlining how information is selectively picked up by the public, and how not all information is revealed, in order to sway the audience. A study conducted online by Jason and his team (JCMC [CQU]) revealed that Facebook users tended to believe their friends almost instantly, even without a link or proper citation to a website to back up the claim: “Using a person who has frequent social media interactions with the participant was intended to increase the external validity of the manipulation.” In other words, whether online information is taken as truth is left to the perception of the viewer, supporting the idea that information online is not fully credible unless it comes straight from the source, and underscoring the importance of releasing credible information.

Information has the power to inform, explain and expand on topics and concepts. But it also has the power to create inaccuracies and confusion, which harms the public and damages the reputation of companies. The goal is to move forward, not backwards. Many companies have gotten themselves into disputes over incorrect information that could easily have been avoided by releasing accurate, ethical and reliable information from the beginning. False information can start disputes; true information can provide resolution. The public has become less attentive to mainstream news altogether, which raises the problem of what can be trusted. Companies and organisations need their information to be as accurate and reliable as possible to counter and reduce this issue. Increased negativity and incivility exacerbate the media’s credibility problem: “People of all political persuasions are growing more dissatisfied with the news, as levels of media trust decline.” (JCMC [CQU]) In 2010, Dannon released an online statement and false advertising claiming that its Activia yogurt had “special bacterial ingredients.” A consumer named Trish Wiener lodged a complaint against Dannon. The yogurts were being marketed as “clinically” and “scientifically” proven to boost the immune system and help regulate digestion, but the judge found these claims unproven, along with the same claims on many other products in the company’s line. “This landed the company a $45 million class action settlement.” (businessinsider.com) It did not help that Dannon’s prices were inflated compared to other yogurts on the market: “The lawsuit claims Dannon has spent ‘far more than $100 million’ to convey deceptive messages to U.S. consumers while charging 30 percent more than other yogurt products.” (reuters.com) This highlights how inaccurate information can cost millions of dollars to settle and resolve. However, it also shows how readily the public can hold irresponsible producers to account for their actions and give leeway to justice.


Socio-political significance of Turkey’s emergent neo-Ottoman cultural phenomenon

Over the last decade, Turkey’s cultural sphere has witnessed a wave of Ottomania—a term describing the recent cultural fervor for everything Ottoman. Although this neo-Ottoman cultural phenomenon is not entirely new, having had a previous cycle in the 1980s and 1990s during the heyday of Turkey’s political Islam, it now has a rather novel characteristic and distinct pattern of operation. This revived Ottoman craze is discernible in what I call the neo-Ottoman cultural ensemble—a growing array of Ottoman-themed cultural productions and sites that evoke Turkey’s Ottoman-Islamic cultural heritage. For example, the celebration of the 1453 Istanbul conquest no longer takes place merely as an annual public commemoration by the Islamists,[1] but has been widely promulgated, reproduced, and consumed in various forms of popular culture, such as the Panorama 1453 History Museum; a fun ride called the Conqueror’s Dream (Fatih’in Rüyası) at the Vialand theme park; the highly publicized, high-grossing blockbuster The Conquest 1453 (Fetih 1453); and the primetime television costume drama The Conqueror (Fatih). It is the “banal,” or “mundane,” ways in which society itself, rather than the government or state institutions, practices this everyday Ottomania that distinguish this emergent form of neo-Ottomanism from its earlier phases.[2]

This is the context in which the concept of neo-Ottomanism has acquired its cultural dimension and analytical currency for comprehending the proliferating neo-Ottoman cultural phenomenon. However, when the concept is employed in contemporary cultural debates, it generally follows two trajectories that are common in the literature of Turkish domestic and foreign politics. These trajectories conceptualize neo-Ottomanism as an Islamist political ideology and/or a doctrine of Turkey’s foreign policy in the post-Cold War era. This essay argues that these two conventional conceptions tend to overlook the complexity and hybridity of Turkey’s latest phase of neo-Ottomanism. As a result, they tend to understand the emergent neo-Ottoman cultural ensemble as merely a representational apparatus of the neoconservative Justice and Development Party’s (AKP; Adalet ve Kalkınma Partisi) ideology and diplomatic strategy.

This essay hence aims to reassess the analytical concept of neo-Ottomanism and the emergent neo-Ottoman cultural ensemble by undertaking three tasks. First, through a brief critique of the concept of neo-Ottomanism, I will discuss its common trajectories and limitations for comprehending the latest phase of the neo-Ottoman cultural phenomenon. My second task is to propose a conceptual move from neo-Ottomanism to Ottomentality by incorporating the Foucauldian perspective of governmentality. Ottomentality is an alternative concept that I deploy here to underscore the overlapping relationship between neoliberal and neo-Ottoman rationalities in the AKP’s government of culture and diversity. I contend that neoliberalism and neo-Ottomanism are inseparable governing rationalities of the AKP and that their convergence has engendered new modes of governing the cultural field as well as regulating inter-ethnic and inter-religious relations in Turkey. Finally, I will reassess the neo-Ottoman cultural ensemble through the analytical lens of Ottomentality. I contend that the convergence of neoliberal and neo-Ottoman rationalities has significantly transformed the relationships of state, culture, and the social. As the cases of the television historical drama Magnificent Century (Muhteşem Yüzyıl) and the film The Conquest 1453 (Fetih 1453) shall illustrate, the neo-Ottoman cultural ensemble plays a significant role as a governing technique that constitutes a new regime of truth based on market mentality and religious truth. It also produces a new subject of citizenry, who is responsible for enacting its right to freedom through participation in the culture market, complying with religious norms and traditional values, and maintaining a difference-blind and discriminatory model of multiculturalism.

A critique of neo-Ottomanism as an analytical concept

Although the concept of neo-Ottomanism has been commonly used in Turkish Studies, it has become a loose term referring to anything associated with Islamist political ideology, nostalgia for the Ottoman past, and an imperialist ambition to reassert Turkey’s economic and political influence within the region and beyond. Some scholars have recently indicated that the concept of neo-Ottomanism is running out of steam, as it lacks meaningful definition and explanatory power in studies of Turkish politics and foreign policy.[3] The concept’s ambiguity and weak analytical and explanatory value are mainly due to divergent, competing interpretations and a lack of critical evaluation within the literature.[4] Nonetheless, despite the concept being equivocally defined, it is most commonly understood along two identifiable trajectories. First, it is conceptualized as an Islamist ideology, responding to the secularist notions of modernity and nationhood and aiming to reconstruct Turkish identity by evoking Ottoman-Islamic heritage as an essential component of Turkish culture. Although neo-Ottomanism was initially formulated by a collaborative group of secular, liberal, and conservative intellectuals and political actors in the 1980s, it is closely linked to the consolidated socio-economic and political power of the conservative middle class. This trajectory considers neo-Ottomanism primarily a form of identity politics and a result of political struggle in opposition to the republic’s founding ideology of Kemalism. Second, it is understood as an established foreign policy framework reflecting the AKP government’s renewed diplomatic strategy in the Balkans, Central Asia, and the Middle East, wherein Turkey plays an active role.
This trajectory regards neo-Ottomanism as a political doctrine (often referring to Ahmet Davutoglu’s Strategic Depth serving as the guidebook for Turkey’s diplomatic strategy in the 21st century), which sees Turkey as a “legitimate heir of the Ottoman Empire”[5] and seeks to reaffirm Turkey’s position in the changing world order in the post-Cold War era.[6]

As a result of this lack of critical evaluation of the conventional conceptions of neo-Ottomanism, contemporary cultural analyses have largely followed the “ideology” and “foreign policy” trajectories as explanatory guidance when assessing the emergent neo-Ottoman cultural phenomenon. I contend that the neo-Ottoman cultural phenomenon is more complex than these two trajectories can explain. Analyses that adopt these two approaches tend to run a few risks. First, they tend to perceive neo-Ottomanism as a monolithic imposition upon society. They presume that this ideology, when inscribed onto domestic and foreign policies, somehow has a direct impact on how society renews its national interest and identity.[7] And they tend to understand the neo-Ottoman cultural ensemble as merely a representational device of the neo-Ottomanist ideology. For instance, Şeyda Barlas Bozkuş, in her analyses of the Miniatürk theme park and the 1453 Panorama History Museum, argues that these two sites represent the AKP’s “ideological emphasis on neo-Ottomanism” and “[create] a new class of citizens with a new relationship to Turkish-Ottoman national identity.”[8] Second, contemporary cultural debates tend to overlook the complex and hybrid nature of the latest phase of neo-Ottomanism, which rarely operates on its own, but more often relies on and converges with other political rationalities, projects, and programs. As this essay shall illustrate, when closely examined, the current configuration of neo-Ottomanism is more likely to reveal internal inconsistencies as well as a combination of multiple and intersecting political forces.

Moreover, as a consequence of the two risks mentioned above, contemporary cultural debates may have overlooked some symptomatic clues and hence underestimated the socio-political significance of the latest phase of neo-Ottomanism. A major symptomatic clue often missed in cultural debates on the subject is culture itself. Insufficient attention has been paid to the AKP’s rationale of reconceptualizing culture as an administrative matter—a matter that concerns how culture is to be perceived and managed, by what culture the social should be governed, and how individuals might govern themselves with culture. At the core of the AKP government’s politics of culture and neoliberal reform of the cultural field is the question of the social.[9] Its reform policies, projects, and programs are a means of constituting a social reality and directing social actions. When culture is aligned with neoliberal governing rationality, it redefines a new administrative culture and new rules and responsibilities for citizens in cultural practices. Culture has become not only a means to advance Turkey in global competition,[10] but also a technology for managing the diversifying culture resulting from the process of globalization. As Brian Silverstein notes, “[culture] is among other things and increasingly to be seen as a major target of administration and government in a liberalizing polity, and less a phenomenon in its own right.”[11] While many studies acknowledge the AKP government’s neoliberal reform of the cultural field, they tend to regard neo-Ottomanism as primarily an Islamist political agenda operating outside of the neoliberal reform. It is my conviction that neoliberalism and neo-Ottomanism are inseparable political processes and rationalities, which have merged and engendered new modalities of governing every aspect of cultural life in society, including minority cultural rights, freedom of expression, individuals’ lifestyles, and so on.
Hence, by overlooking the “centrality of culture”[12] in relation to the question of the social, contemporary cultural debates tend to oversimplify the emergent neo-Ottoman cultural ensemble as nothing more than an ideological machinery of the neoconservative elite.

From neo-Ottomanism to Ottomentality

In order to more adequately assess the socio-political significance of Turkey’s emergent neo-Ottoman cultural phenomenon, I propose a conceptual shift from neo-Ottomanism to Ottomentality. This shift involves not only rethinking neo-Ottomanism as a form of governmentality, but also thinking of neoliberal and neo-Ottoman rationalities in collaborative terms. Neo-Ottomanism is understood here as Turkey’s current form of neoconservatism, a prevalent political rationality whose governmental practices are not solely based on Islamic values, but also draw from and produce a new political culture that considers Ottoman-Islamic toleration and pluralism the foundation of modern liberal multiculturalism in Turkey. Neoliberalism, in the same vein, far from being a totalizing concept describing an established set of political ideologies or economic policies, is conceived here as a historically and locally specific form of governmentality that must be analyzed by taking into account the multiple political forces that gave it its unique shape in Turkey.[13] My claim is that when these two rationalities merge in the cultural domain, they engender a new art of government, which I call the government of culture and diversity.

This approach is therefore less concerned with a particular political ideology or the question of “how to govern,” and more with the “different styles of thought, their conditions of formation, the principles and knowledges that they borrow from and generate, the practices they consist of, how they are carried out, their contestations and alliances with other arts of governing.”[14] In light of this view, and for practical purposes, Ottomentality is an alternative concept that I attempt to develop here to avoid the ambiguous meanings and analytical limitations of neo-Ottomanism. The concept underscores the convergence of neoliberal and neo-Ottoman rationalities, as well as the interrelated discourses, projects, policies, and strategies developed around them for regulating cultural activities and directing inter-ethnic and inter-religious relations in Turkey. It pays attention to the techniques and practices that have significant effects on the relationships of state, culture, and the social. It is concerned with the production of knowledge, or truth, on the basis of which a new social reality of ‘freedom,’ ‘tolerance,’ and ‘multiculturalism’ in Turkey is constituted. Furthermore, it helps to identify the type of political subject whose demand for cultural rights and participatory democracy is reduced to market terms and a narrow understanding of multiculturalism, and whose criticism of this new social reality is increasingly subjected to judicial exclusion and discipline.

I shall note that Ottomentality is an authoritarian type of governmentality—a specific type of illiberal rule operating within the structure of modern liberal democracy. As Mitchell Dean notes, although the literature on governmentality has focused mainly on liberal democratic rule practiced through individual subjects’ active role (as citizens) and exercise of freedom, there are also “non-liberal and explicitly authoritarian types of rule that seek to operate through obedient rather than free subjects, or, at a minimum, endeavor to neutralize any opposition to authority.”[15] He suggests that a useful way to approach this type of governmentality is to identify the practices and rationalities which “divide” or “exclude” those who are subjected to be governed.[16] According to Foucault’s notion of “dividing practices,” “[t]he subject is either divided inside himself or divided from others. This process objectivizes him. Examples are the mad and the sane, the sick and the healthy, the criminals and the ‘good boys’.”[17] Turkey’s growing neo-Ottoman cultural ensemble can be considered such an exclusionary practice, seeking to regulate the diversifying culture by dividing subjects into categorical, if not polarized, segments based on their cultural differences. For instance, mundane practices such as going to museums and watching television shows may produce subject positions which divide subjects into such categories as the pious and the secular, the moral and the degenerate, and the Sunni-Muslim-Turk and the ethno-religious minorities.

Reassessing the neo-Ottoman cultural ensemble through the lens of Ottomentality

In this final section, I propose a reassessment of the emergent neo-Ottoman cultural ensemble by looking beyond the conventional conceptions of neo-Ottomanism as “ideology” and “foreign policy.” Using the analytical concept of Ottomentality, I aim to examine the state’s changing role and governing rationality in culture, the discursive processes of knowledge production for rationalizing certain practices of government, and the techniques of constituting a particular type of citizenry who acts upon themselves in accordance with the established knowledge/truth. Nonetheless, before proceeding to an analysis of the government of culture and diversity, a brief overview of the larger context in which the AKP’s Ottomentality took shape would be helpful.

Context

Since the establishment of the Turkish republic, the state has played a major role in maintaining a homogeneous national identity by suppressing public claims of ethnic and religious difference through militaristic intervention. The state’s strict control of cultural life in society, in particular its assertive secularist approach to religion and its ethnic conception of Turkish citizenship, resulted in unsettling tensions between ethno-religious groups in the 1980s and 1990s, i.e. the Kurdish question and the 1997 “soft coup.” These social tensions indicated the limits of state-led modernization and secularization projects in accommodating the ethnic and pious segments of society.[18] This was also a time when Turkey began to witness the declining authority of the founding ideology of Kemalism as an effect of economic and political liberalization. When the AKP came to power in 2002, one of the most urgent political questions was thus “the limits of what the state can—or ought for its own good—reasonably demand of citizens […] to continue to make everyone internalize an ethnic conception of Turkishness.”[19] At this political juncture, it was clear that a more inclusive socio-political framework was necessary in order to mitigate the growing tension resulting from identity claims.

Apart from domestic affairs, a few vital transnational initiatives also played a part in the AKP’s formulation of neoliberal and neo-Ottoman rationalities. First, in the aftermath of the attacks in New York on September 11 (9/11) in 2001, the Middle East and Muslim communities around the world became the target of intensified political debates. In the midst of anti-Muslim and anti-terror propaganda, Turkey felt a need to rebuild its image by aligning with the United Nations’ (UN) resolution on “The Alliance of Civilizations,” which called for cross-cultural dialogue between countries through cultural exchange programs and transnational business partnerships.[20] Turkey took on the leading role in this resolution and launched extensive developmental plans designated to rebuild Turkey’s image as a civilization of tolerance and peaceful co-existence.[21] The Ottoman-Islamic civilization, known for its legacy of cosmopolitanism and ethno-religious toleration, hence became an ideal trademark of Turkey for the project of an “alliance of civilizations.”[22]

Second, Turkey’s accelerated EU negotiation between the late 1990s and mid 2000s provided a timely opportunity for the newly elected AKP government to launch “liberal-democratic reform,”[23] which would significantly transform the way culture was to be administered. Culture, among the prioritized areas of administrative reform, was now reorganized to comply with the EU integration plan. By incorporating the EU’s aspect of culture as a way of enhancing “freedom, democracy, solidarity and respect for diversity,”[24] the AKP-led national cultural policy would shift away from the state-centered, protectionist model of the Kemalist establishment towards one that highlights “principles of mutual tolerance, cultural variety, equality and opposition to discrimination.”[25]

Finally, the selection of Istanbul as the 2010 European Capital of Culture (ECoC) is particularly worth noting, as this event enabled local authorities to put the neoliberal and neo-Ottoman governing rationalities into practice through extensive urban projects and branding techniques. By sponsoring and showcasing different European cities each year, the ECoC program aims at promoting a multicultural European identity beyond national borders.[26] The 2010 Istanbul ECoC was an important opportunity for Turkey not only to promote its EU candidacy, but also for the local governments to pursue urban developmental projects.[27] Some of the newly formed Ottoman-themed cultural sites and productions were part of the ECoC projects for branding Istanbul as a cultural hub where East and West meet. It is in this context that the interplay between the neoliberal and neo-Ottoman rationalities can be vividly observed in the form of the neo-Ottoman cultural ensemble.

Strong state, culture, and the social

Given the contextual background mentioned above, one could argue that the AKP’s neoliberal and neo-Ottoman rationalities arose as critiques of the republican state’s excessive intervention in society’s cultural life. The transnational initiatives that required Turkey to adopt a liberal democratic paradigm thus gave way to the formulation and convergence of these two forms of governmentality, which would significantly challenge the state-centered approach to culture as a means of governing the social. However, it would be inaccurate to claim that the AKP’s prioritization of private initiatives in cultural governance has effectively decentralized or democratized the cultural domain away from the state’s authoritarian intervention and narrow definition of Turkish culture. Deregulation of culture entails sophisticated legislation concerning the roles of the state and civil society in cultural governance. Hence, for instance, the law on the promotion of culture, the law on media censorship, and the new national cultural policy prepared by the Ministry of Culture and Tourism explicitly indicate not only a new vision of national culture, but also the roles of the state and civil society in promoting and preserving national culture. It shall be noted that culture as a governing technology is not an invention of the AKP government. Culture has always been a major area of administrative concern throughout the history of the Turkish republic. As Murat Katoğlu illustrates, during the early republic, culture was conceptualized as part of a state-led “public service” aimed at informing and educating the citizens.[28] Arts and culture were essential means for modernizing the nation; for instance, the state-run cultural institutions, i.e. state ballet, theater, museums, radio and television, “[indicate] the type of modern life style that the government was trying to advocate.”[29] Nonetheless, the role of the state, the status of culture, and the techniques of managing it have been transformed as Turkey undergoes neoliberal reform. In addition, Aksoy suggests that what distinguishes the AKP’s neoliberal mode of cultural governance from the early republic’s modernization project is that market mentality has become the administrative norm.[30] Culture is now reconceptualized as an asset for advancing Turkey in global competition and a site for exercising individual freedom, rather than a mechanism of social engineering. Turkey’s heritage of Ottoman-Islamic civilization in particular is utilized as a nation-branding technique to enhance Turkey’s economy, rather than treated as a corrupt past to be forgotten. To achieve the aim of efficient, hence good, governance, the AKP’s cultural governance has relied heavily on privatization as a means to limit state intervention. Thus, privatization has not only transformed culture into an integral part of the free market, but also redefined the state’s role as a facilitator of the culture market, rather than the main provider of cultural services to the public.

The state’s withdrawal from cultural services and its prioritization of civil society initiatives for preserving and promoting Turkish “cultural values and traditional arts”[31] has an immediate effect: the declining authority of the Kemalist cultural establishment. Since many of the previously state-run cultural institutions are now managed with a corporate mentality, they begin to lose their status as state-centered institutions and the significance they once had in defining and maintaining a homogeneous Turkish culture. Instead, these institutions, together with other newly formed cultural sites and productions run by private initiatives, are converted into a marketplace, or into cultural commodities, in competition with each other. Hence, privatization of culture leads to the following consequences. First, it weakens and hollows out the 20th-century notion of the modern secular nation state, which sets a clear boundary confining religion within the private sphere. Second, it gives way to the neoconservative force, which “models state authority on [religious] authority, a pastoral relation of the state to its flock, and a concern with unified rather than balanced or checked state power.”[32] Finally, it converts social issues that result from political actions into market terms and a sheer matter of culture, which is now left to personal choice.[33] As a result, far from producing a declining state, Ottomentality has constituted a strong state. In particular, neoliberal governance of the cultural field has enabled the ruling neoconservative government to mobilize a new set of political truths and norms for directing inter-ethnic and inter-religious relations in society.

New regime of truth

Central to Foucault’s notion of governmentality is “truth games”[34]—referring to the activities of knowledge production through which particular thoughts are rendered truthful and practices of government are made reasonable.[35] What Foucault calls the “regime of truth” is not concerned with facticity, but with a coherent set of practices that connect different discourses and make sense of the political rationalities marking the “division between true and false.”[36] The neo-Ottoman cultural ensemble is a compelling case through which the AKP’s investment of thought, knowledge production, and truth telling can be observed. Two cases are particularly worth mentioning here as I work through the politics of truth in the AKP’s neoliberal governance of culture and neo-Ottoman management of diversity.

Between 2011 and 2014, the Turkish television historical drama Magnificent Century (Muhteşem Yüzyıl, hereafter Muhteşem), featuring the life of the Ottoman Sultan Süleyman, known for his legislative establishment in the 16th-century Ottoman Empire, attracted wide viewership in Turkey and abroad, especially in the Balkans and the Middle East. Although the show played a significant role in generating international interest in Turkey’s tourism, culinary culture, Ottoman-Islamic arts and history, etc. (which are the fundamental aims of the AKP-led national cultural policy of promoting Turkey through arts and culture, including media export),[37] it received harsh criticism from some Ottoman(ist) historians and a warning from the RTUK (Radio and Television Supreme Council, a key institution of media censorship and regulation in Turkey). The criticism included the show’s misrepresentation of the Sultan as a hedonist and its harm to the moral and traditional values of society. Oktay Saral, an AKP deputy of Istanbul at the time, petitioned the parliament for a law to ban the show. He said, “[The] law would […] show filmmakers [media practitioners] how to conduct their work in compliance with Turkish family structure and moral values without humiliating Turkish youth and children.”[38] Recep Tayyip Erdoğan (then Prime Minister) also stated that “[those] who toy with these [traditional] values would be taught a lesson within the premises of law.”[39] After his statement, the show was removed from the in-flight channels of national flag carrier Turkish Airlines.

Another popular media production, the 2012 blockbuster The Conquest 1453 (Fetih 1453, Fetih hereafter), which was acclaimed for its success at the domestic and international box office, also generated mixed receptions among Turkish and foreign audiences. Some critics in Turkey and European Christians criticized the film for its selective interpretation of the Ottoman conquest of Constantinople and its offensive portrayal of the (Byzantine) Christians. The Greek weekly To Proto Thema denounced the film as “conquest propaganda by the Turks” that “[failed] to show the mass killings of Greeks and the plunder of the land by the Turks.”[40] A Turkish critic also commented that the film portrays the “extreme patriotism” in Turkey “without any hint of […] tolerance sprinkled throughout [the film].”[41] Furthermore, a German Christian association campaigned to boycott the film. AKP officials, by contrast, praised the film for its genuine representation of the conquest. As Bülent Arınç (Deputy Prime Minister at the time) stated, “This is truly the best film ever made in the past years.”[42] He also responded to questions regarding the film’s historical accuracy: “This is a film, not a documentary. The film in general fairly represents all the events that occurred during the conquest as the way we know it.”[43]

When Muhteşem and Fetih are examined within the larger context in which the neo-Ottoman cultural ensemble is formed, the connections between particular types of knowledge and governmental practice become apparent. First, the cases of Muhteşem and Fetih reveal the saturation of market rationality as the basis for a new model of cultural governance. When culture is administered in market terms, it becomes a commodity for sale and promotion as well as an indicator by which the performance of cultural governance is measured. When Turkey’s culture, in particular its Ottoman-Islamic cultural heritage, is converted into an asset and national brand to advance the country in global competition, the reputation and capital it generates become indicators of Turkey’s economic development and progress. The overt emphasis on economic growth, according to Irving Kristol, is one of the distinctive features that differentiate the neoconservatives from their conservative predecessors. He suggests that, for the neoconservatives, economic growth is what gives “modern democracies their legitimacy and durability.”[44] In the Turkish context, the rising neoconservative power, which consisted of a group of Islamists and secular, liberal intellectuals and entrepreneurs (at least in the early years of the AKP’s rule), consistently focused on boosting Turkey’s economy. For them, economic development seems to have become the appropriate way of making “conservative politics suitable to governing a modern democracy.”[45] Hence, such high-profile cultural productions as Muhteşem and Fetih are valuable assets that serve the primary aim of the AKP-led cultural policy because they contribute to growth in the related areas of tourism and the culture industry by promoting Turkey at the international level. By the logic of market rationality, as long as culture generates productivity and profit, the government is doing a splendid job of governance.
In other words, when neoliberal and neoconservative forces converge at the cultural domain, both culture and good governance are reduced to and measured by economic growth, which has become a synonym for democracy “equated with the existence of formal rights, especially private property rights; with the market; and with voting,” rather than political autonomy.[46]

Second, the AKP officials’ praise of Fetih on the one hand and criticism of Muhteşem on the other demonstrate their assertion of the moral-religious authority of the state. As the notion of nation-state sovereignty has been weakened by the processes of economic liberalization and globalization, the boundary that separates religion and state has become blurred. As a result, religion becomes “de-privatized” and surges back into the public sphere.[47] This blurred boundary between religion and state has enabled the neoconservative AKP to establish links between religious authority and state authority as well as between religious truth and political truth.[48] These links are evident in the AKP officials’ various public statements declaring the government’s moral mission of sanitizing Turkish culture in accordance with Islamic and traditional values. For instance, Erdoğan once reacted to a secular opponent’s comment about his interference in politics with religious views: “we [AKP] will raise a generation that is conservative and democratic and embraces the values and historical principles of its nation.”[49] In this view, despite Muhteşem’s contribution to growth in the culture and tourism industries, it was subjected to censorship and legal action because its content did not comply with the governing authority’s moral mission. The controversy over Muhteşem illustrates the rise of a religion-based political truth in Turkey, which sees Islam as the main reference for directing society’s moral conduct and individual lifestyle. Hence, by rewarding desirable actions (i.e. with sponsorship law and tax incentives)[50] and punishing undesirable ones (i.e. through censorship, media bans, and jail terms for media practitioners’ misconduct), the AKP-led reform of the cultural field constitutes a new type of political culture and truth—one that is based on moral-religious views rather than rational reasoning.

Moreover, the AKP officials’ support for Fetih reveals their investment in a neo-Ottomanist knowledge, which regards the 1453 Ottoman conquest of Constantinople as the foundation of modern liberal multiculturalism in Turkey. This knowledge perceives Islam as the centripetal force for enhancing social cohesion by transcending differences among faith and ethnic groups. It rejects candid and critical interpretations of history and insists on a singular view of Ottoman-Islamic pluralism and a pragmatic understanding of the relationship between religion and state.[51] It does not require historical accuracy, since religious truth is cast as historical and political truth. For instance, a consistent, singular narrative of the conquest can be observed in such productions and sites as the Panorama 1453 History Museum, the television series Fatih, and the TRT children’s program Çınar. This narrative begins with Prophet Muhammad’s prophecy, which he received from the almighty Allah, that Constantinople would be conquered by a great Ottoman soldier. When history is narrated from a religious point of view, it becomes indisputable, as disputing it would imply a challenge to religious truth, hence to Allah’s will. Indeed, the neo-Ottomanist knowledge conceives of the conquest not only as an Ottoman victory in the past, but as an incontestable living truth in Turkey’s present. As Nevzat Bayhan, former general manager of Culture Inc. in association with the Istanbul Metropolitan Municipality (İBB Kültür A.Ş.), stated at the opening ceremony of Istanbul’s Panorama 1453 History Museum,

The conquest [of Istanbul] is not about taking over the city… but to make the city livable… and its populace happy. Today, Istanbul continues to present to the world as a place where Armenians, Syriacs, Kurds… Muslims, Jews, and Christians peacefully live together.[52]

Bayhan’s statement illustrates the significance of the 1453 conquest in the neo-Ottomanist knowledge: it marks the foundation of a culture of tolerance, diversity, and peaceful coexistence in Turkey. While the neo-Ottomanist knowledge may conveniently serve branding purposes in the post-9/11 and ECoC contexts, I maintain that it more significantly rationalizes the governmental practices reshaping cultural conduct and multicultural relations in Turkey. The knowledge also produces a political norm of indifference—one that is reluctant to recognize ethno-religious differences among the populace, uncritical of the limits of Islam-based toleration and multiculturalism, and, more seriously, indifferent to state-sanctioned discrimination and violence against the ethno-religious minorities.

Ottomentality and its subject

The AKP’s practices of governing culture and diversity constitute what Foucault calls the “technologies of the self—ways in which human beings come to understand and act upon themselves within certain regimes of authority and knowledge, and by means of certain techniques directed to self-improvement.”[53] The AKP’s neoliberal and neo-Ottoman rationalities share a similar aim, as both seek to produce a new ethical code of social conduct and transform Turkish society into a particular kind of society, one that is economically liberal and culturally conservative. They deploy different means to direct the governed in certain ways so as to achieve the desired outcome. According to Foucault, the neoliberal style of government is based on the premise that “individuals should conduct their lives as an enterprise [and] should become entrepreneurs of themselves.”[54] Central to this style of government is the production of freedom—referring to the practices that are employed to produce the necessary conditions for individuals to be free and take on the responsibility of caring for themselves. For instance, Nikolas Rose suggests that consumption, a form of governing technology, is often deployed to provide individuals with a variety of choices for exercising freedom and self-improvement. As such, subject citizens are now “active,” or “consumer,” citizens, who understand their relationships with others and conduct their lives based on market mentality.[55] Unlike republican citizens, whose rights, duties, and obligations are primarily bound to the state, citizens as consumers “[are] to enact [their] democratic obligations as a form of consumption”[56] in the private sphere of the market.

The AKP’s neoliberal governance of culture has hence invested in liberalizing the cultural field by transforming it into a marketplace, creating a condition wherein citizens can enact their right to freedom and act upon themselves as a form of investment. The proliferation of the neo-Ottoman cultural ensemble in this regard can be understood as a new technology of the self, as it creates a whole new field for consumer citizens to exercise their freedom of choice (of identity, taste, and lifestyle) by providing them with a variety of trendy Ottoman-themed cultural products, ranging from fashion to entertainment. This ensemble also constitutes a whole new imagery of the Ottoman legacy with which consumer citizens may identify. Therefore, through participation in the cultural field, as artists, media practitioners, intellectuals, sponsors, or consumers, citizens are encouraged to think of themselves as free agents and of their actions as a means of acquiring the cultural capital necessary to become cultivated and competent actors in the competitive market. This new technology of the self has also transformed the republican notion of Turkish citizenship into one that is activated upon individuals’ freedom of choice through cultural consumption in the marketplace.

Furthermore, as market mechanisms enhance the promulgation of moral-religious values, consumer citizens are also offered a choice of identity as virtuous citizens, who should conduct their lives and their relationships with others based on Islamic traditions and values. Again, the public debate over the portrayal of the revered Sultan Süleyman as a hedonist in Muhteşem and the legal actions against the television producer are exemplary of the disciplinary techniques for shaping individuals’ behavior in line with conservative values. While consumer citizens exercise their freedom through cultural consumption, they are also reminded of their responsibility to preserve traditional moral values, family structure, and gender relations. Those who deviate from the norm are subjected to public condemnation and punishment.

Finally, as the neo-Ottomanist cultural ensemble reproduces and mediates a neo-Ottomanist knowledge in such commodities as the film Fetih and the Panorama 1453 History Museum, consumer citizens are exposed to a new set of symbolic meanings of Ottoman-Islamic toleration, pluralism, and peaceful coexistence, albeit through a view of the Ottoman past fixated on its magnificence rather than its monstrosity.[57] This knowledge sets the ethical code for private citizens to think of themselves in relation to other ethno-religious groups based on a hierarchical social order, which subordinates minorities to the rule of Sunni Islamic government. When this imagery of magnificence serves as the central component in nation branding, such as aligning Turkey with the civilization of peace and coexistence in the post-9/11 and ECoC contexts, it encourages citizens to take pride in and identify with their Ottoman-Islamic heritage. As such, Turkey’s nation branding can perhaps also be considered a novel technology of the self, as it requires citizens, be they business sectors, historians, or filmmakers, to take an active role in building an image of a tolerant and multicultural Turkey through arts and culture. It is in this regard that I consider the neo-Ottoman rationality a form of “indirect rule of diversity”[58] as it produces a citizenry that actively participates in the reproduction of neo-Ottomanist historiography and remains uncritical of the “dark legacy of the Ottoman past.”[59] Consequently, Ottomentality has produced a type of subject that is constantly subjected to dividing techniques “that will divide populations and exclude certain categories from the status of the autonomous and rational person.”[60]


The Happiest Days of Your Life

The short story ‘The Happiest Days of Your Life’, written by Penelope Lively in 1978, tells the story of a young boy who quietly suffers the consequences of his parents’ neglect.

The story is about a young boy (probably 7-10 years old) named Charles. It starts out with Charles and his parents driving to a preparatory school in Sussex (Southeast England) to decide whether they should send Charles there next term. After arriving at the school, the boy and his parents enter and meet the headmaster’s wife, who welcomes them. As time passes, Charles becomes more anxious while the adults talk about the qualities of the school. When the headmaster arrives, Margaret takes Charles to The Lower Third. She introduces Charles to his potential classmates and leaves him alone with them. However, Charles finds himself trapped by the other students, who bombard him with questions. While he tries to concentrate, growing more and more absent-minded, one student suddenly shouts an intimidating sentence: ‘Next term we’ll mash you, we always mash new boys’. On the way home Charles’s parents discuss the outcome of the visit, which is very positive, but Charles is completely silent and thinks of only one thing: ‘we’ll mash you’.

The story takes place in southern England in the present day. When the family is driving to the preparatory school, we are told that they are passing through the Sussex landscape. The school must therefore be located in the surrounding area.

In the beginning, our characters are in the car: a family of three consisting of a father, a mother and their son Charles. Charles is sitting in the car with an unopened box of chocolates next to him, which can symbolize his insecurity and his dissatisfaction with the whole situation. The family is presumably wealthy, since they can afford to send Charles to a preparatory school. We are also told that they are from Finchley (a middle-class area), but the mother tells the headmaster’s wife that they are from Hampstead, one of the wealthiest areas in London.

During the story, the parents seem more interested in what the preparatory school has to offer them than in what it can offer Charles. We as readers are left with the notion that when they’re walking around the school reviewing the facilities, they don’t actually care much about what Charles thinks.

The story has a 3rd-person omniscient narrator, but it is used in an unusual way. The focus changes through the story, so we follow one character at a time and see that character’s emotions before the focus shifts to another person. To start with, the focus is on Charles. We do not hear much about his thoughts, but through the descriptions of the things surrounding him we get an impression of the emotions he feels inside.

Our main character Charles is, as said before, approximately 7-10 years old, as he is about to start preparatory school. He is very shy, and we get the impression that he rarely speaks; in fact, he does not say a single word in the story. In addition to his shyness, Charles also lacks courage, which is evident at the end of the story, where he simply doesn’t dare to tell his parents what he really feels about the school and how he definitely doesn’t want to attend it after having heard the frightening words from one of the students: ‘Next term we’ll mash you’.

ARBOVIRAL ENCEPHALITIS

Introduction

Encephalopathy is a term used to describe general brain dysfunction due to edema and neural degeneration. It can result from alcoholic hepatitis, an alteration in blood chemistry (particularly electrolyte imbalance), or an increase in toxic chemicals that affect the central nervous system (CNS). A category of viruses, the arboviruses, can also cause encephalitis.

The term arbovirus is short for arthropod-borne virus [2]. There are three families of arboviruses: the Bunyaviridae, Flaviviridae, and Togaviridae. These viruses require amplifying hosts, such as birds, and dead-end hosts, such as horses and humans. As a virus amplifies in birds, it is incidentally transmitted to horses and humans through arthropods, such as mosquitoes and ticks. In the arthropods, the virus lives in a symbiotic relationship. Upon transmission to horses and humans, however, it provokes immune responses it cannot cope with and causes encephalitis [3].

In this section, we will be focusing on the complexity of arboviral encephalitis in humans in terms of its pathophysiology, clinical manifestations, and medical management.

Pathophysiology

Initial arboviral infection in humans results in amplification of the viral genome, an RNA strand, in either the skin or the muscle. This amplification gives rise to primary viremia, the presence of viral particles in the blood [4].

Different viruses have different mechanisms for invading the CNS. Some do so by attacking the lymphatic system, after which macrophages and monocytes initiate an immune response. Following this immune response, secondary viremia arises [4]. The virus then travels from the lymph and thoracic ducts into the blood, and it may travel to the bone marrow for further replication. Following replication, it continues circulating in the blood before reaching the CNS [2].

Other viruses enter the CNS through the olfactory pathway, the cerebral epithelial cells surrounding the blood-brain barrier (BBB), or by budding off the parenchymal cells at the BBB [2].

The BBB is responsible for protecting the CNS from foreign objects, but arboviral particles are able to infiltrate the barrier. The microglial cells of the CNS, which are responsible for recognizing the viral penetration, mediate an immune response through both pro- and anti-inflammatory cytokines. However, this immune response inflames the endothelial cells, causing vascular congestion that can eventually lead to hemorrhage, demyelination of the neurons, and apoptosis of the glial cells [4].

Clinical Manifestations

The manifestations of arboviral infection vary from one individual to the next because of age, environment, the type of arthropod that delivers the infection, and the individual’s genes and immune status. The early symptoms of arboviral infection include those common to many other diseases: fever and headache. Severe manifestations include muscle pain, muscular incoordination, coma [2], and seizures [4].

Clinical tests can be performed to confirm the presence of an arbovirus. Such tests include blood samples, cerebrospinal fluid analysis, and MRI and CT scans of the brain and spine [4].

Advances in Medical Management

The immune system is responsible for fighting and destroying foreign objects; however, because there are different classifications of the viruses, a single immune response cannot provide protection against all arboviral strains. Furthermore, the number of available vaccines is very limited, and they are confined to the United States military and to laboratories because of the infection risk associated with testing and the high production cost [4].

Different vaccination approaches are being analyzed to determine which seems most effective in suppressing the arboviruses. These approaches include subunit vaccines, chimeric recombinants, and gene-deleted live mutants [3]. Subunit vaccines do not include the entire virus but only the antigen. Live chimeric recombinants are a mixture of the genomes of two viruses that may or may not display the biological properties of both. The gene-deleted live mutants involve the deletion of the E2 protein, an essential protein of some arboviruses that allows the virus to attach to endosomes and be engulfed into human cells [5]. There are advantages and disadvantages to the various vaccination techniques, but regardless, these vaccines need to be tested on animal models, such as horses with encephalitis, and provide promising results before human use to treat arboviral encephalitis [4].

If proper measures are taken, it is possible to control outbreaks. Such measures include spraying insecticide, using insect repellent, and remaining indoors during the hot seasons when mosquitoes are likely to be outdoors [2].

Conclusion

Arboviruses are easily transmitted from amplifying hosts to dead-end hosts through arthropods, and they are capable of disrupting the blood-brain barrier and infiltrating the central nervous system. Once inside, they can cause hemorrhage, demyelination, and apoptosis of the glial cells, eventually leading to encephalitis.

Because arboviruses exist with various RNA genomes, it is difficult to vaccinate an individual against all species, but different vaccines are being studied and a suitable animal model is being sought before the vaccines can be put into human practice.


Post-modernism

It is generally believed that the current society/world is a postmodern one, one without universal moral or religious laws. Occurrences are determined by the cultural contexts of a distinct community, place or time. Individuals deal with their religious urges through the formation of their own spiritual world (they do this by choosing sections of various religions that they approve of). In this sense, their own theology becomes equal to the theology of a priest. This way of thinking validates the religious drive of individuals, but it also cripples the strength of religions that profess to be concerned with truths presented from elsewhere, and which present themselves as objective realities. The two non-religious ladies’ responses illustrate this, as they claim to have no religion and yet believe in the Christian God.

Post-modernism is also quite secular, and due to globalization (the process by which regions become united through a global network of transportation, trade and communication), this way of thinking has grown more widespread. The United States of America has driven this process through attempts at getting rid of national barriers and advocating the free movement of services, merchandise and capital (institutions such as the International Monetary Fund and World Trade Organization aid in achieving this). Globalization has benefited Caribbean consumers, as commodities usually found in more industrialized countries are now available to them, allowing them to raise their living standards.

Mankind’s sex drive is essential; many religions consider it to be a vital component of humanity’s divine design. However, this role is meant to be limited to ‘the marriage relationship’ (www.christiancourier.com). Roughly 84% of the Caribbean population is Christian (whereas only 0.07% is Muslim).

When the Europeans arrived in the Caribbean, they brought with them a myriad of religions. The British were Protestants, whereas the French, Dutch, Irish and Spanish were Roman Catholics.

Other faiths emerged within the region due to indentured servitude and slavery (for example: Hinduism and Islam). The African slaves had their own spiritual and religious practices, which were fused with customs of the Catholic faith, forming new religions (such as Vodou/Voodoo and Santeria).

Jamaica’s Centre for the Investigation of Sexual Offences & Child Abuse (CISOCA) expressed concern over an increase in instances of underage sex between young children, with 32 cases being reported in the second week of July 2012.

The progression of scientific knowledge led to the discovery of numerous ‘laws’ which described how nature operated. Thus, the idea of ‘God’, viewed in various religions as the controller of nature and ruler of humanity, faded into the background, no longer required to explain reality.

In 1954, two doctors discovered a drug which could be incorporated into a pill to prevent conception, and a mere twelve (12) years later, this pill was being taken by millions of women. The likelihood of sexual promiscuity has increased due to the effectiveness of birth-control methods.

The invention of the television in the late 1940s brought sexual imagery straight into the home. At present, sexual language, depraved situations, and nudity can be easily accessed through both television and the internet.

The children of today are more sexually aware than at any point previously, due in part to the abundance of products such as make-up for prepubescents and sexually accented clothing. Furthermore, children have more free rein than ever. Persons nowadays are presented with vehicles as soon as they qualify for a driver’s licence. They are allowed to date at younger ages, and have few limits on what they can do and where they can go.

The rates of infidelity among men and women under the age of 45 are converging. Sociologists and psychologists have associated this development with huge changes in opportunity, especially with women moving from the home into the workplace. It has been revealed through studies that the majority of individuals engaged in affairs encountered their lovers at work. Women’s increasing financial power also causes them to be less wary of risk, since they no longer need to depend on their spouses.

Education increases one’s predisposition to infidelity. It possibly serves to indicate more liberal stances toward sexuality and permissive attitudes toward adultery. This could serve as an explanation as to why the majority of the interviewees seemed indifferent to the idea of fornication.

Attending religious services generally discourages infidelity, seemingly because it implants within people ‘a social network that promotes accountability.’ (www.psychologytoday.com) However, it only aids those already content with their relationships. If the primary relationship is not exemplary, the dissatisfaction will negate religious values. Rates of infidelity are not affected by one’s religious denomination.

The albumen gland

The albumen gland has an almost identical histological appearance across the families of the Basommatophora. It is a white, opaque, ovoid mass lying dorsal and just posterior to the pericardial cavity, and it consists of a large number of tubules separated from one another by a thin layer of connective tissue (Fig. 1).

These tubules are spherical to oval in shape. The wall of each tubule consists of large cuboidal to columnar cells; each cell contains a large basal nucleus and is glandular in nature. The gland’s colour varies from cream to yellow (in breeding condition), and the size of the organ is similarly dependent on the sexual state of the individual; the albumen gland is considerably smaller in the non-breeding season.

The gland is composed of a great number of secretory follicles, which are circular when viewed in cross-section, and the whole gland is enclosed within a fine outer bounding membrane. Each follicle possesses a minute central duct, and these ducts unite, ultimately emptying their secretion into the central lumen of the albumen gland (Figs 1, 2). About four to six glandular cells surround the follicular lumen, each with a large and conspicuous nucleus, usually basal in position.

The central lumen of the albumen gland has a complete lining epithelium formed of cuboidal, ciliated cells, with a neutral staining reaction. The cells rest on a basement membrane and beneath this a fine connective tissue complex can frequently be detected.

Histochemical analysis showed that the contents of the granules secreted by the albumen gland stained with HE, demonstrating the basophilic and eosinophilic character of the contents. Sections stained with Masson’s trichrome revealed the presence of extensive collagen fibers in the albumen gland; with this stain, collagen appears greenish-blue and cell nuclei dark brown to black.

In sections treated with the PAS technique, the albumen gland stained a deep purple. The PAS-positive staining reaction of the albumen gland points towards the presence of neutral polysaccharides, while the moderately positive reaction to alcian blue at pH 1.0 suggests the presence of acid sulfated mucins (mucosubstances), and the purple staining of combined acid/neutral mucins with the Alcian blue-PAS technique at pH 1.0 is consistent with this. From the ensemble of these reactions it can be stated that the secretions of the albumen gland are rich in carbohydrates.

Borderline Personality Disorder

Due to the heterogeneous condition of BPD, the disorder most commonly co-occurs with mood disorders, with a prevalence of 70 to 90% (Zanarini et al., 1989). The weight of evidence suggests that there are core aetiological features that point to BPD being a distinct disorder (Zanarini et al., 2005). BPD, as with all Cluster B personality disorders, has many symptoms that make up Axis I conditions and are consistent with an ‘externalizing’ way of coping (Paris, 2003). By contrast, Cluster A personality disorders involve cognitive symptoms and Cluster C internalizing symptoms (Paris, 2003).

Elaborating on Borderline Personality Disorder Symptoms

People with BPD are very sensitive to the way others treat them and are known to exhibit a phenomenon, sometimes called splitting or black-and-white thinking, which involves shifting from idealizing others (intense love and affection) to devaluing them (feeling anger or hate) in response to perceived kindness or threats (Linehan, 2006). They feel emotions more easily, more intensely, more deeply and for longer than others, which makes them especially prone to dysphoria, or feelings of mental and emotional distress (Linehan, 2006). They have difficulty knowing their goals, as well as who they are and what they value and prefer, which causes them to feel ‘empty’ or lost. Zanarini et al. (2005) describe how people with borderline personality disorder are often aware of the intensity of their negative emotions but still have difficulty controlling their attention or regulating their responses. They suffer a cycle of increasing pain from the shame and guilt that follow the impulsive actions they took to relieve their emotional pain.

DSM-V Final Diagnosis

The signs present in Maggie's case indicate all the criteria in the diagnosis of Borderline Personality Disorder, which requires five of the following nine criteria to be met: frantic efforts to avoid real or imagined abandonment; unstable and intense interpersonal relationships, characterized by alternating between extremes of idealization and devaluation; identity disturbance: markedly and persistently unstable self-image or sense of self; impulsivity in at least two areas that are potentially self-damaging (e.g., promiscuity, communicating with strangers and possible predators online, and running away); recurrent suicidal or dangerous threats and attempts; affective instability due to marked reactivity of mood (e.g., episodic dysphoria, irritability, or anxiety usually lasting a few hours and only rarely more than a few days); chronic feelings of emptiness; inappropriate, intense anger or difficulty controlling anger; and transient, stress-related paranoid ideation or severe dissociative symptoms (APA, 2013). The disturbances in Maggie's behavior and affect 'cause clinically significant impairment' in social and academic functioning, including school expulsion, family turmoil, and lack of friends (APA, 2013).

Assessment instruments that measure borderline personality symptoms should be included in Maggie's case (Zanarini et al., 1989), such as the Diagnostic Interview for Borderline Patients (DIB-R), a semi-structured clinical interview that consists of 132 questions and observation using 329 summary statements. The test looks at the four areas of functioning associated with borderline personality disorder: affect, cognition, impulse action patterns, and interpersonal relationships (Zanarini et al., 1989). The Structured Clinical Interview (now SCID-II) yields decent indications of the disorder, as it uses the language of the DSM-V in 12 groups of questions corresponding to the 12 Axis II personality disorders (Zanarini et al., 2005).

Diagnosis

Axis I: 315.9 Unspecified neurodevelopmental disorder; V71.09

Axis II: 301.83 Borderline personality disorder; V69.9 Problem related to lifestyle;

R41.83 borderline intellectual functioning

Axis III No diagnosis

Axis IV: Problems with education, primary support group, and social environment.

Axis V: GAF = 51; some danger of hurting self or others

Etiology of Borderline Personality Disorder

There appear to be multiple pathways to affective instability and distinct risk factors associated specifically with the development of BPD over other personality and mood disorders (Zanarini & Frankenburg, 1997). Genetic influences, neurobiology, upbringing, culture, and so forth can be conceptualized as precipitating factors, as it is unknown whether they are causal or mediating influences. Zanarini and Frankenburg's (1997) 'tripartite model' of borderline personality disorder suggests that the disorder is made up of a complex combination of three factors: (1) traumatic exposure, (2) vulnerability owing to a 'hyperbolic' temperament, and (3) a triggering series of events that sets off borderline personality symptoms.

Social and Cultural Factors

Rapid social change, societal disruption, and normlessness, as manifested in a lack of social structure and useful roles, appear to be important risk factors in the development of BPD (Paris, 1997). In regard to etiology, there exists an important tension between viewing culture as having a direct influence on the development or exacerbation of BPD and viewing sociocultural factors as protective against the development of certain psychopathologies (Alarcon & Leetz, 1998). For instance, in Western cultures the value placed on self-accomplishment and independence could be conceptualized as significantly exacerbating the feelings of isolation, emptiness, and other factors related to identity disturbance seen in BPD (Alarcon & Leetz, 1998). Further, idealization and devaluation are easily fostered in cultures where authority figures are idealized without question (Alarcon & Leetz, 1998). Conversely, BPD is less common in traditional cultures where value is placed on compliance and cooperation. Paris (1997) argues that in developing societies, factors that place particular emphasis on community resources and extended family networks, with strong family ties, are protective against the development of a personality disorder.

Parenting and attachment

The earliest relationship experiences sculpt the personality, shaping how one feels about oneself and determining the extent to which one is able to forge trust in others. Researchers have found that early separations from parents (1 to 3 months) were more frequently observed in BPD patients (Zanarini et al., 1989), and BPD patients are more likely to display angry withdrawing patterns of attachment and compulsive care-seeking patterns. At its core, borderline personality exacerbates separation-individuation issues, signifying acute abandonment issues that began in the first years of life (Bateman & Fonagy, 2004). Inadequate bonding with the birth mother or a disapproving father tends to start this personality on a downward spiral; any painful deficits in nurturing care and attention throughout childhood perpetuate and reinforce this original disturbance (Gunderson, 2006). Additional research suggests that parental over-involvement is important to the disorder. Within this dynamic, attachment figures are often perceived to be unavailable, uncaring or overprotective (Gunderson, 2006). Individuals from families that cohere around a rigid denial of problems or exhibit a high degree of discord appear to be most vulnerable. The intensely confusing and paradoxical behavior patterns of the borderline patient are explained by this rationale as simply defenses that were adopted while growing up, to adapt to those kinds of experiences in the childhood home (Zanarini et al., 2005).

Sources of Vulnerability

Trauma

The impact of sustained trauma or abuse has been shown to have long-term effects on an individual's neurobiological make-up (Bateman & Fonagy, 2004). While it is problematic to assume a linear link between BPD and childhood exposure to trauma, an early history of trauma is reported markedly more often by individuals with BPD (Bateman & Fonagy, 2004). Zanarini and colleagues (2005) reported that the risk of developing BPD was 14 times higher in those reporting childhood sexual abuse than childhood physical abuse, which is prevalent in other personality disorders. Not all patients with BPD have histories of trauma, and only a small minority of people who experience trauma develop BPD.

Biological Factors

Genetic Influences

Although there are no clear biological markers for BPD, links between BPD and genetic predisposition have been confirmed (Torgersen et al., 2000). The best-fitting model for BPD had a heritability of 0.69, one of the highest estimates reported, indicating that genetic differences explain much of the variability in liability underlying BPD. Genetic studies on personality indicate that personality traits are roughly 50% heritable, leaving the remaining 50% to environmental factors (Zanarini et al., 1997).

Neurochemistry

A number of neuroimaging studies on borderline personality, such as the work of Herpertz, Dietrich, Wenning, Krings, Erberich, Willmes, and Sass (2001), have reported deficits in the serotonergic system and reductions in regions of the brain involved in the regulation of stress responses and emotion. Repeated studies show these abnormalities to be causally linked to the impulsive aggression and high level of sensation-seeking behavior seen in borderline personalities (Bateman & Fonagy, 2004).

The hippocampus and amygdala tend to be smaller and more active in people with BPD, as they are in people with post-traumatic stress disorder (PTSD) (Chapman & Gratz, 2007). Since the amygdala generates emotions, including negative ones, this unusually strong activity may explain the heightened sensitivity and unusually intense displays of emotion toward others (Herpertz et al., 2001). The prefrontal cortex is known to mediate executive functioning and regulate emotional arousal, and tends to be less active in people with BPD. The relative inactivity of the prefrontal cortex might explain the difficulties people with BPD experience in regulating their emotions and responses to stress (Chapman & Gratz, 2007). The hypothalamic-pituitary-adrenal (HPA) axis regulates the production of cortisol, which is released in response to stress. Cortisol production tends to be elevated in people with BPD, causing them to experience a greater biological stress response, which may explain their greater vulnerability to irritability (Chapman & Gratz, 2007). Increased cortisol production is also associated with an increased risk of suicidal behavior. From a neurological perspective, noradrenergic abnormalities are valuable in explaining why BPD patients tend to be highly sensitive and responsive to real and benign stimuli alike (Herpertz et al., 2001).

Colonisation and the Concept of the Other in the Institution of Marriage

PREFACE

A typical postmodernist mode of creating new texts and identities is the rewriting of earlier works of literature, also called intertextuality. Intertextuality is the mirroring and reflection of one text in another, and it is one of the central ideas of postmodern and contemporary literature. The author creates a new, original work of literature with the use of an existing text by which the author is influenced. The two texts stand in an interdependent relation to one another in order to produce meaning. In this case, I would like to examine the relationship of two novels: Charlotte Brontë's Jane Eyre (1847) and Dominican author Jean Rhys's postcolonial novel Wide Sargasso Sea (1966), the most prominent fictional extension of Jane Eyre.

Postcolonial cultural identity has a special importance in the Caribbean due to the region's unique history as a home to very different kinds of immigrants and their varying cultures from different parts of the world. Thus links can be drawn from the Caribbean literary tradition to several distinct literary traditions, as Wide Sargasso Sea naturally links to the English literary canon through its reference to Brontë's novel. In Jane Eyre, Charlotte Brontë creates a white, middle-class female identity represented in the character of Jane, and, in a possibly subconscious instance, a Creole identity in the form of Bertha Mason. Jean Rhys, on the other hand, rewrites Jane Eyre from the point of view of the Creole identity. In Brontë's representation, Bertha Mason, locked away, isolated and rejected, is a symbol of the colonial Other, as Imperial England feared and psychologically 'locked away' the other cultures it encountered during colonisation. Rhys rewrites Bertha Mason as Antoinette Cosway, who is portrayed in all her Caribbean social and racial complexity; thus Wide Sargasso Sea constitutes a rewriting of the Self and Other in terms of postcolonialism. For the healing of the colonised personality, the colonial Other becomes the postcolonial Self.

The purpose of my thesis is to analyse the colonial status of the female protagonists of Jane Eyre and Wide Sargasso Sea, and the phenomenon and effects of postcolonialism presented in the novels, since these notions can be recognised in both of them. The main focus of my analysis will be the female characters Jane Eyre and Bertha Mason from Jane Eyre, and Antoinette Cosway from Wide Sargasso Sea. Furthermore, the novels share the same male protagonist, Rochester, whom I will also discuss subsequently. The presence of colonialism can be found in his character, especially in his approach towards the female protagonists. To ensure accuracy, I have limited myself to taking British Victorian society and the postcolonial approach to the novels as the theoretical base of my analysis. This theoretical base is essential for examining and discussing the phenomena in both of the novels. In the part on British Victorian society, I will discuss contemporary women's roles and lifestyles in particular, and in further detail their manifestations in Jane Eyre, as it is especially important to analyse the role of the governess and how it is represented in Victorian governess novels generally. I would like to point out similarities by examining and comparing the status of the Victorian governess as seen in Jane's position in the novel. The observation of gender hierarchy and class oppression also allows the discussion of the novel from a postcolonial perspective, by comparing Jane and the other foreign women, especially Bertha, as coloniser and colonised Other. The contrast of coloniser and colonised can also be seen in the analogies between Rochester's masculine domination and colonial domination.

In further chapters, I will also examine the postcolonial phenomena of Wide Sargasso Sea, since this novel provides the base of postcolonialism in the form of Antoinette, who represents many aspects of colonisation. In particular, the concept of cultural identity is a complex issue and has been one of the central concerns of postcolonial literary criticism. Creoleness can have a major impact on a person's cultural identity, as one is contrasted with sometimes very different cultures in terms of one's own identity. Gender also plays an important role in the formation of one's cultural identity. Especially in the case of female identity, it takes the form of conforming or not conforming to the expectations that others in society have for an individual. At the end of the thesis, I will compare the status and attitudes of the characters in the main analysis to show the differences between coloniser and colonised.

CHAPTER I: Colonisation and the Concept of the Other in the Institution of Marriage from a Victorian and 20th-century Perspective

The Victorian era of Britain was characterised by rapid changes and developments in nearly every sphere of life. The main change, which affected and altered the country's mood, was the change in population growth and location brought about by colonisation. The expansion of Britain with the colonisation of the Caribbean islands and other subsequent British colonies had a great impact on the culture and identity of the British Empire. This reinforced the distinction between the multicultural British subjects and the racially, culturally and religiously homogeneous Britons who possessed the quality of Englishness. The demarcation within the empire also allowed the British to deny the darker parts of their national history.

Postcolonial theory is concerned with analysing and theorising the permanent impact of nineteenth-century European colonialism. Several conclusions can be drawn from its central features. One of these is that it examines the impact of the European conquest, colonisation and domination of non-European peoples and cultures. It concentrates on the domination of the coloniser and its use of power to control the colonised in occupied territories. The colonial discourse theorist Edward Said discusses postcolonial theory in his books Orientalism (1978) and Culture and Imperialism (1993). Said analysed the various cultures that were affected by nineteenth-century imperial expansion, and argued that the West produced the other cultures as an Other to a Western norm. For example, these other cultures were represented as not only different from British culture, but also as negatively different: other people were described as lazy, degenerate and uncivilised as opposed to the civilised, hard-working British.

Sander L. Gilman, in his book Difference and Pathology: Stereotypes of Sexuality, Race and Madness (1985), explains that racial stereotypes have often been connected to images of pathology. A group's history becomes unique and different from others on the basis of a mutually defined sense of identity which creates cohesion. The sense of difference between the self and the Other is built on the basis of 'xenophobia', which is inherent in all groups: nearly all groups are inclined to define themselves as 'good' and others as 'bad' (Gilman, 1985, p. 129). The Western definition of the 'racial' Other has been bound to the black, who stands as the antithesis of the white. Unrealistically, blacks are linked with qualities of Otherness, among them moral and physical sordidness, disorder and danger. Moreover, there is an association of the black with the 'myth of mental illness' in the West: the idea of racial difference is reflected in the defining group's label of the black Other as 'mad', on the presumption of an inherent tendency towards illness. European 'healthiness' has been fundamentally opposed to the colonial Other's tropical world, in which the 'ill' black becomes 'infectious' among white colonists (Gilman, 1985, pp. 129-130).

Said suggests that nations can be viewed as narratives in which they represent themselves. The superiority and desirability of Englishness found a narrative voice in the English novel as a central message. According to Said, English novels proved 'immensely important in the formation of imperial attitudes, references, and experiences' (1993, p. xii). Englishness quickly became the dominant narrative of the entire British Empire, since the English novel dominated the literary scene in both Britain and its colonies throughout the nineteenth century (Said, 1993, p. xiii). As he notes, 'never, in the novel, is that world beyond seen except as subordinate and dominated, the English presence viewed as regulative and normative' (Said, 1993, p. 75).

Postcolonial theory covers a very wide range of theoretical concerns and critical perspectives. To be more specific, colonialism can also be viewed from the standpoint of its gendered nature, which is examined by postcolonial feminist theory. It studies how women and men are positioned in a male-dominant society, and how both genders are presented in colonised territories and in Western locations. Stevi Jackson and Jackie Jones's Contemporary Feminist Theories (1998) is a collection written by feminists based in Britain that features Sara Mills's Post-colonial Feminist Theory, in which she notes how men used power in colonial territories. Men used their power to be with colonised women, while women were expected to be faithful to their husbands in England and abroad. The sexual fantasies of Victorian Englishmen were not in accordance with established Victorian morals in the way they acted on them. That is, British men took advantage of their power and their position to gratify themselves and had sexual relationships with natives of the colonised territories, being positioned high on the hierarchical ladder (Jackson & Jones, 1998, p. 100). John McLeod discusses power, marginalisation and the oppression of women in his book, Beginning Postcolonialism (2000). He explains that both feminism and postcolonialism 'share the mutual goal of challenging forms of oppression' (McLeod, 2000, p. 174). McLeod also includes a major key concept here, the term patriarchy. Patriarchy refers to 'those systems – political, material and imaginative – which invest power in men and marginalise women' (McLeod, 2000, p. 174). He explains that patriarchy refers to male power over women, and is connected to feminist thought about how women are made to feel oppressed and subordinated (McLeod, 2000, p. 173). Connected to patriarchy, another concept of postcolonial feminist theory is double colonisation.
It refers to the ways in which women have simultaneously experienced the oppression of colonialism and patriarchy. McLeod notes that 'Colonialism can add other kinds of patriarchal systems to an already unequal situation', which means that women are doubly oppressed at the same time by patriarchal ideology and imperialist ideology (2000, p. 177, original emphasis).

The postcolonial feminist perspective makes it possible to examine how character pairs such as Jane-Rochester and Bertha/Antoinette-Rochester from Jane Eyre and Wide Sargasso Sea are represented in relation to colonisation, and in this way positioned as woman and man in their relationship and marriage. However, it is essential first to examine the gender roles in British Victorian society, for the sake of a substantiated comparison of the novels and the era they are situated in. The examination will concentrate particularly on female roles, to compare the status of the protagonists of the two novels, which can then be contrasted with the image of the colonised Other and the coloniser that I will discuss in part three of this chapter.

I.1. Female Roles in British Victorian Society

The Victorian era of British history is commonly, but inadequately, associated with terms such as 'prudish', 'repressed' and 'old-fashioned'. The age saw a great expansion of wealth, power and culture, and is therefore considered a long period of peace and prosperity with refined sensibilities. The Victorian period is dated between 1837 and 1901, with wide-ranging, fundamental social changes. At that time, the population of England represented various classes, occupations, and ways of life.

Until the Victorian era, the vast majority of women devoted themselves to housework and running the household, but from this time they were also part of the 'external' workforce. Dr Lynn Abrams, in Ideals of Womanhood in Victorian Britain (2001), points out the changes in the private and public spheres: 'New kinds of work and new kinds of urban living prompted a change in the ways in which appropriate male and female roles were perceived' (Abrams, 2001, p. 1). She also describes the separate spheres, comprising the private sphere of home and hearth for women, and for men the public sphere of business, politics and sociability. The notion of the separate spheres 'came to influence the choices and experiences of all women, at home, at work' (Abrams, 2001, p. 1). Abrams also notes that 'the ideology that assigned the private sphere to the woman and the public sphere of business, commerce and politics to the man had been widely dispersed' (2001, p. 1). In further detail, Dr Nancy Reagin describes in her essay, Women as 'the Sex' During the Victorian Era (n.d.), how Susan Kent observes the separate spheres' framework in Sex and Suffrage in Britain 1860-1914 (1990): 'Men possessed the capacity for reason, action, aggression, independence, and self-interest [thus belonging to the public sphere]. Women inhabited a separate, private sphere, one suitable for the so called inherent qualities of femininity: emotion, passivity, submission, dependence, and selflessness, all derived, it was claimed insistently, from women's sexual and reproductive organization (Kent 30)' (n.d.). Nineteenth-century society came to regard women as 'the Sex' because 'women were so exclusively identified by their sexual functions. (32)' (Reagin, n.d.). In parallel, women's fashion became more sexualised, designed to emphasise the waist and buttocks and thrust out the breasts by means of the corset and crinoline.
‘The female body was dressed to emphasise a woman’s separation from the world of work’ (Abrams, 2001, p. 3). The ideal woman of the time was not weak or passive but rather busy with her moral duty towards the family and society. She was an able figure who gained strength from her moral superiority.

Marriage was considered the supreme means of livelihood for women in the nineteenth century. It was simply a necessity for survival; thus women normally did not have the option not to marry. No matter what they desired, they relied on their husbands economically. Women had to control their sexuality, and as potential wives they were expected to be innocent and virginal. Men, on the other hand, were not expected to be chaste and pure, as the potential husband had the freedom to engage in premarital and extramarital sexual relationships. Reagin analyses women's sexuality in the second paragraph of her essay, in which she notes that 'Such a biased idea was one of many double standards in Victorian society, which demanded unquestionable compliance from women and none from men, since the women were thought to be controlled by their sexuality and were thus in need of regulation' (n.d.). After marriage, a woman was under the complete supervision of her husband by law. In another essay, Women's Status in Mid 19th-Century England (n.d.), Helena Wojtczak describes how women 'had to obey men, because in most cases men held all the resources and women had no independent means of subsistence' (Wojtczak, n.d.). In the case of an unhappy relationship, a woman could not divorce her husband, and even if she ran away, the police would capture and return her to the husband, who could imprison her as a last resort. This was all sanctioned by law, custom and history, and also approved by society in general. Wojtczak focuses on moral inequality by stating that 'Mere adultery was not grounds for a woman to divorce a man; however, it was sufficient grounds for a man to divorce his wife' (n.d., original emphasis).

Motherhood was considered 'a substitute for women's productive role' (Abrams, 2001, p. 6). It was no longer merely a reproductive function but was also symbolised as a domestic ideal, embracing the mother and her children, which meant emotional fulfilment for women. Motherhood was becoming a social responsibility, and it could not be combined with paid work. Working-class mothers were considered neglectful and irresponsible, even as they strove to earn a steady daily income while combining the demands of childcare with putting a meal on the table. Abrams explains that 'Motherhood was expected of a married woman and the childless single woman was a figure to be pitied. She was often encouraged to find work caring for children - as a governess or a nursery maid - presumably to compensate her for her loss' (2001, p. 6).

I.2. The Governess

The phenomenon of the governess was created by the middle and upper classes for educating girls at home. In 1851, the number of governesses in Britain was calculated at 21,000. The governess's role was an employment category that required women of higher birth, mind and manners to fit the position; yet governesses were considered inferior in worldly wealth, which brought humiliation and psychological cruelty they had to endure. Philip V. Allingham describes the status, position, life and role of the governess, taking Ronald Pearsall's Night's Black Angels: The Forms and Faces of Victorian Cruelty (1975) as the basis of his essay, The Figure of the Governess, Based on Ronald Pearsall's Night's Black Angels (2000). Allingham states that 'Because the supply of governesses was far greater than the demand, many of the more desperate girls would do the job for nothing, just to get a roof over their heads' (2000). He continues: 'The duties of a governess, especially one employed by a family of the commercial middle class (which often delighted in degrading someone of superior "breeding"), were dreary and disenchanting. As a special treat, the governess might be allowed to enter the parlour, but she would take her meals in the schoolroom' (Allingham, 2000). It can be said that the governess was at the same level as children and servants, if not lower. For men she was a tabooed woman, and tradesmen, themselves oppressed by their superiors, took revenge on governesses, who were also treated with spite by the upper classes.

Governesses would be tormented by pupils refusing to do their lessons, playing up, and, in worse cases, throwing the governess's tools and belongings into the fire. Larger children might even assault their governess and, in the worst case, might try to harass her sexually. However, it has to be noted that not all employers were tyrants, and cruelty was often a matter of self-protection on the part of the governess. There were parents who were wise enough to 'treat their governesses with kindness, for the employers' cruelty could rebound on their own children' (Allingham, 2000). Altogether, governesses were not treated as badly as the lower classes, because 'the practitioners of the profession were never treated with the gross brutality which factory girls, apprentices, and mine-workers habitually endured' (Allingham, 2000).

Loneliness and neurosis were the result of social isolation. The proportion of governesses among the inmates of lunatic asylums was quite high, and it was therefore thought necessary to examine the relationship between the governess and the pupil. There were moves to make the governess's lot more pleasant and less burdensome. The ambition of many a governess was to start a school of her own, but failures generally outnumbered successes.

I.2.1. The Victorian Governess Novel

The governess was a common figure of the period, and novels featuring governesses can be connected to a general anxiety about middle-class female employment and, in particular, governess work. A debate about governesses began in the 1840s, focusing on their situation, social position, terms of employment and salaries. The novels played a substantial part in this debate, giving focus to the state of these women.

As for the characteristics of the governess novel, its important feature is that it portrays progress towards maturity or improvement on the part of the governess heroine, who is often an orphan and stands alone in life. Cecilia Wadsö Lecaros discusses the novel's structure and plot in her essay, The Victorian Governess Novel: Characteristics of the Genre (2005), based on part of the introductory chapter of her book, The Victorian Governess Novel (2001). She describes the heroine as 'not necessarily a faultless or particularly splendid character, but a protagonist on whom the narrative is centred and with whom the reader's sympathy lies' (Wadsö Lecaros, 2005). Furthermore, she notes that 'The heroine generally encounters a number of painful situations that are connected with her position as a governess. Usually she faces trouble in relation to her employers or her pupils, and servants and visitors often make her miserable' (Wadsö Lecaros, 2005). Concerning the movement of the heroine, in some novels she stays in one particular place for the major part of the novel, while in others she goes through different situations. Wadsö Lecaros explains that 'A convincing development in character could be achieved by moving the heroine from one situation to another' (2005). The governess's position is also determined by the other female members of the household, and she is at the centre of attention since she is a middle-class wage-earning woman. This dependent, wage-earning position resembles that of a domestic servant, but similarities can also be noted with the mistress of the house because of the governess's middle-class background. However, because of her accommodation and salary, she is still considered a servant by her employers rather than an equal. Female rivalry is usually present as an important theme.

The theme of reversed fortune is dominant in these novels. Although a majority of the novels depict the shocking conditions governesses endured, there are exceptions, such as pleasant employers, and most novels contain a maternal character or a future husband who helps the governess through her difficulties.

I.3. The Relationship of the Other and the English in Jane Eyre and Wide Sargasso Sea

In Jane Eyre, the conflict between European and West Indian consciousness appears on the surface level as a conflict between conventional attitudes and emotional excess. In Wide Sargasso Sea, this conflict is worked out through the same fatal relationship but from various points of view. In contrast to Jane Eyre, the conflict of the two cultures becomes the crucial subject of the narrative and of its psychological, social, historical and geographical aspects. An important device of characterisation in the novel, the so-called 'projective method' of landscape description, contributes to the escalation of the conflict.

The wintry landscapes that form the setting of Jane Eyre are contrasted with the typical Romantic topography of the summery climate of the West Indies. Much of Rochester’s own nature is revealed through his response to the surrounding environment: his sobriety reflects his fear of passion and his dependence on the security of the civilised world, while Antoinette’s love of the Caribbean landscape corresponds with her passionate emotions. The changing attitudes to particular places reflect the development of the mutual relationship of Rochester and Antoinette. Antoinette’s statements represent confidence and identification: ‘This is my place and everything is on our side’; ‘This is my place and this is where I belong and this is where I wish to stay’ (WSS, p. 67, 99), but on Rochester’s side the attitude shifts to estrangement: ‘I feel very much a stranger here,’ (…) ‘I feel that this place is my enemy and on your side’, to which Antoinette responds: ‘You are quite mistaken (…). It is not for you and not for me. It has nothing to do with either of us. That is why you are afraid of it, because it is something else’ (p. 117). Rochester’s feeling of uneasiness originates from the experience of a particular environment as something else, which results in the inability to accept the other: the other landscape with the other culture and the other individual.

As Edward Said’s remark, cited at the beginning of the chapter, indicated, there was a process of constructing English cultural identity in which the natives and the colonised were seen as Other and the English as superior. This manifests itself in Rochester’s definition of Antoinette as a colonial Other, one who is uncultured, narrow-minded and uneducated. During the couple’s discussion of England in one scene, Mr. Rochester notices Antoinette’s inability to describe or give any true facts about his beloved homeland, and he reflects on her lack of knowledge: ‘She was undecided, uncertain about facts – any fact. (…) hardly able to believe she was the pale silent creature I had married’ (p. 80). He becomes irritated and angered by his wife’s ignorance of England when Antoinette speaks of it as a land of gloom and coldness. Rochester concludes that Antoinette is uncultured and unrefined because of her lack of knowledge of her husband’s powerful and civilised country. As Said stated, the Other was judged in opposition to the powerful, cultured and morally righteous English people. Antoinette is considered a mere Creole in Rochester’s colonialist eyes. The wide gap between them also comes from the fact that Antoinette has a limited knowledge of the world: she does not know much about her own homeland either, as when she is asked about the island’s snakes. Rochester, thus, categorises her as a colonial subject, as Other.

Antoinette’s rage can also be seen as her own way of rejecting Rochester’s dominance and the many years of being colonised. She refuses to be prevailed over, raging and screaming at her husband. In Colonialism/Postcolonialism (2005), Ania Loomba explains this attitude: ‘within the frameworks of psychoanalytic discourse, anti-colonial resistance is coded as madness’ (p. 119). Antoinette realises that Rochester is trying to colonise her by overpowering and dominating her. As a coloniser, Rochester also wants to change her, to make her act and behave as a Victorian lady who stays at home and obeys only him. Loomba points out that changing the mentality of the colonised has been one of the aims of colonisation; it involved the alteration of their minds and led to madness, since it ‘dislocated and distorted the psyche of the oppressed’ (2005, p. 123).

Although Rochester explains that ‘disgust was rising in me like sickness’ because of Antoinette’s free sexuality and promiscuity, he gives himself the right to attend to his sexual needs by committing adultery as a man in a patriarchal institution (WSS, p. 114). As Sara Mills remarked, colonisers displayed the power and authority of the patriarchy by having sexual encounters with natives (Jackson & Jones, 1998, p. 100). Rochester exercises this colonial power by satisfying his sexual needs with the native Amélie, knowing well that his act can hurt Antoinette. He shows no regret or remorse for his action: ‘I had not one moment of remorse. Nor was I anxious to know what was happening behind the thin partition which divided us from my wife’s bedroom’ (WSS, p. 127).

As mentioned before, Rochester can be recognised as a double coloniser from the postcolonial feminist perspective because he tries to control and oppress Antoinette: his power comes from both patriarchal and colonial ideology. He has control over his wife’s wealth and changes her identity by oppressing her and labelling her mad. Rochester sees Antoinette as uncultured and uncivilised, one whose different values and behaviour are against his morals and principles, when he perceives her as Creole, not English (WSS, p. 61). Thus it can be concluded that Antoinette, as McLeod implies, lives under the negative effects of both patriarchy and colonialism (2000, p. 175).

McLeod notes that ‘Names are often central to our sense of identity’ (2000, p. 167). As Antoinette remarks, ‘Names matter’, and throughout the novel we can see her becoming more and more confused about her identity (WSS, p. 162). Mr. Rochester and Antoinette fail to understand each other’s culture and behaviour, and this lack of understanding leads the couple into a loveless marriage in which they hurt each other by attacking one another verbally and at times physically. Mr. Rochester calls Antoinette by other names even though she makes it clear that she wants to be called by her real name, ‘not Bertha’ (p. 123): ‘Bertha is not my name. You are trying to make me into someone else, calling me by another name’ (p. 133). By not calling her by her real name and not showing love and affection in words, Rochester confuses Antoinette’s identity, and in the end her mental state: ‘What am I doing in this place and who am I?’ (p. 162).

In Jane Eyre, there are hints of ambiguity about Bertha’s race even in Rochester’s account of the time before their marriage. After Rochester describes Bertha as ‘tall, dark, and majestic,’ he immediately continues: ‘Her family wished to secure me, because I was of a good race’ (p. 305). In this context the phrase suggests that Bertha may not be of as good a race as he. Rochester’s phrase acquires significance in the historical context of a colony where blacks outnumbered whites by twelve to one, and where white planters practised an accepted routine of forcing female slaves to become their concubines. Richard Mason oddly and apparently unnecessarily declares, in his official attestation to Bertha’s marriage with Rochester, that she is the daughter ‘of Jonas Mason, merchant, and of Antoinetta Mason, his wife, a Creole’ (p. 318).

Rochester’s comparison of the two women distinguishes the bestial Creole from the human Englishwoman. Once at Thornfield, Jane encounters Mr. Rochester, who later serves as the most significant indicator of difference between Jane and Bertha/Antoinette. While the colonised Bertha/Antoinette can be successfully imprisoned by Rochester in a room on the third floor of Thornfield, Rochester cannot force Jane to stay once she discovers his disastrous marriage. In the final chapter of Wide Sargasso Sea, Antoinette recalls offering all she has to Rochester in exchange for her freedom and being denied (WSS, p. 115). However, Jane never needs to ask Rochester to release her, and instead is described as a ‘resolute, wild, free thing’ who leaves without his knowledge (JE, p. 357). In both texts patterns of English freedom arise as Rochester and Jane are both able to break away from unhappy situations in their lives. Rochester is able to escape his miserable marriage to Antoinette by locking her in a ‘cardboard world’ (WSS, p. 115) where he can ‘wait – for the day when she is only a memory to be avoided, locked away, and like all memories a legend. Or a lie’ (WSS, p. 113). He is able to live the life of a bachelor, roaming Europe and taking mistresses, while disowning both his wife and their marriage. Jane is similarly able to escape from her awkward and unhappy situation at Thornfield after her illegitimate marriage to Rochester is halted, by sneaking away in the early morning unbeknownst to the house (JE, p. 360).

In the final chapter of Jane Eyre, Jane’s narrative voice resounds with her newfound marital serenity. She and Rochester are finally able to be together as Jane’s social rank and intellect are found to be congruent with Rochester’s. She has finally achieved independent wealth to complement her preexisting marks of Englishness: birth, education, modesty, and intellect. Jane exemplifies the British restraint Rochester possesses in Wide Sargasso Sea, and repeatedly asserts their intellectual compatibility, saying, ‘I have something in my brain and heart, in my blood and nerves, that assimilates me mentally to him’ (JE, p. 199). While Rochester describes Bertha as having a ‘nature wholly alien to mine’ (JE, p. 434), he claims Jane as his appropriate bride ‘because my equal is here, and my likeness’ (JE, p. 285). It is their shared Englishness that paves the way for a ‘happily-ever-after’ conclusion premised on notions of a natural law of cultural and spiritual compatibility and ‘congruous union’. However, this ending can come about only after the removal of the colonial contagion (Bertha) from Rochester and Jane’s relationship. Although Rochester’s blindness and amputated hand can be seen as the scars he must bear as a consequence of his involvement in the colonial project, Bertha’s death also signals the beginning of Rochester’s repentance and absolution from colonial sin. Not only does he pledge ‘to lead a purer life than I have done hitherto’ (JE, p. 497), but he atones for the threat his Creole marriage posed to the hegemony of the Empire by having a purely English son with Jane, thereby defending and perpetuating the imperial patriarchal order.

It is this emphasis on English domesticity that gives the myth of Englishness its power: claiming it as something solely for the nation-state – never imperial – and unattainable for the colonial Other. Both Jane and Rochester are rewarded for their homogeneous union and for policing the borders of Englishness with the return of Rochester’s eyesight, so that he may bear witness to the continuation of the English patrilineal order which his former marriage nearly jeopardized (JE, p. 501). Through this symbolic act of healing, Brontë conveys the power of Englishness to erase traces of colonial contamination.

CHAPTER II: The Image of the Other Woman in Jane Eyre

Charlotte Brontë’s novel Jane Eyre narrates the story of a character’s internal development as she undergoes a succession of encounters with the external world. It emphasizes love and passion, represents the notion of lovers destined for each other, and uses mysterious, supernatural, horrific and romantic elements. It also contains elements of social criticism, with a strong sense of morality at its core. Furthermore, the novel explores classism, sexuality, religion and proto-feminism.

Jane Eyre is set in 19th-century England, and it tells the story of Jane Eyre, an orphan. The whole novel is narrated by the protagonist, Jane, who grows up in a hostile family and then, rejected by her relatives, spends the rest of her childhood at Lowood School. She goes on to recount her adulthood: how she became a governess at Thornfield Hall and developed a close relationship with her employer, Edward Rochester. Soon Jane faces more complications and obstacles than she had anticipated. One of these is Bertha Mason, a Creole woman locked up in a mysterious room of Thornfield Hall. It turns out that she is the wife of Mr. Rochester, also known as the ‘madwoman in the attic’.

Colonialism is present in the figurative use of race in Jane Eyre. The figure is enacted on the level of character, in this novel representing a Jamaican black woman. A new reading of the novel emerges by exploring Brontë’s authorial choices, such as the decision to have a Creole madwoman as Jane’s foil. Since the reinforcement of English superiority was considered normal during the nineteenth century, a reading through the lens of Englishness could suggest that Brontë’s aim was to highlight Jane’s Englishness through the choice of a Creole woman. In this way, Brontë confronts the non-figurative reality of British race relations. In her critical essay, Colonialism and the Figurative Strategy of ‘Jane Eyre’ (1990), Susan L. Meyer notes that the ‘figurative use of blackness in part arises from the history of British colonialism: the function of racial ‘otherness’ in the novel is to signify a generalized oppression’ (p. 250). She continues: ‘Brontë makes class and gender oppression the overt significance of racial ‘otherness,’ displacing the historical reasons why colonized races would suggest oppression, at some level of consciousness, to nineteenth-century British readers’ (Meyer, 1990, p. 250). The metaphor of slavery can be considered as identification with the oppressed, an implicit critique of British domination. Also, the historical alliance between the ideology of male domination and the ideology of colonial domination resulted in a very different relation between imperialism and the developing resistance of nineteenth-century British women to the gender hierarchy.

The portrayal of the colonial Other involves the silencing of the voice and speech of the oppressed. Brontë’s link to a culture of silence explains her decision to keep Bertha silent aside from her lunatic laughter and bestial manifestations. Spivak touches upon this culture of silence when she discusses the ‘subaltern’ as a position without identity and without the capacity for action. She states that the subaltern cannot represent itself through a narrative voice but is always represented by others and pushed into the dominant pre-existing metanarrative – in this case, British imperialism. Thus, Jane and Rochester are both given a voice as they represent the metanarrative of British imperial history, while Bertha is condemned to subalternity and silence. Eliana Ionoaia explains this notion in her essay, The Creolization of the Self – From Jane Eyre to Wide Sargasso Sea (2008), as follows:

In Jane Eyre, a silence is created where Bertha’s voice should be heard; however, the Creole woman’s voice is missing from the text and the space is filled by the voices of Edward Rochester and Jane Eyre. The muted voice of the Creole woman, a victim of twofold domination – on the one hand, from the colonial social structure and, on the other hand, from the Creole male – identified as a lunatic in Jane Eyre (…) (p. 137).

Colonialist and anti-colonialist messages can be recognised and considered as a theoretical approach to the novel. Common colonialist ideas suggest that the colonised are inferior, immoral, savage and uncivilised. The postcolonial approach to Jane Eyre should begin by considering some of the following questions suggested by Karin Jacobsen and Mary Ellen Snodgrass in their critical essay, A Postcolonial Approach to the Novel (n.d.): ‘What does the novel reveal about the way cultural difference was represented in Victorian culture? What idea does the text create of ‘proper’ British behavior?’ (n.d.). Answers to these questions can be discovered by examining the foreign women, especially Bertha Mason, and the ‘colonialist’ Jane.

Creating a prototype of the proper English woman is one of the colonialist goals of the novel. The ideal is created by contrasting Jane with the other, foreign women in the text. For example, the French Céline Varens and her daughter, Adèle, are constantly criticised throughout the novel for their materialist and superficial nature. These traits are revealed in Rochester’s expressions, as when he says that ‘she charmed my English gold out of my British breeches’ pocket’ (JE, p. 139). Jane’s final comments about Adèle also suggest that ‘only through a good English lifestyle has Adèle avoided her mother’s tragic flaws’ (Jacobsen & Snodgrass, n.d.): ‘a sound English education corrected in a great measure her French defects’ (JE, p. 450).

Bertha Mason represents British fears in the form of an insane Creole woman, whom her husband, Edward Fairfax Rochester, keeps locked up on the third floor of his mansion. The imprisonment in the attic refers to colonialist tyranny, with Bertha functioning as the victim of colonialism. Rochester associates two of the most common nineteenth-century black stereotypes with Bertha: madness and drunkenness. Bertha’s inside is just like her outside; Brontë reduces her to a ‘foul German spectre – the Vampyre’ (JE, p. 284). Her vampiric appearance suggests that she is sucking the blood and vitality away from Rochester. Jacobsen and Snodgrass explain that ‘Their arguments suggest Rochester isn’t as innocent as he claims; as a colonialist, he was in the West Indies to make money and to overpower colonized men and women’ (n.d.). In addition, Joan Z. Anderson states in her essay, Angry Angels: Repression, Containment, and Deviance in Charlotte Brontë’s Jane Eyre (2004), that ‘Brontë utilizes the metaphor of race to signify the oppressor throughout the novel, paradoxically, she also uses racial otherness to characterize and essentially vilify Bertha Mason – a woman no longer in any position to oppress anyone’ (2004).

In the following passage, the basic concept of racial Otherness can be recognised as Jane describes the features of Bertha to Rochester:

‘It was a discoloured face – it was a savage face. I wish I could forget the roll of the red eyes and the fearful blackened inflation of the lineaments!’

‘Ghosts are usually pale, Jane.’

‘This, sir, was purple: the lips were swelled and dark; the brow furrowed; the black eye-brows widely raised over the bloodshot eyes’ (JE, p. 283-284).

This passage clearly shows that Bertha is stereotypically marked as non-white through the emphasis on her colouring. The attributes ‘discoloured’, ‘purple’ and ‘blackened’, and the references to rolling eyes and to ‘swelled’, ‘dark’ lips, all strongly suggest that she is certainly not ‘pale’. The discoloration of Bertha’s ‘blackened’ and inflated lineaments implies colonial sickness and contamination. Moreover, Jane’s use of the word ‘savage’ underlines Bertha’s features, and the redness Jane sees in Bertha’s rolling eyes suggests the drunkenness which, following the common racist convention, Brontë has associated with blacks.

Bertha in her madness is compared to an animal captured and locked up in its attic prison. She is monstrous, bestial and uncivilised, capable only of a ‘snarling, canine noise’ (p. 210). Bertha possesses both canine and feline tendencies as a ‘tigress’ and a ‘clothed hyena’ (p. 212, 293) who remains untamed even in isolation, as becomes clear when she attacks her brother, Richard Mason, and sucks the blood out of him. She refuses to be controlled: she displays stature and masculine force almost equal to her husband’s, and she even fights with Rochester. In the middle of her essay, Anderson characterises Bertha’s embodiment of unwomanliness and masculinity as ‘the unfeminine aspect of both anger and madness which threatens masculine control of Victorian society’ (2004). Bertha is not submissive, thus she must be contained.

Jacobsen and Snodgrass explain that ‘Jane’s position is more conflicted than Rochester’s’ because ‘as a woman she is also a member of a colonized group, but as a specifically British woman, she is a colonizer’ (n.d.). Jane emphasises the colonised status of women when she claims that Rochester’s smile was such as a sultan would ‘bestow on a slave his gold and gems had enriched’ (JE, p. 269). Brontë marks Jane as morally and racially superior to Bertha, although Jane feels sympathy towards her. Jane has learned to govern herself and ‘follows the dictates of her society which tells her to ‘flee temptation’ (340). Much of what places Jane above Bertha hinges on the vociferous repulsion that female sexuality elicits in Victorian society’ (Anderson, 2004). Anderson also explains that ‘Bertha’s construction as ‘black’ associates her with imperialist beliefs of lasciviousness’ and with ‘xenophobic anxieties . . . aroused by Bertha’s foreign heritage’ which connect to beliefs of ‘racially inflected sexuality’ (2004). Jane represents the civilised female, which is confirmed by her whiteness.

II.1. Gender Hierarchy and Class Oppression

Brontë’s narrative represents some of the suffocating restrictions imposed upon nineteenth-century women, and none of her female characters achieves true independence or freedom. In Jane Eyre, Brontë responds to a seemingly inevitable analogy in nineteenth-century British texts asserting the need for white male control: the analogy, as Susan L. Meyer describes it, between the degradation of both white and black women, and the shared oppression of the hierarchies of social class and gender. Meyer states that ‘Brontë uses the analogy in Jane Eyre (…) to signify not shared inferiority but shared oppression. This figurative strategy induces some sympathy with blacks as those who are also oppressed, but does not preclude racism’ (Meyer, 1990, p. 251).

Meyer continues: ‘Brontë does not use slavery as an analogy for the lot of the working class but for that of the lower-middle class, for those who are forced into ‘governessing slavery’ as Rochester puts it (p. 270)’ (1990, p. 257). Jane experiences the dehumanizing regard of her class superiors at Thornfield Hall only when Rochester arrives with his ruling-class company; only at this point does her governessing become like slavery. Before this, Jane was treated as a social equal by those around her. When she first arrives, Mrs. Fairfax helps Jane remove her bonnet and shawl, and her pupil, Adèle, is too young and also of too dubious an origin to treat her governess with superiority. Meyer explains that Brontë constructs a utopian atmosphere among the three of them that is not dominated by class hierarchy. After Mrs. Fairfax marks Jane as her companion, she clearly excludes the working class from this classless utopian world: ‘… but then you see they [Leah and John] are only servants, and one can’t converse with them on terms of equality: one must keep them at due distance, for fear of losing one’s authority’ (JE, p. 97).

Parallels can be drawn between slavery and Jane’s social position as one of the disempowered lower-middle class. These analogies are drawn by Jane not in response to the work she has to perform as a governess but in response to the humiliating attitudes of her class superiors. From childhood Jane feels like a slave, a feeling expressed in her rebellious behaviour in her tenth year, just like Bertha, who after ten years in her third-floor room ‘br[eaks] out, now in fire and now in blood’ (p. 210). Postcolonial critics argue that Jane can only achieve self-identity through the sacrifice of Bertha, the foreign woman. As an adult, Jane again feels like a degraded slave when she realises the economic inequality between her and Rochester, who overstocks her with valuable gifts after the engagement. When Rochester tells her she will cover her face with a veil at the wedding, she claims that she will feel like ‘an ape in a harlequin’s jacket’ (p. 259). Given the racist nineteenth-century association of blacks with apes, it may be assumed that Jane refers to Bertha’s black face under the veil. Meyer notes that ‘Brontë uses the emotional force of the ideas of slavery and of explosive race relations following emancipation in the colonies to represent the tensions of the gender hierarchy in England’ (1990, p. 259).

In the novel, analogies can be seen between dark-skinned peoples – the black slaves that are associated with oppression – and those oppressed by the hierarchies of social class and gender in Britain. The historical presence of colonialism and slavery can be associated with the narrative function of the dark-featured Bertha; however, the association between blacks and apes, as mentioned before, indicates that these analogies are not free from racism. The use of the slave as a metaphor would naturally imply the oppression of the non-white races subjected to the British Empire, although the opposite context also appears in the novel, as can be observed in the descriptions of Blanche Ingram.

Class oppression is especially represented by the character of Blanche Ingram with her ‘dark and imperious eye’ (JE, p. 185). Interestingly, she shares many features with Bertha Mason. Blanche’s darkness is emphasised by her ‘olive complexion, dark and clear’, her ‘hair; raven-black’ (p. 159), and, as Jane notes, she is ‘dark as a Spaniard’ (p. 173). The odd phrase ‘dark and imperious’ signifies a connection to the supposedly inferior dark races, but in Blanche’s description the word ‘imperious’ points to her ruling-class sense of superiority, which evokes the contact between the British and their dark-skinned imperial subjects. Meyer explains that ‘In that contact, it was not the dark people who were ‘imperious’, that is, in the position of haughty imperial power, but the British themselves’ (1990, p. 260). Examining further the qualities of darkness and imperiousness in Blanche, she notes:

‘imperialism brings out both these undesirable qualities in Europeans – that the British have been sullied, ‘darkened’, and made ‘imperious’ or oppressive by contact with the racial ‘other’, and that such contact makes them arrogant oppressors both abroad, and, like Blanche, at home in England’ (Meyer, 1990, p. 260).

Despite the ‘spotless white’ dress she wears, her mother’s calling her ‘my lily-flower’ (JE, p. 173, 178), and the meaning of her name – white – Blanche does not embody ideal white European femininity, but rather ‘the contagious darkness and oppressiveness of British colonialism’ (Meyer, 1990, p. 260).

Blanche, like Bertha, also represents a type of fallen woman in some respects. This can be recognised from her social status and position, which demands that she ‘prostitute’ herself for material gain, much like Céline Varens. However, she has a respectable place, she exists within patriarchal restrictions, and her social position requires a profitable marriage. Anderson describes certain shared characteristics between Blanche Ingram and Bertha Mason: both ‘function as marital commodities in the upper-class marriage market, and as such both hold social positions that require them to marry’ (Anderson, 2004). She continues, ‘Although Blanche is not physically imprisoned by four walls, she possesses few more options than does Bertha’ (Anderson, 2004). The failed engagement with Rochester, however, conceivably comes from the fact that Rochester associates Blanche Ingram with his detested wife.

II.2. Analogies between Male Domination and Colonial Domination

Jane Eyre carries the ideological context of the historical alliance between the ideology of male domination and the ideology of colonial domination. The episode in which Bertha sets fire to Thornfield Hall signifies the slave uprisings in the British West Indies, where ‘slaves used fires both to destroy property and to signal to each other that an uprising was taking place’ (Meyer, 1990, p. 254). ‘When Bertha escapes from her ten years’ imprisonment to attempt periodically to stab and burn her oppressors’, Meyer notes, ‘she is symbolically enacting precisely the sort of revolt feared by the British colonists in Jamaica’ (1990, p. 254). The destruction of life and property is portrayed in Rochester’s narrative of the fiery and bloody weather conditions he experienced in Jamaica. The description of the third floor is bound up with the metaphor of the Rochesters, who represent the English ruling class, while Bertha’s room signifies the history of slavery and the crimes committed by a violent race. Bertha Mason represents the colonial revolutionary who cleans away social oppression with fire.

As previously mentioned, the image of slavery is closely involved in colonialism. This particular image can be seen in Rochester’s narration to Jane: ‘Hiring a mistress is the next worse thing to buying a slave: both are often by nature, and always by position, inferior; and to live familiarly with inferiors is degrading’ (JE, p. 311). In this part, Rochester tells how he acquired a West Indian fortune by marrying a Jamaican wife, and subsequently lived in Jamaica for four years. Rochester knows what he is talking about when he discusses what it is like to buy and live with slaves. As a wealthy white man living in Jamaica before emancipation with his fortune the product of slave labor, he would undoubtedly have had slaves to wait upon him. When he compares his relationships with women to keeping slaves, he also draws a parallel to his own history as a slave master.

Rochester asserts the potentially oppressive power of his position, which his begrimed past emphasises. The imagery of slavery is also used by Brontë to represent Jane’s lesser power in her relationship with Rochester. She associates Rochester’s masculine power with that of an Eastern, rather than a British, slave master dominating over Jane. At one point the novel uses strong and shocking imagery of slavery, describing the position of women enslaved in Eastern harems and sultans who reward their favourite slaves with jewels. Rochester compares himself to ‘the Grand Turk,’ declaring that he prefers his ‘one little English girl’ to the Turk’s ‘whole seraglio’ (p. 269), to which Jane responds with spirit:

‘I’ll not stand you an inch in the stead of a seraglio. … If you have a fancy for anything in that line, away with you, sir, to the bazaars of Stanboul without delay; and lay out in extensive slave-purchases some of that spare cash you seem so at a loss to spend satisfactorily here.’ (JE, p. 269).

The novel compares Rochester’s dominant position to that of a sultan rather than a white-skinned British slave master, as he has been sullied by contact with the foreign Other that the British feared. What is marked in this case is the oppressive aspect of the Other, the non-British and non-white, rather than the history of British colonial oppression.

The domination of the coloniser and the oppression of the colonised can also be recognised between Edward Fairfax Rochester and Richard Mason. In Imperialism, Reform, and the Making of Englishness in Jane Eyre (2008), Sue Thomas explains that this dynamic can be recognised in the character differences between the English gentleman Rochester and the plantocracy-class Creole Richard. She notes that ‘Rochester is positioned as manly, active, and adult in relation to the feminized and passive Richard’ (Thomas, 2008, p. 38). Rochester describes Richard as once having a ‘dog-like attachment’ (JE, p. 305) towards him, which suggests that Rochester was Richard’s master. Jane sees it the same way when she claims that ‘the contrast could not be much greater between a sleek gander and a fierce falcon: between a meek sheep and the rough-coated keen-eyed dog, its guardian’ (p. 190). Rochester shows bullying masculine force and despises Richard’s effeminate masculinity. Thomas explains that ‘Jane is sexually attracted to the imperial masculinity that Rochester embodies for her, yet repelled by his despotic tendencies, which Brontë figures as the contaminating effect of Bertha’ (2008, p. 39).

CHAPTER III: The Image of the Other Woman in Wide Sargasso Sea

Jean Rhys's postcolonial novel, Wide Sargasso Sea, which serves mainly as a prequel to Jane Eyre, describes Antoinette Cosway's life before and after her marriage to Edward Rochester, and later her life as Bertha Mason. The novel is set in the mid-19th century and tells the story of Antoinette Cosway, daughter of a white Creole plantation family who have lost their wealth and status in society due to the Emancipation Act. The first part of the novel, narrated by Antoinette, is a reminiscence of her childhood at the Coulibri Estate in Jamaica. Rejected by her mother, Annette, she seeks solace in the nature surrounding her and thus becomes alienated from the rest of society. Part two is mostly narrated by Rochester, who has just married Antoinette in Dominica. They spend their honeymoon in Granbois, where Rochester starts to feel like an outsider and begins to despise his wife and the Caribbean. Meanwhile, Antoinette begins to lose her grip on her sanity and eventually becomes estranged from her own self. In the third part of the novel, Rochester has taken Antoinette to England, where he locks her up in the attic of Thornfield Hall in the care of Grace Poole. In the end, as a consequence of this treatment, she has completely lost her identity as Antoinette Cosway and has transformed into Bertha Mason, the 'madwoman in the attic'.

As previously mentioned, Wide Sargasso Sea belongs to postcolonial literature; however, postcolonialism is difficult to define briefly because it is such a complex concept in itself. One reason for its complexity is that diverse nations and cultures are categorised under the term. Connected to this, postcolonial literatures serve as the basis for the study of the effects of colonialism. Beyond their particular and distinctive regional characteristics, these literatures share the experience of colonisation, out of which they emerged in their present form. They emphasize their differences from the assumptions of the imperial centre and assert themselves, which makes them distinctively postcolonial. In terms of postcolonial literary criticism, the Caribbean is a somewhat special area due to its unique history and hybrid nature. This is reflected in its population, as nearly the whole population of the West Indies is not native but has immigrated there from somewhere else, either voluntarily or involuntarily. For the purpose of this thesis, from the whole spectrum of postcolonial literary criticism I will consider only Caribbean postcolonial literature, since my analysis is primarily concerned with it.

There is a common tradition in postcolonial literature of 'writing back' against the English canonical text by re-telling a story from a different point of view. Jean Rhys's Wide Sargasso Sea is an example of this tradition. As Rhys's novel can be considered partly an adaptation of Jane Eyre, it is quite obvious that it could not have been written without its pretext.2 Nibras Jawad Kadhim, in his study Double Exile: Jean Rhys's Wide Sargasso Sea (2011), points out an important notion connected to the 'write back' tradition present in Wide Sargasso Sea: the novel serves as the voice of the silenced, also known as the Other. In narratives, the Others are characters from other ethnicities who are silenced and set up in opposition to the English characters. In Jane Eyre, the Other is represented in the form of Mr. Rochester's mad wife, Bertha. Kadhim explains that the Others are 'different and, therefore, unable to claim the English identity as their own', and continues by saying that they are also unable to 'break from the complications of their ethnic background to create an independent self' (2011, p. 589).

With the colonising of the West Indies and other Caribbean islands, the British also sought to express the distinction between the subordinated colonised and the homogeneous Britons endowed with pure 'Englishness'. The abolition of slavery enabled liberation among the population of the Caribbean islands, but on the other hand it resulted in the loss of their Englishness through colonial contamination. In the English narrative, the superior and desirable 'Englishness' finds a voice. Kadhim states that 'the construction and protection of English identity becomes a major theme of many nineteenth-century English novels' (2011, p. 590). The characters of English authors are chosen on purpose to present and emphasize the distinction between the English norm and the colonial Other. In Jane Eyre, this is indicated in Brontë's choice of Bertha, a West Indian Creole woman, as a contrast to Jane. While Jane's character features English attributes such as healthiness, chastity and modesty, Bertha is portrayed as a mad, blatantly sexual, violent Creole who needs restraint. In this way, Brontë can highlight the English superiority and 'Englishness' of Jane as opposed to the chosen Caribbean woman. In Wide Sargasso Sea, Jean Rhys shows resistance towards the 'Englishness' found in Jane Eyre by rejecting the distortion of the Creole character that colonial discourse creates.

Rhys felt compelled to write her own vision of the story in the form of Wide Sargasso Sea. As Kadhim explains, she was 'haunted by the figure of the first Mrs. Rochester whom one knows only by Rochester's biased, racist and repulsively gendered descriptions of her' (2011, p. 590). Bertha is defined as a monster by her English husband, and in Jane Eyre she only gets to express herself by roaring, grunting and laughing maniacally. Rhys wanted to account for Bertha's unexplained and seemingly unwarranted rage and madness by giving a voice to one who used to be silenced. With the rewriting of Bertha's story, Rhys tells the life of the West Indian Creole from another perspective. She takes the mysterious Other out of the hegemony of the English imperial narrative and demonstrates how the Other, the different, can abandon her marginal role and become essential and central. Rhys explained her impulse to rewrite Jane Eyre in an interview:

When I read Jane Eyre as a child, I thought, why should she [Brontë] think Creole women are lunatics and all that? What a shame to make Rochester’s first wife, Bertha, the awful mad woman, and I immediately thought I’d write the story as it might really have been. She seemed such a poor ghost. I thought I’d try to write her a life.1

Wide Sargasso Sea presents the problem of individuals who are trapped between two cultures and thus unable to identify fully with either. In Jane Eyre, because of her Creole origin, Bertha is silenced, mistreated and dehumanized, and presented as the madwoman in the attic. But with the rewriting of Jane Eyre, Rhys gives a story, a life and a voice to the marginalized Bertha. In this way, Rhys confronts the English colonising culture and encourages the reader to view Bertha as the tortured victim and Rochester as a cruel coloniser. From this point of view, Bertha becomes a sympathetic character, no longer the raving madwoman whose illness runs in the family. Kadhim explains that with this rewriting Rhys 'wants to relate the other side of the story as she believes Jane Eyre to be 'only one side - the English side' of the story' (2011, p. 591).

In a chapter of A Breath of Fresh Eyre: Intertextual and Intermedial Reworkings of Jane Eyre (2007), edited by Margarete Rubik and Elke Mettinger-Schartmann, Wolfgang G. Müller states in his essay, The Intertextual Status of Jean Rhys's Wide Sargasso Sea: Dependence on a Victorian Classic and Independence as a Post-Colonial Novel (2007), that 'One of the fascinating aesthetic paradoxes of Wide Sargasso Sea is that it is inseparably joined to Jane Eyre and yet as a work of art it is a completely original creation' (2007, p. 66, original emphasis). It is derived from the Victorian classic, and it shares elements of the plot, re-using characters from the earlier novel in Rhys's own version. This also implies the notion of intertextuality between the novels. Intertextual links are commonly found in postcolonial literature, as newly independent nations strive to create and assert a culture of their own. In the case of Wide Sargasso Sea, one of these links is the description of a very different image of Rochester and the first Mrs. Rochester than that portrayed by Brontë. Müller remarks that by giving a new title to her novel, Rhys 'obviously wanted to forestall the impression of a simple identity of her protagonist with Charlotte Brontë's' (2007, p. 66). He also adds, in connection with Rochester, that 'Nothing remains in the revisionary text of the Byronic romantic charisma with which Charlotte Brontë had endowed him' (Müller, 2007, p. 66). These particular intertextual links lead to the notion of cultural identity represented in Wide Sargasso Sea through the portrayal of Antoinette and Rochester, which I will discuss in parts two and three of this chapter.

III.1. Postcolonial Cultural Identity and Hybridity

In postcolonial literatures, the concept of cultural identity can be considered a central theme, as it inherently includes the notion of belonging. The conception of cultural identity played a crucial role in all postcolonial struggles. The distinction between the identities of the colonised and the coloniser is important in postcolonial cultural identity, as are the hybrid forms in between these two categories, such as the white Creole. Postcolonial literature often associates itself closely with the literature of the oppressed; it is therefore important to note that the literature of the oppressor can equally be considered postcolonial. Naturally, a person's cultural identity is greatly affected by whether they identify with the oppressor or the oppressed. Although binary opposites can be drawn between coloniser and colonised, and correspondingly between white and black, it cannot be stated that there is no variation within these categories, in addition to the hybrid cross-overs between the two opposites. It became one of the central concerns of postcolonial literary criticism to discuss these clear-cut categories, both the in-between position of hybrid forms and the terms themselves. Relating to this subject, I will consider some of the central theory and terminology.

Stuart Hall, in Cultural Identity and Diaspora (1990), describes two different aspects of cultural identity. The first defines an individual who lives among other individuals who share the same culture, history and ancestors; that is, it can be viewed as 'being', with a sense of unity and commonality. The second is centred around so-called 'becoming', the constant transformation of one's history, meaning the process of identification that shows the formation of identity. From this second position of cultural identity, we can understand the critical character of the colonial experience (Hall, 1990, pp. 223-25).

Bill Ashcroft explains the concept of representation in his book, Post-Colonial Transformation (2001), and draws a parallel between imagination, creation and cultural identity. He states that 'Cultural identity does not exist outside representation' (Ashcroft, 2001, p. 5). In this case, representation means that individuals can express themselves through their actions and constant statements about who they are. As Hall also emphasised in his work, one's identity is a product of one's surroundings, even though it is highly personal and individual. In the postcolonial context, a conflicting cultural setting can also challenge one's identity, to which individuals respond in very distinct ways. There are also strategies by which cultural identity is represented. In the colonial context, this aspect of representation concerns the positioning of the colonised by the coloniser. However much the colonised struggle to authorise themselves, the coloniser marks them as subordinate, inferior and marginal. Thus postcolonialism and postcolonial literature can be seen as a struggle for power between the coloniser and the colonised. In this power struggle, the division between self and other appears as otherness or alterity3, which has been one of the central concerns of postcolonial literary criticism.

Due to the hybrid nature of the Caribbean region, postcolonial cultural identity considers the notions of place and the sense of place particularly important. Silvio Torres-Saillant explains in his book, An Intellectual History of the Caribbean (2006), that when European settlers first arrived in the Caribbean, the native inhabitants, the Caribs, were made to work, and the severe labour quickly decreased the native population. Later, the consequent labour shortage was met by the importation of African slaves to work in the emerging plantations on the islands (Torres-Saillant, 2006, p. 16). This had a significant impact on the cultural identities of the region's population and created the cultural process of Creole society, Creolisation. In The Post-Colonial Studies Reader (1995), edited by Bill Ashcroft, Gareth Griffiths and Helen Tiffin, extracts from various theorists can be found, including from The Development of Creole Society in Jamaica 1770-1820 (1971) by Edward Kamau Brathwaite. Brathwaite describes creolisation as 'a way of seeing the society, not in terms of white and black, master and slave, in separate nuclear units, but as contributory parts of a whole' (Ashcroft et al., 1995, p. 203). In this way, the descendants of English settlers in the Caribbean, the white Creoles, included parts of both English culture and the indigenous culture of the colonised in their cultural identity. That is why the Creole population can be positioned between two cultures, creating a unique cultural identity separate from both.

Postcolonial literary criticism has adopted the term hybridity to describe this recreation of cultural identity. Hall explains that, especially in the Caribbean, identities are 'constantly producing and reproducing themselves anew, through transformation and difference' (1990, p. 235). Hall puts it as 'a matter of 'becoming' as well as of 'being'', so the concept of hybrid identity is not a fixed one but goes through constant transformation (1990, p. 225). In this sense, a hybrid identity cannot be defined through one's history alone; matters of similarity and difference in relation to the surrounding environment must also be taken into account.

III.2. Creole Cultural Identity and In-Betweenness

Just as Rhys never feels she fully belongs but rather feels displaced in the world, this is mirrored in her unsure place in literature. Critics have discussed her controversial literary identity: whether she is an English or a West Indian writer, and whether her work should be considered Caribbean. The double marginalisation of her identity derives from the fact that she is neither considered alongside Caribbean writers nor among European women writers.

In Wide Sargasso Sea, Rhys challenges cultural definitions and gives particular emphasis to double culture as a source of deep anxiety. This emphasis comes from the fact that she was of European descent yet born and brought up in the Caribbean; her world was therefore also shaped by the ambiguity of being an insider and outsider in both the metropolis, England, and the colony, the West Indies. This feeling is represented in her novel through Antoinette's dislocation as a white Creole. The cultural identity of a white Creole is a complex one, as they can also belong to the ranks of the coloniser. However, Antoinette has integrated parts of the black Caribbean cultures into her cultural identity as well, since she grew up in the Caribbean among the predominantly black population and has never visited the colonial centre. The landscape of her home island has also become an integral part of her identity, and being removed from that landscape causes her great discomfort. The strength of this effect is shown in the way Antoinette feels that a part of her is missing when she is not in the Caribbean.

Naturally, the white Creole experience differs from that of the black Caribbean. Kadhim takes Rhys's opinion about attitudes towards the Creoles as the basis of his discussion of their social position. He states that they are, 'in Rhys's opinion, misunderstood and maligned both by the blacks of the Caribbean islands and by the wealthier white Europeans who come to settle in the West Indies after slavery is abolished' (Kadhim, 2011, p. 591). Kadhim also explains the severe conflict between the white and black populations of the West Indies. He notes that although the Creoles were educated to consider England as home, 'they were also culturally marked and excluded as inferior colonials' (Kadhim, 2011, p. 591). At the same time, they were racially privileged in relation to the subaltern Africans.

In-betweenness, a state of alienation or loss of identity, can be caused by hybridity, as the process of hybridity turns the individual into an outsider in both cultures. Overcoming the effects of in-betweenness on one's cultural identity can be an extremely difficult process. Being rejected by a community that has become fused into one's identity can have a devastating effect on one's cultural identity, since belonging to a society is an integral part of who we are. A clear example of in-betweenness is the heroine of Wide Sargasso Sea, Antoinette Cosway, who is born in the middle of the conflict between the white and black populations of the West Indies. She is the daughter of a white Creole woman and a former slave-owner of English descent in Jamaica, which only fuels the hostility of the islanders. After the liberation of the black slaves by the Emancipation Act in 1833,5 the family sank into poverty, and while carrying the stigma of slavery, they are still viewed as a family of colonisers. In this way, as an impoverished white Creole from a slave-owning family, she is rejected and left alienated between the island's black and white populations. On the basis of her mother's Creole nationality, she is excluded from the fortune-seeking English community. The black community does not accept her because she is white, nor does she fit into the world of the whites, because they consider those of mixed race inferior to themselves. It can therefore be concluded that she does not belong anywhere.

At the beginning of the novel, the problematic nature of the white Creole is emphasised in Antoinette's explanation of her lonely, isolated existence: 'They say when trouble comes, close ranks. And so the white people did. But we were not in their ranks' (WSS, p. 15). Antoinette becomes a double outsider as a white Creole: she is a 'white nxxxxx' for the Europeans and a 'white cockroach' for the blacks, as she explains to Rochester in the novel:

‘ a white cockroach. That’s me. That’s what they [the blacks] call all of us who were here before their own people in Africa sold them to the slave traders. And I’ve heard English women call us white nxxxxxs. So between you I often wonder who I am and where is my country and where do I belong and why was I ever born at all’ (WSS, p. 93).

These two phrases contain the paradox of the doubly exiled Creole. Antoinette is forced to see herself as Other because she is scorned by both the black and white cultures, even though she is able to move between them. She is doubly exiled and has no place to truly belong, because she is not English enough for England, nor Caribbean enough for the Caribbean. Therefore the white Creole can no longer be considered a member of either culture. The sense of belonging is absent in her case, while in-betweenness is present, and it is this that drives Antoinette into madness.

III.3. The Effect of the Other on the Identity of the Coloniser

The coloniser becomes transformed into a hybrid when influenced by the local culture. Sander L. Gilman notes that individuals and groups formulate stereotypes and attributes as a response, constructing repressed mental representations. Since difference helps to draw an imaginary line between the Self and the Other, it is central to one's identity. However, while this difference distinguishes the Self from the Other, it can also threaten order and control. When the sense of order and control between the Self and the Other alters, it results in the disintegration of the identity the individual has created and internalized. 'Stereotypes arise when self-integration is threatened,' as Gilman claims: 'We project that anxiety onto the Other, externalizing our loss of control. The Other is thus stereotyped, labeled with a set of signs paralleling (or mirroring) our loss of control' (Gilman, 1985, pp. 18, 20). The 'good' Other is the one 'which we fear we cannot achieve,' as opposed to the 'bad' Other, which assumes the negative stereotype and is the one that 'we fear to become' (Gilman, 1985, p. 20). The 'bad' Other is the antithesis of the self, by which the self is defined, loaded with our anxiety over the loss of self-integration. The attributes individuals or groups assign to the Other are based not only on 'models from the social world'; they are also derived from specific historical contexts and perpetuated by a culture: 'Every social group has a set vocabulary of images for this externalized Other' (Gilman, 1985, p. 20).

In Wide Sargasso Sea, the cultural difference between the Creole and the coloniser is clearly visible in the characters of Antoinette and Rochester. As Rochester's English self encounters the racial Other in the West Indies, his narrative reveals him as the alien Other in a foreign land, through a series of shocks and discomforts caused by nervous confrontations between the Self and the Other. Rochester finds himself an object of strangeness and fear when he arrives at a village with the descriptive name of Massacre. He experiences these feelings when the first little boy he smiles at cries at the very sight of him, and he notices that the women outside are 'looking at us but without smiling' (WSS, pp. 60-62). Rochester is constantly annoyed by being watched by the Creoles, displaced as he is in a distant, lonely place. Sometimes his disturbance simply comes from an unaccountable anxiety of being watched: 'I woke next morning in the green-yellow light, feeling uneasy as though someone were watching me' (p. 76). He confesses to Antoinette that he feels 'very much a stranger' there and that the place is not on his side but on Antoinette's (p. 117). However, just as the people of the place see Rochester as a white colonist or simply a stranger, they are also subject to Rochester's gaze. His gaze is what differentiates him from the tropical world and its inhabitants, and as he perceives differences in the foreign land of the West Indies from the start of his passage, he draws an imaginary line demarcated by difference. Although Rochester has not fully recovered from fever, he refuses to take shelter from the rain in Caroline's house at Massacre, due to his knowledge of, or bias against, racial otherness. Moreover, even before reaching Granbois he expresses his antagonism not only toward the islands but also toward his newlywed wife. When he notices Antoinette for the first time, he criticizes her as the racial Other, dissociated from his English self: 'Long, sad, dark alien eyes. Creole of pure English descent she may be, but they are not English or European either' (p. 61). As an alien in this world of difference, Rochester finds himself displaced and helplessly situated in the heart of the hostile tropics, 'That green menace' (p. 135).

Rochester's loose contact with the motherland, England, poses a threat to his self-integration, and it is worsened, together with other factors, by his lingering uneasiness about the green world of the West Indies. As a second son, Rochester has received nothing from his own family; thus, as a penniless son, he manages to acquire all of Antoinette's wealth in a land far from England. Only by writing to his father and by establishing connections with Englishmen in the tropical land, like Richard Mason and Mr. Fraser, is his fragile tie with the motherland sustained. When he arrives in Jamaica, he also suffers an attack of fever that leaves him weakened and disoriented: 'I have had fever. I am not myself yet' (p. 61). All of these form the roots of Rochester's anxiety in a land foreign and unknown to him. His Self becomes vulnerable when he finds himself endangered by a world of difference, populated by people he does not know and dominated by languages he does not understand: 'everything round me was hostile' and 'whatever they were singing or saying was dangerous. I must protect myself' (p. 135). The only solution for Rochester is the retreat to England, and until he can make that possible, he has to maintain his sense of Englishness, preventing it from any contamination or disintegration. He projects his anxiety, his fear of the loss of control, onto the Other, and in so doing evokes in his mind negative representations of the Other. Returning to Gilman's explanation, the white colonist Rochester selects models from the history of European medicine that categorize the Other as the diseased, creating images of what he fears to become.

CONCLUSION

In this thesis I have analysed women's position and status in the Victorian era, and I have also examined its manifestations in Jane Eyre through a postcolonial approach. I have concentrated my analysis mainly on the novel's female protagonist, who behaves as a coloniser, and on another female character, who represents the colonised. I have, however, also included relevant points of discussion on other 'coloniser' characters, such as the male protagonist and another female character.

In the 19th century, life was divided into separate spheres. Men belonged to the public sphere because they were independent and possessed an active and decisive attitude, while women belonged to the private sphere because they were dependent and possessed the qualities of femininity. Thus men were considered superior and women inferior.

By examining the historical background of society, and particularly the governess, I have also examined it from a postcolonial perspective with examples from Jane Eyre. The relationship between men and women can be seen as the relationship between the coloniser and the colonised. In the novel, Edward Fairfax Rochester represents the dominant male coloniser, while his insane wife, Bertha Mason, represents the inferior female colonised. These traits can be found throughout the novel in their manner, civility, status and origin, and are confirmed by stereotypes as well. Jane, on the other hand, possesses an inferior status as a woman and a governess, but as a coloniser she becomes superior towards the colonised. Her whiteness and qualities signify that she is portrayed as the proper English woman, unlike Blanche Ingram. Although Blanche possesses a higher position in society and may seem the ideal white woman by her appearance, Lady Ingram's external features suggest that she is of another origin, and she still acts like an imperious coloniser.

I have concluded that male domination and colonial domination are analogous, as confirmed by the examples of certain relationships. The relationship between Rochester and Bertha represents the English ruling class dominating the 'savage' peoples, as does that between Rochester and Richard. Jane is attracted by Rochester's dominant behaviour; however, in different situations she degrades herself to the position of a slave, which makes her feel sympathy towards the colonised.

NOTES

1 Harrison, R. N. (1988). Jean Rhys and the Novel as Women’s Text, 128. Chapel Hill: University of North Carolina Press. Retrieved from https://books.google.hu/books?id=6nLcmpn89JkC&lpg=PP1&hl=hu&pg=PA128#v=onepage&q&f=false

2 To define adaptation, it is essential to observe what literary theory and criticism usually indicates with this word, and how the term is narrowed in this case. The term covers all kinds of versions and alterations of a specific text, for example a rewriting of a given text in the same genre. Every adaptation (of whatever nature) requires one or several 'predecessors', a pretext or source text, to be derived from. It is a conscious activity of an author during which a sovereign, 'new' work is created from the source text, one that paradoxically seeks to distance itself from its origin.

3 Alterity is derived from the Latin alteritas, meaning 'the state of being other or different; diversity, otherness'. The term was adopted by philosophers as an alternative to 'otherness' to register a change in Western perceptions of the relationship between consciousness and the world. In post-colonial theory, the term has often been used interchangeably with otherness and difference. However, the distinction that initially held between otherness and alterity – that between otherness as a philosophic problem and otherness as a feature of a material and discursive location – is peculiarly applicable to post-colonial discourse. The self-identity of the colonizing subject, indeed the identity of imperial culture, is inextricable from the alterity of colonized others, an alterity determined, according to Spivak, by a process of othering. The possibility of potential dialogue between racial and cultural others has also remained an important aspect of the use of the word, which distinguishes it from its synonyms. Ashcroft B., Griffiths G., Tiffin H. (1998). Key Concepts in Post-Colonial Studies, 11-12. (2nd ed.). London: Routledge.

4 The political struggle of colonized peoples against the specific ideology and practice of colonialism. Anti-colonialism signifies the point at which the various forms of opposition become articulated as a resistance to the operations of colonialism in political, economic and cultural institutions. It emphasizes the need to reject colonial power and restore local control. Ashcroft B., Griffiths G., Tiffin H. (1998). Key Concepts in Post-Colonial Studies, 14. (2nd ed.). London: Routledge.

5 In 1833, the Parliament abolished slavery throughout the British Empire. Johnson, R. (2003). British Imperialism, 16. New York: Palgrave Macmillan.

Caribbean/West Indian, centre/margin (periphery), class and post-colonialism, colonial discourse, colonialism, counter-discourse, creole, creolization, diaspora, discourse, dislocation, feminism and post-colonialism, hegemony, hybridity, marginality, metropolis/metropolitan, native, Other/other, othering, post-colonialism/postcolonialism, primitivism, savage/civilized, slave/slavery, subaltern, subject/subjectivity.

REFERENCES

Primary Sources

Brontë, C. [1847] (2008). Jane Eyre. New York: Oxford University Press.

Rhys, J. [1966] (1992). Wide Sargasso Sea. New York: W. W. Norton & Company.

Secondary Sources

Abrams, L. (2001). Ideals of Womanhood in Victorian Britain, 1-6. Retrieved from

http://www.bbc.co.uk/history/trail/victorian_britain/women_home/ideals_womanhood_01.shtml

Allingham, V. P. (2000). The Figure of the Governess, based on Ronald Pearsall’s Night’s Black Angels. Retrieved from http://www.victorianweb.org/gender/pva50.html

Anderson, Z. J. (2004). Angry Angels: Repression, Containment, and Deviance, in Charlotte Brontë’s Jane Eyre. Retrieved from http://www.victorianweb.org/authors/bronte/cbronte/anderson1.html

Ashcroft, B. (2001). Post-Colonial Transformation, 5. London: Routledge. Retrieved from http://samples.sainsburysebooks.co.uk/9781134556960_sample_515857.pdf

Burnett, J. (2002). Victorian Working Women: Sweated Labor. Retrieved from http://www.victorianweb.org/history/work/burnett2.html

Hall, S. (1990). Cultural Identity and Diaspora, 223-235. Retrieved from http://www.rlwclarke.net/Theory/SourcesPrimary/HallCulturalIdentityandDiaspora.pdf

Ionoaia, E. (2008). The Creolization of the Self ‘ From Jane Eyre to Wide Sargasso Sea. University of Bucharest Review, 10(2), 137. Retrieved from http://ubr.rev.unibuc.ro/wp-content/uploads/2010/10/elianaionoaia.pdf

Jacobsen, K. & Snodgrass, E. M. (n. d.). A Postcolonial Approach to the Novel. Retrieved from http://www.cliffsnotes.com/literature/j/jane-eyre/critical- essays/a-postcolonial-approach-to-the-novel

Kadhim, J. N. (2011). Double Exile: Jean Rhys’s Wide Sargasso Sea. Journal of The College of Education for Women, 22(3), 589-591. Retrieved from http://www.iasj.net/iasj?func=fulltext&aId=2052

Lecaros, W. C. (2005). The Victorian Governess Novel: Characteristics of the Genre. Retrieved from http://www.victorianweb.org/genre/wadso1.html

McLeod, J. (2000). Beginning Postcolonialism, 173-177. Manchester: Manchester University Press.

Meyer, L. S. (1990). Colonialism and the Figurative Strategy of ‘Jane Eyre’ from Victorian Studies, 33(2), 247-268. Bloomington: Indiana University Press. Retrieved from http://www.jstor.org/stable/3828358

Reagin, N. (n. d.). Women as ‘the Sex’ During the Victorian Era. Retrieved from http://webpage.pace.edu/nreagin/tempmotherhood/fall2003/3/HisPage.html

Thomas, S. (2008). Imperialism, Reform, and the Making of Englishness in Jane Eyre, 31-54. New York: Palgrave Macmillan.

Torres-Saillant, S. (2006). An Intellectual History of the Caribbean, 16. New York: Palgrave Macmillan.

Wojtczak, H. (n. d.). Women’s Status in Mid 19th-Century England. Retrieved from http://www.hastingspress.co.uk/history/overview.html

The EU Connected Continent

The proposal by the European Commission for an EU single market for electronic communications and to achieve a Connected Continent: How does the proposal seek to uphold (or not) the principle of harmonisation?

1. INTRODUCTION

1. On 11 September 2013 the Commission of the European Union launched its proposal to establish a ‘Connected Continent’: a single market for electronic communications with a single authorisation, coordinated spectrum management, a high level of cross-border consumer protection, and guarantees for net neutrality and end-user protection. This paper aims to describe how the proposal seeks to uphold (or not) the principle of harmonisation in some of its provisions.

2. On 11 September 2013, Neelie Kroes, the then Vice-President of the European Commission responsible for the Digital Agenda for Europe, presented a legislative package, ‘Connected Continent: Building a Telecoms Single Market’, calling it the most ambitious plan in 26 years of telecoms market reform. The package aims at removing obstacles to a real single market for electronic communications in the European Union and at supporting the telecommunications sector in investing in new technologies and services. Furthermore, the Commission wants to reduce the administrative burden of the authorisation processes, to coordinate the assignment of the radio spectrum at the level of the European Union and to increase network capacity. The proposal also seeks to eliminate the surcharges on international calls and the roaming costs. The proposal received some criticism because of the perceived lack of consultation and the hasty attempt to have it adopted in the previous legislature.

3. The proposal is based on article 114 of the Treaty on the Functioning of the European Union (TFEU), which allows for the adoption of measures ‘for the approximation of the provisions laid down by law, regulation or administrative action in Member States which have as their object the establishment and functioning of the internal market’. In its Impact Assessment of the proposal, the Commission refers several times to increased harmonisation in the area of spectrum management, the procedures for authorising operators in Member States, and the harmonisation of numbering resources, of consumer rules and of standardised EU access products. The regulatory principle of harmonisation, alongside the other regulatory principles described in article 8 of the ‘Better Regulation Directive’, will be used as guidance to analyse the package.

2. The EU Connected Continent.

2.1 Background of the proposal.

4. Telecommunications was traditionally a nationally regulated market with state-owned enterprises which held complete control over the equipment and services provided. The liberalisation initiated at the end of the 1980s ran largely in parallel with the rise of the internet. The regulatory framework of 2002 was intended to adapt the regulations to market and technological changes. The Commission also wished to consolidate the existing liberalisation and harmonisation legislation.

The rise of mobile internet since 2007 and the increasing use of the internet, both in the number of users and in the volume of data traffic, led the Commission to review the convergence of telecoms and internet in 2010.

5. The EU Regulatory Framework for electronic communications networks and services (the Regulatory Framework) is the basis for all national telecommunications laws in the EU Member States. The Regulatory Framework provides general and technology-neutral rules applying to all electronic communications networks and services, covering fixed and wireless telecoms, data transmission and broadcasting transmission. It also contains provisions on the structure and functioning of national telecommunications providers. It further sets out the framework for both general rules applying to all providers of electronic communications networks and services and particular rules which may only be imposed by national regulatory authorities (NRA) on operators with significant market power (SMP).

One of the main objectives of the Regulatory Framework was to align the sectoral regulation of the electronic communications market with general competition principles.

The Regulatory Framework consists principally of four key directives:

- a Directive on a common regulatory framework for electronic communication networks and services (the Framework Directive);

- a Directive on the authorisation of electronic communications networks and services (the Authorisation Directive);

- a Directive on access to, and interconnection of, electronic communications networks and associated facilities (the Access Directive); and

- a Directive on universal service and users’ rights (the Universal Service Directive).

6. This framework is supplemented by the Commission’s Radio Spectrum Decision, the Privacy and Data Protection Directive, and the Commission Directive on Competition in the market for electronic communications networks and services. These important documents are accompanied by various supporting Regulations, Decisions and Recommendations of the Commission.

The general structure of the Regulatory Framework entered into force in July 2003. Following a lengthy political process which started in 2006, on 19 December 2009, two Directives entered into force which update and amend the Regulatory Framework (the “2009 Revisions”):

- the Better Regulation Directive amends the Framework, Authorisation and Access Directives; and

- the Citizens’ Rights Directive amends the Universal Service and the Privacy and Data Protection Directives.

The new Regulatory Framework also established a new pan-European regulatory institution, the Body of European Regulators for Electronic Communications (BEREC).

2.2. Overview of the EU Connected Continent proposal.

2.2.1. Aim and content of the proposal.

7. AIM OF THE PROPOSAL. The general aim of the proposal is to enable the completion of a European single market for electronic communications by removing the identified obstacles, so that citizens and businesses can access electronic communications services wherever they are provided in the Union, without cross-border restrictions or unjustified additional costs, and so that companies providing electronic communications networks and services can operate and provide them wherever they are established or wherever their customers are situated in the EU.

8. OBJECTIVES OF THE PROPOSAL. The proposal seeks to reduce the administrative burden of gaining authorisation to operate, to coordinate radio-spectrum assignment at EU level, and to increase network capacity. It will also lead to the elimination of premiums on international calls and on incoming calls when roaming.

The package includes proposals for the reform of the market’s regulatory framework, addressing cross-border issues and introducing a range of new rights for users and service providers. The reforms aim to:

- create a single EU authorisation system, requiring operators to notify the national regulatory authority (‘NRA’) in the country in which they are established rather than the NRAs of each Member State in which they operate;

- preserve net neutrality through non-discrimination, by preventing the blocking and throttling of content and services by service providers;

- introduce new rights for consumers, including requiring additional information on service specifications to be included in contracts, notice one month in advance of contract rollover with an option to oppose it, and the right to terminate any contract after six months without penalty;

- end roaming charges, which means that operators lose the right to charge roaming fees for incoming calls whilst a customer is travelling abroad in the EU;

- co-ordinate spectrum by way of a set of principles and criteria to ensure the development of an EU wireless space.

9. STRUCTURE OF THE PROPOSAL. The structure of the proposal is as follows:

Chapter I General provisions

Article 1 and 2: General provisions.

Articles 1 (Objective and scope) and 2 (Definitions) contain the general provisions, including the relevant definitions. They establish regulatory principles pursuant to which the regulatory bodies involved shall act when applying this regulation in conjunction with the provisions of the existing framework.

Chapter II Single EU authorisation

Article 3 to 7: Single EU authorisation

In the considerations 9 to 16, the Commission refers to the difficulties of providing cross-border electronic communications: providers that want to offer services in several countries still need to notify, and pay fees in, each individual host Member State. The Commission wants to end this situation.

Chapter III European inputs

Section 1 – Coordination of use of radio spectrum within the single market

Article 8 to 16: Spectrum management

The second part of the proposal relates to radio spectrum management, which is today a matter for national authorities: the radio spectrum used for high-speed wireless broadband is allocated at national level. The Commission considers that problems at national level (mainly technical and bureaucratic delays) have led to procedural and licensing delays in spectrum allocation.

Even in those areas where harmonisation on this matter has started, the Commission considers it not to be efficient. In consideration 17 of the proposal, the Commission refers to ‘the piecemeal process of authorising and making available the 800 MHz band for wireless broadband communications, with over half of the Member States seeking a derogation or otherwise failing to do so by the deadline laid down in the Radio Spectrum Policy Programme (RSPP) Decision 243/2012 of the European Parliament and the Council’. The Commission considers the radio spectrum a public good and an essential condition for the creation of the internal market for mobile broadband in the Union, which contributes to the implementation of the Digital Agenda for Europe.

Section 2 - European virtual access products

Article 17 to 20: European virtual access

Articles 17 up to and including 20 of the proposed regulation draft a European virtual broadband access product that enables services of high quality and equal functionality to be offered throughout the European Union. The Commission refers here to EU-harmonised virtual broadband access products (virtual unbundling, IP bit-stream and terminating segments of leased lines).

Chapter IV Harmonised rights of end-users

Article 21 to 29: Rights of end-users

The articles in chapter IV define rules to cope with the situation in the EU where electronic communications providers and end-users are confronted with ‘inconsistent rules regarding rights of end-users, leading to uneven levels of protection and a variety of diverging rules to comply with in different Member States’. Because of this fragmentation of rules, operators lose profits and consumers lack protection. Furthermore, the provision of services across borders is hindered and consumers are reluctant to use such services. The Commission wants to enhance the level of consumer protection across the EU by harmonising the rules defining the rights of end-users.

Chapter V Facilitating change of providers

Article 30: Facilitating change of provider

According to the Commission, improving the rules which allow consumers to switch providers will promote ‘market entry and competition between electronic communication providers and allow end-users to choose more easily the provider which best meets their specific needs’. As described in the considerations 62 to 66, harmonising the principles for operator-switching procedures will allow end-users to switch providers when it is in their interests. End-users should be able to switch without being hindered by legal, technical or procedural obstacles.

Chapter VI Organisational and final provisions

Article 31 to 40: Organisational and final provisions

This Chapter contains general provisions concerning the sanctioning powers of the competent national authorities and rules on the Commission’s power to adopt delegated or implementing acts.

10. LEGAL BASE OF THE PROPOSAL. The proposal is issued on the basis of Article 114 of the Treaty on the Functioning of the European Union (TFEU) and therefore serves to improve the functioning of the internal market, as it relates to the internal market for electronic communications and its functioning.

11. POSITIONS OF STAKEHOLDERS TOWARDS THE PROPOSAL. Some Member States appear to have reservations about the proposal’s complexity as well as about the fact that it had not been preceded by the usual formal consultation process.

The European Parliament introduced 820 amendments to the proposed regulation. This reflects not only the complexity of the proposal, but also considerable doubt about its sustainability. The European Parliament nevertheless voted the package through on 3 April 2014, electing to amend some of the Commission’s proposals and adopting its position on the new proposed Regulation (although the text may still be amended by the Council). In addition, the European Commission is to perform a full evaluation of the entire regulatory framework for electronic communications by 30 June 2016.

In the second half of 2014 the Commission established its position on the European Parliament’s first-reading amendments, and some discussions on the proposal were held within the Council and its preparatory bodies.

In its Draft Report, the Committee on Industry, Research and Energy of the European Parliament concludes that some of the proposed measures should be subject to a deeper, structured public consultation and a thorough ex-ante assessment of the expected impact, and should consequently be included in the next review of the framework for electronic communications.

The Body of European Regulators for Electronic Communications (BEREC) is concerned about the shift of power from domestic regulators to the Commission, and warns that the single authorisation process may be operationally more costly and burdensome than the current system. It also argues that coordinating spectrum bidding gives larger operators an advantage given the capital and resources needed.

Telecommunications lobby groups such as the European Telecommunications Network Operators’ Association (ETNO) are more positive about the proposal and welcomed the harmonisation of spectrum auctions and releases and the Recommendation on costing, but argue that allowing more market restructuring (mergers) and a change to a fully harmonised and lighter pan-EU framework are missing elements.

Other stakeholders claim that the proposal was rushed and argue that it lacks incentives for investment and innovation, does not address the necessary consolidation of mobile markets and does not reduce the overall regulatory burden. Large telecoms operators say that roaming and price caps would deprive providers of the income needed to modernise networks. Some analysts argue that spectrum coordination would require leaders in spectrum release to align themselves with laggards, and as such may be counter-productive and slow growth, and they doubt whether the proposal will fully achieve its aims. Others criticise the proposal for not being bold enough, since it does not create a single EU regulator or EU-level allocation of spectrum.

2.2.2. Overview of the principle of harmonisation as provided for under the EU Connected Continent proposal.

2.2.2.1 The six governance principles.

12. In the document COM (1999) 539, Communication from the Commission of 10 November 1999, The 1999 Communications Review – Towards a new framework for Electronic Communications Infrastructure and associated services, principles for regulatory action were defined which underpin the proposed regulatory framework. These regulatory principles, which are generally accepted as good governance principles inherent to EU policies, are to govern regulatory action at the European and national level.

The principles are:

- Regulation should be based on clearly defined policy objectives.

- Regulation should be kept to the minimum needed to meet the defined objectives, which is another formulation of the principle of proportionality.

- Regulation should enhance legal certainty and be consistent over time to allow all concerned to make investment decisions with confidence.

- Regulation should be technologically neutral, objective and non-discriminatory, so all equivalent technologies or services should be treated in the same way.

- Regulation should be enforced as closely as practicable to the activities being regulated, in line with the principle of subsidiarity, under which the European level should take action only if the objectives of the proposed action cannot be sufficiently achieved by the Member States and can therefore, by reason of the scale or effects of the proposed action, be better achieved by the Community.

de Destree further mentions the following principles:

- Regulation should be flexible, to be able to respond to rapid market developments. At the European level, EU law should provide only for general objectives and minimal procedural requirements to allow differentiation across Member States. At the national level, on the other hand, broad powers and a margin of discretion should be left to regulatory players.

- Regulation should be transparent. This means that legislation and the number of legal instruments should be kept to a minimum and that all regulators should widely consult the participants and make their information and decisions easily accessible.

- European regulatory bodies should share a common regulatory culture to ensure the establishment of a single market for electronic communication services.

2.2.2.2. Harmonisation.

13. DEFINITION. Article 114.1 TFEU serves as the legal basis for adopting measures for the approximation of the provisions laid down by law, regulation or administrative action in Member States which have as their object the establishment and functioning of the internal market.

The objective of article 114 TFEU (ex Article 95 TEC) is to harmonise national laws and to ensure that the same rules are applicable throughout the whole EU preventing discrimination against equivalent goods from any Member State. Such an application of common rules throughout the whole EU is designed to contribute to the establishment and good functioning of the internal market (which is explicitly required by Article 114.1 TFEU). This also implies that, once the deadline for transposition expires, Member States are in principle not allowed to maintain any national legislation or any other measure which is inconsistent with the Directive.

Harmonisation is nearly always mentioned together with the principle of subsidiarity. Together, the principles of subsidiarity and harmonisation imply a more harmonised regulatory culture, but not necessarily for all market segments: the optimal level of governance needs to be found for each aspect of regulation.

In its most common legal definition, harmonisation of legislation means the approximation of national laws, regulations and administrative provisions where differences between these national rules are considered to be among the causes of trade barriers.

Harmonisation is not synonymous with unification or standardisation. Unification means that a legislative measure replaces the existing legal systems with a single system. This leads to uniform legislation in all the Member States, which have to adopt this measure. Harmonisation also leads to a uniform system, but gives Member States the opportunity (and duty) to transpose a legislative measure into their own law. The purpose of harmonisation is therefore to create equal rights and obligations, while leaving the Member States free to do so in the way that fits them best.

14. WHY HARMONISATION? The Commission wants to establish a single market for electronic communications, as Europe is fragmented into 28 separate national communications markets and EU rules on, for example, authorisations, regulatory conditions, spectrum assignment and consumer protection are implemented in diverging ways. Harmonisation seems indispensable to reach the objectives of the Commission. In order to create an internal market, obstacles such as transportation difficulties, different languages, cultural differences and local customs should be overcome.

Harmonisation addresses just one of the obstacles to the realisation of the internal market, namely the differences between the legislation of the different Member States. The Commission assumes that by removing legislative differences a major part of the obstacles to an internal market will be taken away. Harmonisation has a dual purpose: first, to eliminate the existing differences between national laws, and thereby, second, to ensure that competition is not distorted.

15. POSITIVE HARMONISATION VERSUS NEGATIVE HARMONISATION. The form of harmonisation by which a common standard is introduced throughout the Community is referred to as ‘positive’ harmonisation, because new standards are introduced. The removal of existing barriers by striking down national laws and regulations is known as ‘negative’ harmonisation. When using positive harmonisation, the Commission can choose between varying degrees of harmonisation to reach the desired level.

16. MAXIMUM AND MINIMUM HARMONISATION. Through positive harmonisation the European legislator can bring national legislations closer together. Several methods are possible, as there are more degrees of harmonisation than just the maximum and the minimum. Maximum harmonisation has many synonyms, including ‘full’, ‘total’ or ‘exhaustive’ harmonisation. According to some legal doctrine the term ‘total’ harmonisation is to be avoided, since it is impossible to harmonise everything. The term ‘exhaustive’ harmonisation is mainly used by the Court of Justice.

17. Maximum harmonisation leaves the Member States no scope for further independent action in the field covered by the harmonising directive. As far as the area covered by a maximum harmonisation directive is concerned, Member States must ensure that their national system provides exactly what is required by that directive; it is not possible to introduce a stricter standard. With minimum harmonisation, the Community sets down a minimum standard with which all the Member States must comply. Beyond this minimum level, Member States are free to set their own standards, subject to the requirements of the TFEU. In other words, a general minimum standard is imposed below which the Member States may not go, but above which they may: when the European Union imposes a certain level of protection within a certain scope, Member States may offer more protection, but not less. An advantage of minimum harmonisation is that it preserves the individuality of each Member State. A disadvantage of this form of harmonisation is the so-called ‘race to the bottom’, as minimum harmonisation can create the effect that Member States adjust legislation with a higher level of protection to a less stringent level. This is for example the case in consumer law, when minimum harmonisation leads to a lower level of protection.

18. Maximum harmonisation does not mean that the highest possible form of protection is imposed from above; it obliges the Member States to transpose a directive which aims to impose this degree of harmonisation into their national law. The Member States are not allowed to deviate from the directive by offering more or less protection. Nor does the concept of full harmonisation mean that every aspect of the matter must be harmonised.

19. At the start of the European Union, the idea was to create a common basis of rights which the Member States were obliged to further deepen and complete, and the mechanism of minimum harmonisation was used to this end. From the point of view of the Member States, minimum harmonisation was the tool of choice: there was no overly large intervention in national legislation and the Member States could themselves determine and impose more stringent measures. Many Member States made use of this by, for example, allowing a longer period than the minimum periods in the directives for consumer protection in the domains of distance selling, door-to-door sales and time-sharing.
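The contrast between minimum and maximum harmonisation described above can be captured in a small toy model. The sketch below is illustrative only: the function name, the numeric ‘protection levels’ and the mode labels are invented for this example and are not taken from the proposal or from any EU instrument.

```python
# Toy model (illustrative, not a legal source): how minimum vs. maximum
# harmonisation constrains a Member State's national level of protection.

def complies(national_level: int, eu_level: int, mode: str) -> bool:
    """Check a national standard against an EU harmonising measure.

    mode = "minimum": the EU level is a floor; states may go above it,
                      but not below (risk of a 'race to the bottom' aside).
    mode = "maximum": the EU level is binding; states may not deviate
                      in either direction.
    """
    if mode == "minimum":
        return national_level >= eu_level
    if mode == "maximum":
        return national_level == eu_level
    raise ValueError(f"unknown harmonisation mode: {mode}")

# A stricter national rule is allowed under minimum harmonisation...
assert complies(national_level=8, eu_level=5, mode="minimum")
# ...but not under maximum harmonisation, which fixes the level exactly.
assert not complies(national_level=8, eu_level=5, mode="maximum")
assert complies(national_level=5, eu_level=5, mode="maximum")
```

The two branches make the doctrinal point concrete: minimum harmonisation is a one-sided constraint, maximum harmonisation an equality.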

3. What does the proposal bring towards more harmonisation? A critical assessment of the principle of harmonisation in the proposal.

3.1. Harmonisation in the domain of single authorisation.

20. SINGLE AUTHORISATION. The first topic dealt with in the proposed regulation is the ‘single authorisation’. In the considerations 9 to 16 of the proposal, the Commission describes how hard it is to offer services across borders compared with providing services in a single Member State. The Commission refers to the fact that providers who want to offer services in several Member States need to notify this in several countries and also have to pay fees in those different countries. The Commission intends to put an end to this situation.

In article 4, the Commission therefore proposes that European electronic communications providers need to submit only one notification (single authorisation), to the supervisor in their home Member State. However, they have to make this notification in the language of the home Member State as well as in the languages of all the other Member States where they want to start offering services.

The reason for this is that the home Member State will inform the other Member States about the notification. Although a single notification is sufficient, the provider of course remains bound by any deviating regulation in the Member States in which it operates.

Regarding the contribution to the cost of universal service and the costs of the supervisor, the European provider only has to deal with the home Member State.

Each Member State shall check whether a provider follows its national regulations and inform the home Member State if this is not the case. Article 6 states that only the home Member State can suspend or withdraw the right of a provider to provide services, whether or not at the request of another Member State in which the infringements have occurred.
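The notification and suspension mechanics just described can be sketched as a toy model. Everything in the snippet (the class name, the provider name, the country codes) is hypothetical and serves only to illustrate the flow: one notification to the home supervisor covers the host states, and only the home supervisor may suspend the authorisation.

```python
# Illustrative sketch of the proposed single EU authorisation flow
# (articles 3-7 of the proposal); all names here are invented.

class SingleAuthorisation:
    def __init__(self, provider: str, home: str):
        self.provider = provider
        self.home = home          # NRA of the home Member State
        self.informed = set()     # host-state NRAs informed via the home NRA
        self.active = True

    def notify(self, host_states) -> None:
        """One notification to the home NRA covers all host states."""
        self.informed.update(host_states)

    def suspend(self, requesting_state: str) -> bool:
        """Only the home NRA may suspend; a host NRA must route its
        request through the home Member State instead."""
        if requesting_state == self.home:
            self.active = False
            return True
        return False

auth = SingleAuthorisation("ExampleTelecom", home="NL")
auth.notify({"DE", "FR"})
assert auth.informed == {"DE", "FR"}
assert not auth.suspend("DE")   # a host NRA cannot suspend directly
assert auth.suspend("NL")       # the home NRA can
assert auth.active is False
```

The asymmetry between `notify` and `suspend` mirrors the home/host division of competences that BEREC criticises below.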

21. HARMONISATION WITH REGARD TO THE SINGLE AUTHORISATION. The proposed measures indicate that the Commission wants to use maximum harmonisation to remove the unnecessary obstacles in the authorisation regime and in the rules applying to service provision, so that an authorisation obtained in one Member State is valid in all Member States, and so that operators can provide services on the basis of a consistent and stable application of regulatory obligations.

22. ASSESSMENT OF THE MEASURES WITH REGARD TO THE SINGLE AUTHORISATION. The report of the Council of the European Union mentions that most delegations were sceptical about a single EU authorisation, as they consider it unclear what concrete problems are addressed and why those problems could not be solved by better implementation of the current framework. There are also the risks of unequal treatment of EU and national providers, of forum shopping, and of an impact on the competences of host and home NRAs, and the proposal would increase complexity, administrative burden and related costs.

BEREC expressed its support for the elimination of unreasonable obstacles to the provision of cross-border services across the EU, but is not in favour of a single authorisation. BEREC fears that providers will be hampered by the fact that the notification must be made in all languages, which will not reduce operators’ administrative burden. Furthermore, the introduction of ‘home’ and ‘host’ supervisors could result in a special relationship between the provider and the home national supervisor. In addition, the proposal would lead to differentiated regulation for providers who are only active in one country compared with providers who want to offer services in more Member States.

BEREC therefore proposes a single notification via a European template, which would strike the right balance between, on the one hand, a light authorisation regime ensuring smooth access to the market by operators and, on the other hand, the need of national regulatory authorities (NRAs) to ensure basic market monitoring.

The Committee on Industry, Research and Energy (ITRE) is also not convinced by the proposal and proposes to delete most of article 3 and articles 4 to 7.

3.2 Harmonisation in the domain of radio spectrum coordination in the EU.

23. RADIO SPECTRUM COORDINATION. The internal market for electronic communications and radio equipment is still far from complete. With the proposal, the Commission wants more coordination of the use of radio spectrum within the single market, which would ensure the synchronised availability of spectrum and the application of consistent conditions attached to its use across Europe, thereby ensuring the efficient use of spectrum. At the same time, this would support a predictable investment environment for high-speed networks, including their wide territorial coverage, which is also a long-term end-user interest. It would mean that Europeans get more 4G mobile access and Wi-Fi. Furthermore, mobile operators would be able to develop more efficient and cross-border investment plans, thanks to stronger coordination of the timing, duration and other conditions of spectrum assignment. Member States would remain in charge, and continue to benefit from the related fees from mobile operators, while operating within a more coherent framework. Such a framework would also expand the market for advanced telecoms equipment.

24. According to the Commission, it is time for harmonisation to take place, especially regarding the conditions, procedures, costs and duration of frequency licences, as only in this way can the economies of scale emerge that are needed for the cost-effective development of network equipment and mobile peripherals:

To put an end to this unsustainable situation, harmonisation of spectrum inputs must be ensured by:

– Defining common regulatory principles applicable to Member States when defining conditions on the use of spectrum which is harmonised for wireless broadband communications.

– Empowering the Commission to adopt implementing acts to harmonise spectrum availability, the timing of assignments and the duration of rights of use for spectrum.

– A consultation mechanism enabling the Commission to review draft national measures concerning the assignment and the use of spectrum.

– Simplifying conditions for the deployment and provision of low-power wireless broadband access (‘Wi-Fi’, small cells) to enhance competition and reduce network congestion.

25. As mentioned above, the Commission proposes in articles 8 to 16 of the Connected Continent legislative proposal several measures to harmonise the use of radio spectrum for the provision of cross-border services, aiming at the gradual removal of national barriers to the internal market, including the different national conditions for allocating and assigning spectrum.

26. In article 8 some measures are proposed that are intended to increase the harmonisation of radio spectrum use for wireless broadband communications across EU Member States. The scope of the draft regulation is thus limited to the harmonised radio spectrum for wireless broadband communications, and the draft regulation is without prejudice to the right of Member States to benefit from the fees to be paid by operators for the use of the spectrum, and to their right to organise and use their radio spectrum for public order, public security and defence.

27. The national regulatory authorities (NRAs) are required to apply common regulatory principles and criteria when defining the conditions attached to licences for the use of harmonised EU radio frequencies for broadband communications, as described in articles 9, 10 and 11. Moreover, NRAs must comply with specified authorisation conditions when defining assignment procedures. In particular, they are asked to establish timetables for assignment procedures, which will be used to set up a common timetable at EU level. Under article 12, rights of use for radio spectrum will have the same duration in all EU countries.

28. In article 13, procedures are proposed for the coordination of authorisations and conditions for the use of radio spectrum for wireless broadband in the internal market. These procedures create a cooperation mechanism involving the NRAs and the Commission for better coordination of national assignment procedures and authorisation conditions. A veto right is also established: the Commission would have the power to review draft national assignment procedures and require amendments. Moreover, the Commission could propose the withdrawal of a national assignment procedure if it conflicts with the required conditions for harmonisation.

29. The Commission also wants to extend the proposal to the part of the spectrum used for Radio Local Area Networks (RLAN, or Wi-Fi) and low-power, small-size cellular access points (articles 14 and 15 of the proposal).

Lastly, the removal of national barriers to the internal market is pursued by promoting greater coordination between EU Member States to ensure the same conditions of access to radio spectrum across the EU. In this regard, the Commission is given a right of intervention in case of inconsistencies that work against cross-border coordination (article 16).

30. HARMONISATION. With regard to the harmonisation of radio spectrum management, M. Massaro and E. Bohlin point out that it is important to clarify the concept of harmonisation, as two types of harmonisation can be distinguished. A first, top-level type of harmonisation concerns the allocation of radio spectrum to certain uses, meaning that specific radio spectrum bands, and the technical conditions applied to these bands to avoid interference, are harmonised at EU level. In this regard, the Radio Spectrum Committee (RSC) was set up with the adoption of the Radio Spectrum Decision in 2002, and top-level harmonisation measures are adopted thanks to the work undertaken by this Committee. The RSC is composed of Member State representatives and chaired by the European Commission. It develops technical conditions for the use of a specific radio spectrum band that are then included in what are called ‘implementing decisions’. This is a well-established cooperation mechanism between the European Commission and the EU Member States, which is why this type of harmonisation is not an issue in the EU.

The second type of harmonisation instead concerns the assignment of radio spectrum frequencies to users. This is a distinct level of harmonisation because, unlike allocation, the assignment of radio spectrum frequencies is a national responsibility. Radio frequencies or radio frequency channels within each allocated radio spectrum band are assigned to specific individual users by means of national authorisations. National regulatory authorities (NRAs) decide on the conditions that are attached to the national award procedure (e.g. spectrum caps) and to the actual licences that are awarded (e.g. the duration of a licence). These conditions vary considerably from country to country, and the EU has been unable to set common criteria to be used by NRAs when assigning radio spectrum. For this reason, the European Commission aims to remedy this situation with the Connected Continent legislative proposal by partly modifying the existing regulatory system for radio spectrum.

31. ASSESSMENT. Member States reported through the Council of the European Union that they believe that the existing instruments and institutional set-up should be used in a more effective manner to deliver the expected results. Furthermore, many of the new provisions are considered to be too prescriptive and often overlapping with, or even conflicting with, provisions of EU or national legislation. Some Member States could also imagine another legal instrument, e.g. a Commission recommendation, being better suited to deal with these issues.

32. The proposals with regard to radio spectrum coordination in the EU give far-reaching powers to the European Commission at the expense of the powers of national supervisors. The Body of European Regulators for Electronic Communications (BEREC) contests this approach and states that the award of spectrum is an issue of a political nature and should therefore not be treated as a merely technical procedure. Furthermore, imposing tightly defined parameters and detailed criteria to be taken into account when awarding spectrum risks hampering innovation and regulatory advances. The proposal also introduces a new layer of bureaucracy, which could slow down spectrum release and would not necessarily lead to more efficient spectrum usage.

The European Parliament has also proposed in its reading of the proposal the introduction of minimum licence terms of 25 years for spectrum in harmonised bands, which would be retroactive, as well as provisions to facilitate spectrum trading. BEREC fears the consequences of the proposed measures, such as changing licence durations retroactively.

BEREC believes that the proposed harmonisation objectives could be more effectively achieved by less intrusive, more focused and proportionate means within the current institutional set-up. Consideration should be given to the possibility of developing best practices around auction design (including around terms such as licence duration and conditions for spectrum sharing, about which the Commission has expressed concern), within the existing framework.

33. The question is what the effect of these proposed measures will be in practice. Regardless of the final interpretation of the regulations, the effects will probably be limited in the short and medium term. All frequency bands that are currently being used have been auctioned and will therefore not be available for the market. In addition, a number of countries have recently completed the multiband auction or will do so in the short term. Since the European Commission has no powers to intervene in already licensed spectrum, it could take some time before the envisaged measures achieve the intended effect of uniform spectrum management for the currently important frequency bands.

3.3 Harmonised rights for end-users.

3.3.1. Harmonisation of end-user rights.

34. The Commission considers that greater harmonisation in the area of the protection of end-users by means of a new regulation is necessary because not enough has been achieved with earlier directives. The Commission proposes to replace the consumer protection provisions of the Universal Service Directive (Directive 2009/136/EC) with new provisions.

The Telecommunication Single Market proposal should address the problems consumers face in a fragmented European market by introducing certain common consumer standards; removing charges for incoming calls while roaming, as well as unjustified surcharges for intra-EU calls; and introducing new, common consumer protections, including to safeguard access to the open internet.

The main consumer protection proposals are:

– Article 23 Net neutrality: while operators would be allowed to provide ‘enhanced quality’ services over the net, they would not be allowed to impair the general quality of access to other internet content when providing these services. This issue will not be discussed in this paper.

– Articles 25 and 26 Transparency: operators should be required to publish more detailed information on the terms and conditions under which services are provided, including prices and quality.

– Article 27 Expenditure control: operators should be required to provide end-users with ways to control their expenditure and avoid bill shocks.

– Article 28 Contract termination: the conditions under which consumers can terminate contracts without having to pay penalties would be regulated.

– Article 30 Switching: the proposal seeks to harmonise various elements of the switching process. This includes a requirement that the process is led by the receiving provider.

Furthermore, the Commission is of the opinion that consumers will also be more willing to buy from providers in other Member States if they know that they can rely on the same set of rules.

35. END-USER? The wording used in the articles of Chapter 4 of the draft Regulation differs with regard to the extent to which the provisions cover professional end-users. In articles 21, 23, 24 and 27, the term ‘end-user’ is used and includes professional end-users. Article 25 is applicable ‘for offers which are individually negotiated’. Article 26 mentions in its first paragraph ‘consumers, and other end-users unless they have explicitly agreed otherwise’ and in its second paragraph ‘end-users, unless otherwise agreed by an end-user who is not a consumer’. Article 28, first paragraph, talks about ‘consumers’, while the second paragraph mentions ‘consumers, and other end-users unless they have otherwise agreed’. Article 30 is applicable to ‘all end-users’. Consideration 40 of the proposal stipulates: ‘Where the provisions in Chapters 4 and 5 of this Regulation refer to end-users, such provisions should apply not only to consumers but also to other categories of end-users, primarily micro enterprises. At their individual request, end-users other than consumers should be able to agree, by individual contract, to deviate from certain provisions.’

There is little consistency to be found in these descriptions. Apparently the Commission wants the regulation to be applicable to all end-users unless the non-consumer requests a deviation?

36. The current EU regulatory framework provides for sector-specific consumer protection for users of electronic communications services. A number of the EU provisions in this context have an enabling character and the telecoms consumer rules in general are considered a set of minimum harmonisation measures. This means that EU consumer protection rules in telecommunications are now implemented with varying levels of detail, focus and impact at national level.

In consideration 40 of the proposal, the Commission clearly shows its intention to remove barriers to the internal market by replacing existing, divergent national legal measures with a single and fully harmonised set of sector-specific rules which create a high common level of end-user protection.

37. According to consideration 71, the European Commission seeks to introduce maximum harmonisation: the provisions on minimum harmonisation of end-user rights in Directive 2002/22/EC are made redundant by the full harmonisation provided in this Regulation and are therefore to be repealed.

3.3.2. Minimum or maximum harmonisation and consumer protection.

38. The question is now how to assess the appropriate degree of harmonisation to be used in order to reach the objectives set by the European Commission in its proposal. Professor Marco B.M. LOOS of the Centre for the Study of European Contract Law of the University of Amsterdam explains in his paper ‘Full harmonization as a regulatory concept and its consequences for the national legal orders. The example of the Consumer rights directive’ that, with regard to the functioning of the internal market, the European Union has exclusive competence in so far as the establishment of competition rules is concerned and shared competence in all other areas. Where consumer protection is concerned, the Union and the Member States have shared competence as well. Article 2, paragraph 2 of the TFEU indicates that until the European Union has made use of its competence, the Member States remain free to legislate and adopt legally binding acts. However, once the Union does exercise its competence, Member States are no longer free to regulate these matters themselves. This implies that when harmonisation takes place, and unless the harmonisation measure taken by the European Union provides otherwise, the Member States are no longer free to maintain or introduce national legislation in the harmonised area.

39. In the area of consumer protection, article 169 paragraph 2 TFEU indicates that measures may be taken (a) ‘pursuant to Article 114 in the context of the completion of the internal market’, as well as (b) to support, supplement and monitor the policy pursued by the Member States. Measures not taken in the context of the internal market may, however, only be based on minimum harmonisation, paragraph 4 of article 169 indicates. However, where harmonisation is intended to contribute to the completion of the internal market, the European Union is free to choose between minimum and full harmonisation. Article 114 paragraph 3 TFEU only requires that in developing the harmonisation measure the Union must take ‘as a base a high level of consumer protection’. Article 114 TFEU provides the possibility of full harmonization.

40. Consumer protection has historically been regulated in the EU using a minimum degree of harmonisation, which has allowed the Member States to maintain or implement stricter legislation and therefore differing (and higher) levels of consumer protection in certain areas. On the basis of a minimum harmonisation clause, Member States are allowed to introduce or maintain consumer protection rules that exceed the level of protection offered by the directives. That makes it easier for Member States to absorb a directive into their national legislation, as only the minimum requirements of the directive must be met. However, this of course implies that the effect of the harmonisation measure is limited, as the aim of harmonisation is to approximate the regulation of the Member States; the national laws in fact still differ when Member States make use of the minimum harmonisation clause, and as a consequence consumers and firms cannot be sure that they will have the same level of protection as in their own country when they seek services abroad. Minimum harmonisation is therefore assessed as insufficient to remove the barriers to the internal market.

41. When the European Union uses full harmonisation, Member States are required to revoke national legislation that does not conform with the proposed level of consumer protection, irrespective of whether the existing national level of consumer protection is higher or lower than the new European level. Full harmonisation therefore leads to a uniform level of consumer protection throughout the European Union. The Commission is of the opinion that full harmonisation will remove the barriers to the internal market that result from the existence of different rules in the Member States. From the perspective of the functioning of the internal market, full harmonisation of ‘key concepts’ of European consumer law is mandatory in order to create legal certainty for both consumers and businesses.

Nevertheless, the Commission states in the proposal its wish that the full harmonisation of the legal provisions should not prevent providers of electronic communications to the public from offering end-users contractual arrangements which go beyond that level of protection.

3.3.3. Harmonisation of different domains of end-users rights.

3.3.3.1. Harmonisation of required information.

42. HARMONISATION OF REQUIRED INFORMATION AND TRANSPARENCY. Article 25 sets out detailed rules for providers on the publication of transparent, comparable, adequate and up-to-date information. These transparency requirements apply to consumers and to business users who have contracted services based on standard offers. The information shall be published in a clear, comprehensive and easily accessible form in the official language(s) of the Member State where the service is offered, and be updated regularly. The information shall, on request, be supplied to the relevant national regulatory authorities in advance of its publication. The information obligation of article 25 could become a great administrative burden for the national regulators, as all information needs to be checked within a reasonable period of time.

43. Article 26 contains detailed requirements for the information which should be provided at the conclusion of the agreement with end-users. For the obligations resulting from article 26, it will not be easy to develop clear parameters and measurement methods in order to determine, for example, internet speeds. Paragraph 3 of article 26 requires the internet service provider to make the obligatory information available in an understandable and easily accessible format, in the language of the country of the end-user, and to keep this information regularly up to date. The information is an integral part of the contract and cannot be changed unless the contracting parties agree. The end-user shall receive a copy of the contract in writing. The provisions of article 26 seem to contain a contradiction, since it is impractical both to keep the information regularly up to date and to reach, for each change, a simultaneous agreement on the adjustment of the existing contracts. This would mean that any adjustment of practical information would lead to a contract renewal, which can surely not be the intention of the Commission.

3.3.3.2. Harmonisation in the domain of expenditure control.

44. HARMONISATION OF EXPENDITURE CONTROL. Article 27, paragraph 1 stipulates that providers have to offer end-users, free of charge, the possibility to obtain information on their consumption of services and to ensure that a financial limit set by the end-user is not exceeded.
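As a purely hypothetical illustration of the kind of expenditure control article 27 envisages, the mechanism can be thought of as a running check against a user-set limit. The function name, the warning threshold and all figures below are invented for the sketch and are not taken from the proposal:

```python
# Hypothetical sketch of an article 27-style expenditure control: the
# provider tracks consumption and keeps a financial limit set by the
# end-user from being exceeded. All names and thresholds are invented.

def expenditure_status(charges_eur, user_limit_eur, warn_ratio=0.8):
    """Return 'ok', 'warn' or 'limit_reached' for the running charges."""
    total = sum(charges_eur)
    if total >= user_limit_eur:
        return "limit_reached"  # stop chargeable use at the user's limit
    if total >= warn_ratio * user_limit_eur:
        return "warn"           # notify the end-user, free of charge
    return "ok"

print(expenditure_status([10.0, 5.0], 50.0))   # ok
print(expenditure_status([30.0, 12.0], 50.0))  # warn
print(expenditure_status([30.0, 25.0], 50.0))  # limit_reached
```

The 80% warning threshold is an arbitrary design choice for the example; the proposal itself leaves the concrete mechanism to the providers.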

3.3.3.3. Harmonisation in the domain of contract termination rates.

45. HARMONISATION IN THE DOMAIN OF CONTRACT TERMINATION RATES. In article 28 of the proposal, the Commission wants to limit contracts with end-users to a duration of 24 months, while the end-user can always terminate the contract after a period of six months. This seems to be in line with current Belgian legislation. The proposal stipulates in paragraph 2 that no compensation shall be due other than for the residual value of subsidised equipment bundled with the contract at the moment of the contract conclusion, and a pro rata temporis reimbursement for any other promotional advantages marked as such at the moment of the contract conclusion. Such subsidies are not allowed in Belgium. In the Netherlands the real depreciation period is used, and the six-month time frame is therefore too short.

According to paragraph 5 of article 28, any significant and non-temporary discrepancy between the actual performance regarding speed or other quality parameters and the performance indicated by the provider shall be considered as non-conformity of performance for the purpose of determining the end-user’s remedies. This description is not clear: it is not specified how this paragraph should be interpreted or how the assessment should be executed. No concrete measures are proposed by the European Commission on the basis of which an assessment would be made, or by whom. This will entail legal uncertainty, and the consequences are left to national law. The same uncertainty is raised by the wording of paragraph 6, as no definition is provided for the terms ‘significantly exceed’ or ‘special promotional price’.
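One possible reading of the compensation rule of article 28, paragraph 2 is a simple pro rata temporis computation over the remaining contract months. The sketch below is an illustrative interpretation only, not the proposal’s prescribed method; the linear depreciation, the function name and all figures are assumptions:

```python
# Illustrative reading of article 28(2): on early termination, compensation
# is limited to the residual value of subsidised equipment bundled at
# contract conclusion plus a pro rata temporis refund of other promotional
# advantages. Linear depreciation and all figures are assumptions.

def termination_compensation(equipment_value, promo_advantage,
                             months_total, months_elapsed):
    months_left = max(months_total - months_elapsed, 0)
    residual_equipment = equipment_value * months_left / months_total
    promo_refund = promo_advantage * months_left / months_total
    return residual_equipment + promo_refund

# A 24-month contract terminated after 6 months, with a handset subsidised
# at 240 EUR and 48 EUR of promotional advantages at conclusion:
print(termination_compensation(240.0, 48.0, 24, 6))  # 216.0
```

Under this reading, the compensation falls to zero once the full contract term has elapsed, which matches the intent that only residual, not punitive, amounts be due.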

3.3.3.4. Harmonisation in the domain of international mobile calls.

46. HARMONISATION IN THE DOMAIN OF INTERNATIONAL MOBILE CALLS. Within the Connected Continent proposal, the Commission intends, via amendments to Regulation (EU) No 531/2012, to take measures to gradually end mobile roaming surcharges as part of a single market for electronic communications. The Commission wants to create a true European communications space by phasing out the differences between the charges paid for domestic, roaming and intra-EU calls:

– Roaming: operators will lose the right to charge for incoming calls while a user is travelling abroad in the EU as of 1 July 2014.

– European fixed calls: operators will have to charge no more than a domestic long-distance call for all fixed line calls to other EU member states. Any extra costs have to be objectively justified.

– European mobile communication: operators will have to charge no more than the euro-tariffs for regulated voice and SMS roaming communications for mobile communications to other EU member states. Any extra costs have to be objectively justified.
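The two price-cap rules above amount to a simple ceiling on the per-minute rate an operator may charge absent objective justification. The sketch below is a hypothetical illustration; the rates are invented and are not the actual regulated euro-tariffs:

```python
# Hypothetical sketch of the proposed intra-EU caps: fixed-line calls to
# other Member States capped at the domestic long-distance rate, mobile
# calls capped at the regulated euro-tariff. All rates here are invented.

def capped_rate(proposed_rate, cap):
    """Operators may charge less than the cap, but not more
    (absent objective justification)."""
    return min(proposed_rate, cap)

DOMESTIC_LONG_DISTANCE = 0.05  # EUR/min, invented example rate
EURO_TARIFF_VOICE = 0.19       # EUR/min, invented example rate

print(capped_rate(0.30, DOMESTIC_LONG_DISTANCE))  # capped at 0.05
print(capped_rate(0.10, EURO_TARIFF_VOICE))       # already below the cap
```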

47. The Commission wants to reduce roaming rates in its proposal to the level of national rates. Article 37 builds on the Roaming Regulation, providing incentives for operators to provide roaming at domestic price levels. The proposal introduces a voluntary mechanism for mobile operators to enter into bilateral or multilateral roaming agreements which allow them to internalise wholesale roaming costs and to gradually introduce roaming services at domestic price levels up to July 2016, while limiting the risk of price arbitrage. According to many Member States, the proposal creates legal uncertainty for mobile providers, because the current roaming regulation has only just entered into force and providers have been obliged to incur high investment costs. The existing legal framework should be given the opportunity to show its merits before additional changes are proposed.

48. This part of the proposal, even though strongly supported by the European Parliament in April 2014, encountered resistance from the Member States, who reached a compromise on the abolition of roaming charges on 5 March 2015. The Member States have agreed, in opposition to the European Commission, that mobile roaming charges should stay in place until the end of 2018. A majority of the Member States in the European Council has voted in favour of keeping roaming charges in place until at least 2018, but they plan to introduce measures to make it cheaper to use a mobile phone when travelling within the EU. The charges will be allowed until 30 June 2018, three years longer than what was planned; the European Parliament wanted to end roaming charges on 15 December 2015.

3.3.4. Position of stakeholders with regard to harmonised rights for end-users.

49. BEREC is supportive of the objectives of the Connected Continent legislative proposal to increase the level of consumer protection across the European Union, but questions the upgrade of the minimum harmonisation provisions adopted in Directive 2002/22/EC of the European Parliament and of the Council of 7 March 2002 on universal service and users’ rights relating to electronic communications networks and services (Universal Service Directive) to a ‘fully harmonised’ mandatory framework in the proposal of the European Commission. BEREC fears that due to this ‘one size fits all’ approach, Member States and NRAs risk being deprived of the ability to respond to the changing needs of their respective markets and national consumers in the future. In some cases, Member States and NRAs may even have to step back from measures already in force, reducing rather than enhancing consumer protection. The full harmonisation approach is considered not to be proportionate and could lead to unintended consequences which ultimately go against the consumer interest. These concerns are reinforced given that the nature of the legal instrument removes the possibility of discretion at the national level to decide how best to implement the provisions.

50. The European Economic Area Standing Committee of the EFTA States supports the Commission’s intention to establish a consumer-friendly internal market for electronic communications services. However, the Committee finds it unfortunate that the Commission has put forward a draft regulation entailing total harmonisation of end-user rights, contrary to the current regulatory framework, according to which Member States may apply a stricter regime for the protection of consumers.

3.3.5. Assessment of the harmonisation and end-user protection.

51. From the different studies on the harmonisation of consumer rights protection in particular, it becomes clear that maximum harmonisation is not easy to reconcile with the aim of consumer protection, especially not where the regulation would require Member States to revoke protective provisions that exceed the maximum level of protection allowed under the proposed regulation. Maximum harmonisation could therefore lead to a reduction in end-user protection. Minimum harmonisation could provide better results: end-users will at least receive the protection that is offered by the regulation, while Member States remain allowed to introduce or maintain more protective rules.

52. But even in those areas where maximum harmonisation would seem feasible, for example with regard to matters of a technical or procedural nature, it is doubtful whether this instrument is appropriate in all cases. To take the example of the obligations to inform: if these were fully harmonised, would this mean that Member States are no longer allowed to impose stricter or more detailed information obligations in specific areas?

53. Member States seem to have a more favourable view of the consumer provisions than of the other parts of the proposal. They would, however, support minimum harmonisation as opposed to the full harmonisation in the proposal, since that would allow them to go further in consumer protection nationally and to respond to the changing needs of their respective markets, and it would not put in question national measures already in force, with the consequence of reducing rather than enhancing consumer protection. Related to this is the question of the appropriate legal instrument, where a number of delegations would prefer to see consumer protection regulated in a directive, preferably the Universal Service Directive.

4. Recommendations on the EU Connected Continent proposal and harmonisation.

54. Based on the available information and the assessment of the different documents and reports on the proposal, the following comments can be made with regard to the use of harmonisation. The use of maximum harmonisation is considered by many stakeholders as not being justified in all the areas covered by the proposal. This is especially the case in the domain of consumer protection, where it could lead to reduced consumer protection, and in areas where the same outcome could be achieved by other means: better coordination of spectrum allocation, although largely acknowledged as a worthwhile objective, could be achieved using means under the existing framework, and international calls could be left to the market, as it is fairly competitive. This remark about making better use of the existing framework was made with respect to several of the proposed provisions.

55. COHERENT AND CONSISTENT USE OF WORDING. Throughout the proposal, several unclear notions or wordings are used, e.g. the description or definition of ‘end-user’, ‘significantly exceed’ or ‘special promotional price’. In order to eliminate doubt and uncertainty as much as possible, a coherent and consistent use of correct wording should be introduced.

5. Conclusion.

56. On 11 September 2013 the European Commission adopted a legislative proposal, ‘Connected Continent: Building a Telecoms Single Market’, aimed at building a connected, competitive continent and enabling sustainable digital jobs and industries. According to the Commission, previous successive waves of reform by the European Union have helped transform the way telecommunications services are delivered in the European Union, but the sector still operates largely on the basis of 28 national markets, even though telecom services can be delivered across networks and borders.

57. The ‘Connected Continent’ package includes the following measures:

– Single EU authorisation: the proposal introduces a simplified procedure for network operators and service providers to obtain authorisation to operate throughout the EU, rather than needing separate notifications to operate in individual Member States. The service operator can obtain EU authorisation through a single notification to the National Regulatory Authority of its home country.

– European inputs for high-speed broadband: the proposal includes measures to foster the development of wireless and fixed broadband networks, such as common regulatory principles applicable to the use of spectrum and a European coordination mechanism for the authorisation procedures to assign spectrum.

– Single consumer space: the proposal calls for a full harmonization of end-user rights throughout the EU (transparency, pre-contractual and contractual information, no ‘bill shocks’, and contract termination). It also forbids discrimination in price between national calls and calls towards other EU countries.

58. Numerous comments on the proposal deal with the intent of the Commission to use the mechanism of maximum harmonisation. According to the Commission, minimum harmonisation has failed to deliver an internal digital market: the differences between Member States remain large, which presents a serious obstacle.

The Commission proposes to upgrade the minimum harmonisation provisions adopted in 2009 to a ‘fully harmonised’ mandatory framework, which would in effect prohibit national governments and NRAs from maintaining or introducing any additional consumer protection provisions going forward.

The Commission intends to establish a consumer-friendly internal market for electronic communications services by putting forward a draft regulation entailing total harmonisation of end-user rights, contrary to the current regulatory framework, under which Member States may apply a stricter regime for the protection of consumers. There are fears that fully harmonised provisions may lower the current level of protection, hinder the evolution of consumer protection, and have a detrimental effect on established consumer rights.

59. Maximum harmonisation is seen as a ‘one size fits all’ approach; it antagonises Member States, increases resistance, and undermines regulatory innovation and the development of best practice. The choice of legal instrument for these measures should therefore be reconsidered.

6. Bibliography

European Union

European Commission (2013a), Presentation by the European Commission on the ‘Proposal for a Regulation of the European Parliament and of the Council laying down measures concerning the European single market for electronic communications and to achieve a Connected Continent, and amending Directives 2002/20/EC, 2002/21/EC and 2002/22/EC and Regulations (EC) No 1211/2009 and (EU) No 531/2012’, 25 September 2013, ITRE/7/13842.

European Commission (2013b), Digital Agenda for Europe scorecard, viewed 27 September 2013, available at:

http://ec.europa.eu/information_society/newsroom/cf/dae/document.cfm?action=display&doc_id=2374.

European Commission (2013c), Communication from the Commission – EU Guidelines for the application of State aid rules in relation to the rapid deployment of broadband networks (2013/C 25/01); available at:

http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:C:2013:025:0001:0026:EN:PDF.

European Commission (2013d), Results of the public consultation on the revision of the Recommendation on Relevant Markets; available at:

https://ec.europa.eu/digitalagenda/en/news/results-public-consultation-revision-recommendation-relevantmarkets.

European Commission (2013e), Proposal for a regulation of the European Parliament and the Council laying down measures concerning the European single market for electronic communications and to achieve a Connected Continent, and amending Directives 2002/20/EC, 2002/21/EC and 2002/22/EC and Regulations (EC) No 1211/2009 and (EU) No 531/2012, 11 September 2013, COM(2013) 627 final.

European Commission (2013f), Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions on the Telecommunications Single Market – COM(2013) 634, Policy/Legislation: 11/09/2013; available at: https://ec.europa.eu/digital-agenda/en/news/communication-commission-europeanparliament-council-european-economic-and-social-committee-a-0.

European Commission (2013g), Commission Staff Working Document: Digital Agenda Scoreboard 2013, 12 June 2013, SWD(2013), 217 final, available at:

https://ec.europa.eu/digital-agenda/sites/digitalagenda/files/DAESCOREBOARD2013-SWD2013217FINAL.pdf.

European Commission (2013h), Europeans suffering because most Member States are too slow delivering 4G mobile broadband spectrum, 23 July 2013, at:

http://europa.eu/rapid/press-release_IP-13-726_en.htm.

European Commission (2013i), Digital Agenda Scoreboard, Broadband chapter, available at:

https://ec.europa.eu/digital-agenda/sites/digitalagenda/files/DAESCOREBOARD2013-2-BROADBANDMARKETS.pdf.

European Commission (2013j), Commission Staff Working Document: Impact Assessment Accompanying the document Proposal for a Regulation of the European Parliament and of the Council laying down measures concerning the European single market for electronic communications and to achieve a Connected Continent, and amending Directives 2002/20/EC, 2002/21/EC and 2002/22/EC and Regulations (EC) No 1211/2009 and (EU) No 531/2012, SWD(2013) 331 final, 11 September 2013.

European Council (1990), Council Directive 90/387/EEC of 28 June 1990 on the establishment of the internal market for telecommunications services through the implementation of open network provision, available at:

http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:31990L0387:EN:HTML.

European Parliament (2011b), The role of ENISA in contributing to a coherent and enhanced structure of network and information security in the EU and internationally; available at:

http://www.europarl.europa.eu/activities/committees/studies/download.do?language=en&file=42251.

European Parliament (2013e), Roadmap to Digital Single Market: Prioritising Necessary Legislative Responses to Opportunities and Barriers to e-Commerce,

http://www.europarl.europa.eu/committees/en/imco/studiesdownload.html?languageDocument=EN&file=75187.

European Single Market for Electronic Communications – Achieving a Connected Continent: Initial Appraisal of the Commission’s Impact Assessment. 15 October 2013

Connected Continent: a single telecom market for growth & jobs. Published on Digital Agenda for Europe (https://ec.europa.eu/digital-agenda)

Press release Parliament, ‘Consumer rights: full harmonisation no longer an option’ http://www.europarl.europa.eu/sides/getDoc.do?language=de&type=IM-PRESS&reference=20100317IPR70798.

Academic papers

BAEK, Y. and RANA, P., The EU’s attempt to foster a ‘Connected Continent’: Experiences from South Korea, December 2013, European Institute for Asian Studies, 6 p.

DE STREEL, A., A Program for Reforms for the European Regulation of Electronic Communications, Porto, ITS Conference, September 2005, 32 p.

DE STREEL, A., A First assessment of the new European regulatory framework for electronic Communications, Communications & Strategies, No 28, 2nd quarter 2005, p. 141-170.

DE WITTE, M. and VERMEERSCH, A., Europees consumentenrecht, Antwerpen, Maklu, 2004, 286 p.

DOUGAN, M., Minimum Harmonization and the Internal Market, Common Market Law Review, Volume 37 (2000), Issue 4, p. 853-885.

EDGAR, S., Cross-border B2C e-commerce in the EU and the introduction of the Consumer Rights Directive: A cure for fragmentation?, LLM Paper submitted under the LLM Programme 2011 – 2012 (Master of Advanced Studies of European Union Law), Ghent University, May 2012, 58 p.

GIJRATH, S., Telecommunicatierecht in het digitale tijdperk 3.0: interoperabiliteit, innovatie, internationalisering & een imploderende soufflé, Oratie, Universiteit Leiden, 17 maart 2014.

GIJRATH, S. and LODDER, A., Naar een connectief continent: eindelijk een convergentie van IP- en elektronische netwerken?, Tijdschrift voor INTERNETRECHT, Nr 2, Mei 2014, p. 34-42.

HOFHUIS, Y., De Europese wetgever slaat om van minimumharmonisatie naar volledige harmonisatie: kunstgreep of noodzaak?, T.V.C. (Ned.) 2007, afl. 1, p. 6-12

KURCZ, B., Harmonisation by means of Directives – Never ending story, European Business Law Review, 2001, Issue 11/12, p. 287-307.

LEHR, W. H. and PUPILLO, L. M. (eds.), Internet Policy and Economics. Challenges and Perspectives, 2009, Springer Science+Business Media, LLC, 225p.

LOOS, M.B.M., Full harmonization as a regulatory concept and its consequences for the national legal orders. The example of the Consumer rights directive, Centre for the Study of European Contract, Law Working Paper Series No. 2010/03, http://ssrn.com/abstract=1639436 accessed 19 February 2015, 34 p.

MASSARO, M. and BOHLIN, E., Is the European Union moving towards a strategic development of radio spectrum policy? A review of the Connected Continent legislative proposal, Division of Technology and Society, Department of Technology Management and Economics, Chalmers University of Technology, Gothenburg, 27 November 2014, 22 p.

MICKLITZ, H.W., REICH, N. and WEATHERILLS, S., EU Treaty Revision and Consumer Protection, Journal of Consumer Policy, 2004.

PARCU, P.L. and SILVESTRI, V., Electronic communications regulation in Europe: An overview of past and future problems, Florence School of Regulation, EUI, Italy, 30 p.

READ, D., Net neutrality and the EU electronic communications regulatory framework, International Journal of Law and Information Technology, Vol. 20 No. 1 2012, Oxford University Press, p. 48-72.

SAVIN, A., Telecommunications and the Digital Single Market: Formulating EU Policy for the Digital age, Internet Policy Review, 26 February 2014, 11 p., http://ssrn.com/abstract=2403199

SICKINGHE, F. and VAN DYCK, B-J, De ontwerpverordening voor een Connected Continent: Kroes’ Control, Rechtpraktijk, Mediaforum, 2014-1, p. 2-11.

STEINER, J., WOODS L. and WATSON, P., EU Law, Steiner & Woods, Ed 12, 2014, Oxford University Press, Oxford, 707 p.

STUYCK, J., Harmonisatieniveau, Consumentenrecht/Droit de la Consommation, Larcier, 2010, afl. 84-85.

Other

BEREC: Unleashing its Potential to Promote Europe’s Single Market. http://blogs.lse.ac.uk/mediapolicyproject/2014/05/21/berec-unleashing-its-potential-to-promote-europes-single-market/

CONRADI, M. and AMIN, R., The most ambitious plan in 26 years of telecoms market reform? No way!, DLA Piper LLP, London, UK, Computer Law & Security Review 30, 2014, p. 199-200.

ETNO position on completing the telecoms single market.

http://www.etno.be/datas/publications/studies/BCG_ETNO_REPORT_2013.pdf

EUROPEAN TELECOMMUNICATIONS NETWORK OPERATORS’ ASSOCIATION, Reforming Europe’s Telecoms Regulation to Enable the Digital Single Market, July 2013.

EUROPEAN ECONOMIC AREA STANDING COMMITTEE OF THE EFTA STATES, SUBCOMMITTEE II ON THE FREE MOVEMENT OF CAPITAL AND SERVICES, EEA EFTA Comment on the proposal for a Regulation of the European Parliament and of the Council laying down measures concerning the European single market for electronic communications and to achieve a Connected Continent – COM(2013) 627, Ref. 1126967, 4 November 2013

Fastweb’s Position on the Proposal for a Regulation of the European Parliament and of the Council laying down measures concerning the European single market for electronic communications and to achieve a Connected Continent – COM(2013) 627 – Suggested Amendments.

TWEEDE KAMER DER STATEN-GENERAAL, Raad voor Vervoer, Telecommunicatie en Energie, Verslag TELECOMRAAD 5 december 2013, Interne markt voor elektronische communicatie, vergaderjaar 2013-2014, 21 501-33, nr. 453.

White Paper ‘A Transformational Agenda For The Digital Age’ DIGITALEUROPE’s Vision 2020. Digitaleurope.org

 

Vodafone India Limited

Vodafone India Limited, formerly known as Vodafone Essar Limited, is the second largest mobile network operator in India by subscriber base, after Airtel. It is headquartered in Mumbai, India. It has approximately 173 million customers as of September 2014. It offers both prepaid and postpaid GSM cellular phone coverage throughout India, with a good presence in the metros.

Subscriber Base Statistics as of September 2014

Telecom Circle No. of Subscribers

Gujarat 15,801,117

Uttar Pradesh(East) 14,526,236

Maharashtra 12,977,123

West Bengal 11,165,667

Tamil Nadu 9,777,927

Rajasthan 8,565,366

Uttar Pradesh(West) 8,999,073

Andhra Pradesh 5,224,689

Delhi 8,449,120

Goa 7,134,576

Karnataka 6,452,620

Kerala 6,067,506

Bihar 6,381,278

Kolkata 4,084,284

Punjab 4,309,853

Haryana 4,437,015

Madhya Pradesh & Chhattisgarh 4,101,877

Chennai 2,091,411

Odisha 2,789,575

Assam 2,188,073

North East 928,563

Jammu & Kashmir 666,009

Himachal Pradesh 475,329

Mumbai 6,160,353

Vodafone Group Vision

‘Our Vision is to be the communication leader in an increasingly connected world.’

Vodafone India Mission

‘Vodafone will enhance value for its stakeholders and contribute to society by providing our customers with innovative, affordable and customer friendly communications services. Through excellence in our service we aspire to be the most respected and successful telecommunications company in India.’

Essential stakeholders for Vodafone

1. Customers

- Enhance value through:

Delivering affordable, reliable and customized communication services which are simple to use, enjoyable, seamless and secure

- Understand their needs

- Create innovative services

- Consistently deliver on promises

- Be transparent and trustworthy in interactions

- Provide a secure and reliable network

- Offer affordable products and services

2. Shareholders

- Enhance value through:

Growing the company’s revenue and profitability while creating sustainable free cash flow through efficient resource utilization and effective risk management

- Follow ethical business practices

- Communicate in a fair and transparent way

- Enhance the company’s reputation and brand value

- Do everything to protect shareholders’ interests

3. Employees

- Enhance value through:

Providing enriching careers and long-term growth opportunities in a fair and collaborative work environment

- Provide a healthy and safe workplace

- Encourage mutual respect, trust and appreciation

- Promote diversity and treat people inclusively

- Conduct ourselves with transparency and integrity

- Pursue speed and simplicity in all that we do

- Recognize and admire accomplishments

4. Community

- Contribute to society by:

Supporting authorities in mobilizing social change and achieving their economic goals, creating value for business partners and contributing to the social and economic development of local communities

- We act responsibly towards our environment

- We create community connect

- We stimulate business and economic growth

- We have high standards of corporate governance

- We conduct our business with transparency, integrity and fairness

II. Literature review

Productivity is defined as a measure of economic efficiency: a check on whether economic inputs are being transformed into outputs in an efficient and effective way. Output refers to products or services, while input comprises materials, labour, capital, and energy.

Organizational performance is linked to effectiveness, efficiency, quality, productivity, quality of work life, innovation, and profitability. Productivity relates inputs to outputs, while profitability relates inputs, outputs, and cost. Organizations measure productivity as a way of assessing organizational strength and planning for productivity growth, and they prefer to measure total productivity whenever possible. Total productivity is overall output divided by the totality of inputs. Measuring total productivity effectively becomes difficult when inputs and outputs overlap or change rapidly.

Types of Productivity

There are several classes of productivity, including labour productivity, firm or organizational productivity, and individual or worker productivity. Labour productivity is the ratio of output to labour input; it reflects the efficiency with which labour is used in production rather than the effort per worker. Economists study average labour productivity (ALP), the ratio of output to hours worked. The growth of average labour productivity has three sources: capital deepening, labour quality, and total factor productivity (TFP) growth.

- Capital deepening refers to the increase in capital services per hour worked

- Labour quality refers to labour input per hour worked

- Total factor productivity growth reflects the labour productivity growth not attributable to capital deepening or labour quality gains. Economists concentrate on labour productivity rather than more complex total measures in large part because labour productivity data are relatively easy to gather and compute.

Organizational productivity is affected by the following factors: work habits, work climate, staff feelings and attitudes, new skills, advancement, initiative, and the physical work environment.
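The three-way decomposition above can be sketched numerically. In this illustration, all growth rates are hypothetical, and TFP growth is obtained as the residual once the capital deepening and labour quality contributions are subtracted from ALP growth:

```python
# Growth-accounting sketch: ALP growth = capital deepening
# + labour quality + TFP growth (TFP taken as the residual).
# All figures are hypothetical, in percent per year.

alp_growth = 2.5          # average labour productivity growth
capital_deepening = 1.1   # contribution of capital services per hour worked
labour_quality = 0.4      # contribution of labour input quality per hour

# TFP growth is the part of ALP growth not attributable to the other two.
tfp_growth = alp_growth - capital_deepening - labour_quality

print(f"TFP growth (residual): {tfp_growth:.1f}% per year")
```

Computing TFP as a residual mirrors how it is defined in the text: it is whatever labour productivity growth remains unexplained by capital deepening and labour quality.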

- Work habits include absenteeism, lateness, and breaches of safety rules.

- Work climate includes the number of grievances, employee turnover, and job satisfaction.

- Personnel feelings include attitude changes, favourable reactions, and changes in performance.

- New skills include decisions made, conflicts avoided, listening skills, reading speed, and frequency of use of newly acquired skills.

- Advancement includes increases in job effectiveness, promotions and pay raises, and requests for transfer.

- Initiative includes the number of suggestions submitted and implemented, and the successful completion of projects.

Productivity is closely tied to motivation. For employees to be effective and productive in their jobs, technical knowledge and ability are not enough. Employees also need the resources required to do the job, and they need consistent management and leadership with a vision that is aligned to their own goals and objectives. Above all, an employee needs to be driven (or motivated) by some means to attain the desired level of performance. Employees are strongly influenced by the leadership and management styles of their managers and supervisors. In an era where innovation and change are the norm, the transformational leadership style can be firmly tied to employee performance, and much research has shown that behaviour and performance are positively affected by transformational leadership; motivating workers is a central element of that style. Leaders who routinely engage in the five recommended practices (model the way, inspire a shared vision, challenge the process, enable others to act, and encourage the heart) are not only more productive in their jobs, but are also seen as better leaders and report higher job satisfaction.

Another model of performance was created by T.R. Mitchell and D. Daniels. First, the employee brings certain inputs, such as job knowledge, skills, feelings, and beliefs, to the workplace and the job context. Second, the employer provides the job context, such as environmental support, work culture, rewards, and task types. Third, the employee’s supervisor uses processes that motivate the employee, such as giving attention and direction, creating enthusiasm, or being tough or persistent. These three things (employee inputs, job context, and motivating processes) produce the motivated behaviour or drive that employees have towards completing tasks or achieving goals. The resulting motivated behaviour includes employee focus, effort, strategy, and persistence in completing the desired objectives or tasks. The model rests on the assertion that performance and productivity are directly related to motivated behaviour, and that managers and leaders need to understand how to motivate their employees, in addition to providing the technical skills and a supportive workplace.

III. Theory

Productivity is a measure of the efficiency of a person, machine, factory, system, and so on, in converting inputs into useful outputs. It is calculated by dividing average output per period by the total costs incurred or resources (capital, energy, material, labour) consumed in that period. Productivity is a critical determinant of cost efficiency.
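The ratio just described can be written as a minimal sketch; the input categories follow the text, but the cost figures and output volume are hypothetical:

```python
# Minimal sketch of the productivity ratio: output per period
# divided by the total resources consumed in that period.
# The figures below are hypothetical.

def productivity(output_units: float, inputs: dict) -> float:
    """Output per unit of total input cost; inputs maps resource -> cost."""
    total_input = sum(inputs.values())
    return output_units / total_input

inputs = {"capital": 40.0, "energy": 10.0, "material": 30.0, "labour": 20.0}
print(productivity(5000, inputs))  # 5000 units / 100 cost units = 50.0
```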

There are three useful perspectives in which to frame the value of improving productivity within a system from an economic angle:

- Businesses: Businesses that achieve higher productivity from a system make more outputs with the same or fewer inputs. Essentially, higher efficiency equates to better margins through lower costs. This allows for better compensation for employees, additional working capital and an improved competitive position

- Consumers/Workers: At the micro level, increased productivity improves the standard of living of ordinary consumers and workers. The more efficiency captured within a system, the fewer the inputs (labour, land and capital) required to produce goods. This can potentially reduce prices and lower the working hours required of participants in an economy while maintaining high levels of consumption

- Governments: Higher economic growth also generates larger tax receipts for governments, allowing them to invest more in social services and infrastructure

How can productivity be raised? Training can improve the knowledge, skills and attitude (KSA) of employees. Improved recruitment and selection may have a similar effect, but it is a more costly process

- Investment in infrastructure, machines, equipment and new technology may improve output per worker

Hazard and Operability (HAZOP) Study – Production of Diammonium Phosphate (DAP)

Introduction

1.1 Background

HAZOP stands for Hazard and Operability Study, a systematic examination of an existing or planned operation to identify and eliminate potential health and safety hazards in the workplace.

In this study, our team will be examining the batch reaction between ammonia and phosphoric acid, which produces Di-Ammonium Phosphate (DAP). The reaction has the following equation:

2NH3(aq) + H3PO4(aq) → (NH4)2HPO4(aq)

This reaction produces a large amount of heat, with a heat of reaction of approximately 1566.9 kJ/mol (Indiana University, 2013). The temperature in the reactor rises sharply, converting any aqueous medium present into steam. The presence of steam may give rise to violent boiling of the reaction mixture, leading to sudden surges which could be dangerous. One way to alleviate this problem is a reactor tank with a larger surface area. However, steam may still affect the reaction system, so it must be removed quickly from the reactor to ensure proper functioning; installing a vent near the upper end of the cooling column would allow steam removal and prevent pressure build-up. A thermal runaway could also occur because of the large amount of heat released by the reaction.

Therefore, it is important to ensure that the temperature, pressure, concentration and flow rates in the feed lines and reactor are well regulated by appropriate devices. An unregulated temperature increase in the feed flow would lead to a large release of energy, and if the rise in temperature and pressure exceeds the vessel’s specifications, the reactor vessel may rupture and release hazardous chemicals. Hence, these properties (temperature, pressure, concentration and flow rates) must be kept in mind when designing the reactor.
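To give a feel for the scale of the heat-release concern described above, the sketch below estimates the heat evolved in a batch and an upper bound on the water it could vaporize. The batch size is hypothetical; the heat of reaction is the figure cited in the text (~1566.9 kJ/mol), and 2257 kJ/kg is the approximate latent heat of vaporization of water at 100 °C:

```python
# Back-of-envelope estimate of batch heat release for the DAP reaction.
# Batch size is hypothetical; heat of reaction is the cited figure.

H_RXN_KJ_PER_MOL = 1566.9      # heat released per mole of H3PO4 reacted
LATENT_HEAT_KJ_PER_KG = 2257   # latent heat of vaporization of water, ~100 C

moles_h3po4 = 100.0            # hypothetical batch: 100 mol phosphoric acid

heat_released = moles_h3po4 * H_RXN_KJ_PER_MOL       # kJ
# Upper bound: all reaction heat goes into boiling water already at 100 C.
steam_bound = heat_released / LATENT_HEAT_KJ_PER_KG  # kg of water

print(f"Heat released: {heat_released:.0f} kJ")
print(f"Water vaporized (upper bound): {steam_bound:.1f} kg")
```

Even this modest hypothetical batch could boil off tens of kilograms of water, which is why the text emphasizes venting and temperature regulation.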

1.2 General Uses of DAP

DAP is one of a series of water-soluble ammonium phosphate salts with many uses. It is a common phosphorus-containing fertilizer for industrial use and provides an excellent source of nitrogen and phosphorus, which are crucial for plant growth. It is popular in the industry as a nutrient source because its high solubility in soil allows a quick release of phosphorus and nitrogen. Besides agricultural use, DAP can also be used as a fire retardant or as a yeast nutrient in winemaking and mead brewing.

Hence in this report, we will be conducting a HAZOP study on this batch production of DAP to identify potential hazards and risks during or after the operation and brainstorm solutions to mitigate such risks.

Hazard and Operability Analysis (HAZOP)

2.1 HAZOP Assumptions

The following additional assumptions are made for this HAZOP analysis:

a) The design specifications mentioned above are strictly adhered to (Cheah, 2005).

b) The design specifications are based on legal requirements, industrial standards and engineering codes.

c) A basic HAZOP form will be used for recording purposes.

d) Analysis will focus on causes and consequences arising from plant design, equipment error and human error.

e) The process is not operated beyond the design conditions previously assumed (Sutton Technical Books, 2012).

2.2 Deviation 1 (High Level)

Process Section: Ammonia solution storage tank

Design Intention: Safely contain ammonia feed at ambient temperature and pressure

Guide Word: High

Process Parameter: Level

Deviation: High Level
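The record above follows the standard HAZOP pattern of combining a guide word with a process parameter to name a deviation, then listing causes, consequences, safeguards and actions against it. A minimal sketch of that worksheet structure (field names are illustrative, not from any standard HAZOP tool):

```python
# Sketch of a HAZOP worksheet entry: a deviation is a guide word
# applied to a process parameter, with the analysis recorded against it.
from dataclasses import dataclass, field

@dataclass
class Deviation:
    process_section: str
    design_intention: str
    guide_word: str           # e.g. "High", "Low", "No", "Reverse"
    parameter: str            # e.g. "Level", "Flow", "Pressure"
    causes: list = field(default_factory=list)
    consequences: list = field(default_factory=list)
    safeguards: list = field(default_factory=list)
    actions: list = field(default_factory=list)

    @property
    def label(self) -> str:
        return f"{self.guide_word} {self.parameter}"

dev1 = Deviation(
    process_section="Ammonia solution storage tank",
    design_intention="Safely contain ammonia feed at ambient T and P",
    guide_word="High",
    parameter="Level",
    causes=["Level indicator fails low", "Human error"],
    safeguards=["Relief valve", "Level indicator", "Flow indicator"],
)
print(dev1.label)  # High Level
```

Keeping each deviation as one record makes it easy to confirm that every guide word/parameter pair in scope has been examined.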

2.2.1 Causes

1) Level indicator fails low

The display may show a lower value than the actual quantity, so more ammonia is unloaded than intended, raising the level in the storage tank.

2) Human Error

The operator could set the level set point too high, so more ammonia is unloaded into the tank. The operator may also set the flow rate of valve A too low, slowing the outflow and causing ammonia to accumulate in the tank. In addition, the operator may deliberately overfill the tank beyond the intended level. All of these can result in a very high level in the tank.

3) Malfunction of the unloading control system

If the unloading control system malfunctions, ammonia may be unloaded from the station at too high a rate, increasing the level of ammonia in the tank.

4) Backflow of content from DAP reactor

A backflow from the DAP reactor into the pipeline and storage tank can raise the ammonia level in the storage tank. This could be due to high pressure in the DAP reactor caused by a cooling system malfunction.

5) Plugging of the line

Impurities in the solution may obstruct the pipe, lowering the flow rate of ammonia through it; ammonia then builds up, increasing the level in the tank.

6) Flow indicator fails high

The flow indicator displays a higher value than the actual flow. If the outflow is too slow and the operator follows the display, ammonia may accumulate in the tank and the level rise.

2.2.2 Consequences

1) Release of Hazardous Ammonia into the Surroundings (how it is released)

a) Backflow of ammonia solution and product into the tanks and their unloading pipes

b) If phosphoric acid cross-contaminates the ammonia feed lines, an exothermic reaction may occur that vaporises a portion of the ammonia. The vapour increases the pressure in the pipeline, which may rupture the feed lines and leak ammonia into the surroundings

c) Ammonia will be released through the pressure relief valve on the ammonia tank if the tank overfills; the released ammonia will flash and mix with air

2) Health and Environmental Hazard (hazards of release)

a) There are health risks when one is exposed to high levels of ammonia in air. Effects range from mild irritation of the respiratory system and mucous membranes to convulsions, coma and possibly death.

b) If released ammonia reaches water bodies, it can be a threat to aquatic life.

3) Fire Hazard

a) Vaporised ammonia in air is flammable if its concentration lies between the lower and upper flammability limits.

b) A fire may raise the pressure in other equipment and cause an explosion, which in turn releases more combustible chemicals that exacerbate the fire.

4) Mechanical Failure of Equipment

a) A high ammonia level in the tank can cause high pressure and a high flow rate, which can damage valves such as valve A and the pressure relief valve.

b) High pressure build up in the storage tanks may also rupture the tanks.

c) The excess phosphoric acid can corrode the containers

5) Economic Loss

a) DAP production will be lost, as the release of ammonia through the relief valve lowers the ammonia concentration in the storage tank and hence the conversion of ammonia into product.

2.2.3 Safeguards

1) Relief valve

2) Level indicator

3) Flow indicator

2.2.4 Actions

1) Periodic inspection and maintenance

a) Ensure regular maintenance of level indicator, valve A and flow indicator.

b) The relief valve and tank should be replaced if there is any rupture or leakage.

c) Level indicators must be calibrated regularly to ensure correct readings are being made.

d) Ensure regular maintenance and inspection of firefighting equipment.

2) Installation of Additional Hardware

a) Consider installing new alarm systems to signal any abnormality in the system, such as a high ammonia level in the tank.

b) Install a scrubber to remove ammonia vapour if it is released.

c) Install a one-way connector (check valve) to ensure flow in one direction, preventing reverse flow.

d) Install an overflow line and an overflow detector to handle high flow and to monitor the level, respectively.

e) Consider installing ventilation to dilute the ammonia concentration to levels below ERPG-1 (25 ppm).

f) Consider installing a fire sprinkler system with dikes to contain and absorb ammonia vapour if it is released.

3) Precautionary measure

a) A HAZMAT kit should be placed near the operating site to contain spillages, and it should be inspected regularly.

b) A technical manual should be kept near the operating system so that operators know how to run the system safely.

c) The ammonia level in the tank should be checked before any unloading.

2.3 Deviation 2 (High Flow)

Process Section: Ammonia feed line to the DAP reactor

Design Intention: Deliver ammonia to reactor at y gpm and z psig

Guide Word: High

Process Parameter: Flow

Deviation: High flow

2.3.1 Causes

1) Mechanical failure of valve A

Prolonged use may cause wear and tear to valve A; with wear, the valve opening may be larger than intended, resulting in a higher flow rate.

2) Flow indicator fails low

Due to an accumulation of rust or dust, the sensor of the flow indicator may lose sensitivity, so the controller displays a lower value than the actual flow. When the operator follows the displayed value and opens valve A further, the actual flow rate becomes higher.

3) High pressure in ammonia storage tank

High pressure in the ammonia storage tank forces ammonia out, leading to a high flow rate.

4) Human error

The operator sets the flow controller set point at too high a value.

2.3.2 Consequences

1) Release of Hazardous Ammonia into Surrounding

A high flow rate raises the pressure in the pipe. High pressure may cause pipes to break or disconnect, leaking ammonia into the environment; on depressurisation, the ammonia solution flashes to vapour.

a) Health hazards

Ammonia vapour, if inhaled, is toxic to the human body. In addition, after long exposure, technicians may become desensitised to low concentrations of leaking ammonia, which increases the risk.

b) Fire hazards

Apart from the health threat to on-site technicians, ammonia may cause a fire: it becomes combustible when well mixed with an appropriate amount of air in the presence of an ignition source. Heat released during a fire increases the possibility of explosion; if an explosion occurs, technicians may be injured and equipment damaged, leading to further leakage and additional hazards.

c) Environmental hazards

Leakage of ammonia into water threatens aquatic life, and drinking water may also be contaminated.

d) Insufficient ammonia for production of DAP

Leakage of ammonia leads to a lower ratio of ammonia to phosphoric acid feed into DAP reactor. The possible consequences are:

i) Product quality is compromised;

ii) Excessive phosphoric acid may corrode the metal of the DAP reactor and DAP storage tank, producing hydrogen gas. The weakened walls of the reactor and tank are more vulnerable to high pressure, and the accumulation of hydrogen gas, on top of the ammonia vapor, stresses them further. Moreover, flammable hydrogen gas at the DAP reactor outlet may cause fire and explosion in the presence of an ignition source and an appropriate amount of air. Again, on-site technicians may be injured and equipment damaged, leading to further incidents.

2) Excessive ammonia feed into DAP reactor

a) Low product yield and purity

Runaway reactions may occur, and unwanted side products such as triammonium phosphate (TAP) may be produced instead of DAP, leading to low product yield and purity.

b) Damage to equipment

A large amount of ammonia fed into the DAP reactor may cause backflow of both unreacted reactants and products into the ammonia storage tank, the phosphoric acid storage tank, and their respective feed lines. The reaction continues in the backflow because unreacted reactants are present. Since the reaction is exothermic, the heat released can vaporize ammonia. The increased pressure inside the pipes may rupture them and leak chemicals into the environment.
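The runaway and side-product risk noted above follows directly from the reaction stoichiometry. Assuming the intended product is diammonium phosphate, (NH4)2HPO4, the desired and unwanted reactions can be written as:

```latex
% Desired reaction: a 2:1 molar feed of ammonia to phosphoric acid gives DAP
2\,\mathrm{NH_3} + \mathrm{H_3PO_4} \rightarrow \mathrm{(NH_4)_2HPO_4}
% Excess ammonia pushes DAP on to triammonium phosphate (TAP)
\mathrm{NH_3} + \mathrm{(NH_4)_2HPO_4} \rightarrow \mathrm{(NH_4)_3PO_4}
```

With the feed at the 2:1 ratio the first reaction dominates; excess ammonia drives the second, converting DAP to TAP and lowering product yield and purity.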

3) Release of unreacted ammonia from the DAP storage tank to the enclosed work area

Unreacted ammonia solution may flash to vapor at the exit to the DAP storage tank the moment valve C is opened. Flashing of ammonia suddenly increases the pressure inside the pipe, so spillage of chemicals may occur.

2.3.3 Safeguards: flow indicator/controller F1 and valve A

2.3.4 Actions

1) Periodic inspection and maintenance

a) Ensure regular maintenance of valve A and flow indicator.

b) Valve A and feed pipes should be replaced if any rupture or damage occurs.

c) Flow indicator must be calibrated regularly to ensure correct readings are being made.

d) Ensure regular maintenance and inspection of the firefighting system.

2) Installation of Additional Hardware

a) Consider installing an alarm and a safety feed cut-off valve on the ammonia feed line, so that the ammonia feed is shut off automatically on high flow.

b) Consider installing an ammonia detector that displays the real-time ammonia concentration in the surroundings and warns when it rises above ERPG-1 (25 ppm).

c) Improve the ventilation system, for example by installing elephant trunks, so that released ammonia vapor is diluted promptly.

d) Install a non-return valve on the feed lines of ammonia and phosphoric acid to prevent backflow.

e) Add a bypass line to the existing ammonia feed line so that if valve A fails, the ammonia flow rate can be adjusted manually.

f) Consider installing a fire sprinkler system with dikes to contain and absorb ammonia vapor if it is released.

g) Change the current open DAP storage tank to a closed DAP storage tank with vent so that spillage of chemicals can be prevented.

h) Introduce an earthing system to reduce static and the associated ignition risk.
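Items (a) and (b) above amount to simple interlock logic. The sketch below is illustrative only, not vendor code: the 25 ppm ERPG-1 limit for ammonia is taken from the text, while the high-flow trip point and all function and variable names are assumptions.

```python
# Minimal sketch of the proposed high-flow interlock and ammonia alarm.
# ERPG-1 for ammonia (25 ppm) is from the text; the flow trip point is assumed.

ERPG_1_PPM = 25.0          # ambient ammonia concentration alarm limit, ppm
HIGH_FLOW_LIMIT = 120.0    # assumed trip point, % of design flow

def interlock(flow_percent: float, ammonia_ppm: float) -> dict:
    """Return the actions the safety system should take for the given readings."""
    actions = {"close_feed_valve": False, "sound_alarm": False}
    if flow_percent > HIGH_FLOW_LIMIT:
        # High ammonia flow: shut the safety cut-off valve on the feed line.
        actions["close_feed_valve"] = True
        actions["sound_alarm"] = True
    if ammonia_ppm > ERPG_1_PPM:
        # Ambient ammonia above ERPG-1: warn on-site technicians.
        actions["sound_alarm"] = True
    return actions

# Normal operation takes no action; high flow trips the valve and the alarm.
print(interlock(95.0, 5.0))
print(interlock(140.0, 5.0))
```

In a real plant this logic would live in a safety PLC, independent of the basic process control loop, so that a failure of flow controller F1 cannot disable the trip.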

3) Precautionary measure

a) A HAZMAT kit should be placed near the operation site to contain spillage. Regular inspection of this kit is also necessary.

b) A technical manual should be kept near the operating system so that operators know how to run it safely.

c) Regular training and refresher courses should be provided to on-site operators to avoid human error as much as possible. On-site operators should also be informed of any updates on the operating system.

d) Personal protective equipment (goggles, gloves, etc.) should be made available.

e) Good housekeeping should be applied, such as ventilation after operating hours.

f) A detailed evacuation plan should be prepared and drills should be conducted regularly.

2.4 Deviation 3

Process Section: Phosphoric Acid Storage Tank

Design Intention: Safely contain acid feed at ambient temperature and pressure

Guide Word: Low

Process Parameter: Concentration

Deviation: Low Concentration

2.4.1 Causes

1) Equipment Error

a) Valve B might malfunction and be fully open during loading of phosphoric acid while the reaction is still ongoing in the reaction vessel. This would cause backflow of the mixture from the reactor into the phosphoric acid storage tank and dilute the acid in the tank.

b) The tank and pipe may not be sealed properly and tightly after loading and unloading, causing the phosphoric acid in the tank to become diluted and contaminated.

c) An overloaded DAP reactor can cause backflow even when valve B is closed. Overloading of the DAP reactor could be due to high flow in the ammonia feed line. Backflow of the mixture is undesirable because it dilutes the phosphoric acid in the tank.

2) Human Error (Upstream Flow)

a) The feed is of poor quality due to improper handling and preparation of phosphoric acid, which might already be contaminated or diluted before being fed into the tank.

b) The desired concentration of phosphoric acid was miscalculated, causing the feed entering the storage tank to be out of specification.

c) The properly prepared phosphoric acid might be diluted or contaminated during loading into the storage tank due to improper or untimely cleaning of the storage tank.

3) Unused Phosphoric Acid of Lower Concentration

a) The previous batch’s unused phosphoric acid, of lower concentration, might be recycled and mixed with the new batch of phosphoric acid, resulting in deviation from the desired concentration.

2.4.2 Consequences

1) Poor Product Performance

a) Since the concentration of phosphoric acid is lower, the production rate of DAP will be slower. The reaction mixture in the DAP reactor is also no longer in stoichiometric ratio, leaving excess unreacted ammonia that contaminates the product; TAP may be formed instead of DAP due to the excess ammonia.

2) Flashing of ammonia leading to hazardous release at vessel outlet

a) After the reaction has ended, each batch of DAP product is collected by gravity into the DAP product tank by opening valve C. However, because of the unreacted ammonia in the reactor vessel, the ammonia flashes the moment it exits the reactor with the rest of the products. This flashing increases the reactor pressure, and consequently the liquid DAP product is forced out of the pipe as a high-pressure liquid, which could splash and spill out of the tank.

b) The released ammonia vapor could be inhaled, and prolonged exposure would compromise on-site workers’ health and safety.

c) Vaporized ammonia is also a fire hazard, because it mixes with air to form a combustible mixture. With an ignition source present, an explosion could occur, endangering the lives of on-site technicians and causing structural damage to the plant and equipment. Any dependent cascading operations would also be affected and cease.

3) Flashing of DAP inside reaction vessel

a) The unreacted ammonia may also flash inside the reaction vessel and cause a pressure build-up that could rupture the vessel and release DAP product and unreacted ammonia into the surroundings. As previously mentioned, released ammonia vapour could compromise the health and safety of plant technicians, cause structural damage, and possibly lead to an explosion.

4) Health and Environmental Hazard

a) Ammonia is a colorless, toxic gas that is highly soluble in water. Inhalation of large amounts of ammonia, or ingestion of food contaminated with it, may result in ammonia poisoning. Release of ammonia into water bodies also threatens aquatic life.

2.4.3 Safeguard

A pressure relief device on the storage tank mitigates the pressure hazard.

2.4.4 Action

1) Personal Protective Equipment (PPE)

a) All on-site technicians and employees should be required to wear PPE such as goggles, face shields, and chemical-resistant gloves at all times while on site.

2) Compulsory Training Courses

a) Compulsory training courses must be provided to all on-site technicians and employees so that they acquire the proper skills to operate the equipment and are familiar with the safety procedures in an emergency.

b) The operators should also familiarize themselves with the safety practices of handling and storing various chemicals.

c) All employees on-site should be familiar with the proper evacuation procedure, route and evacuation zone.

3) Scheduled Maintenance

a) Since valve B controls the flow of phosphoric acid into the reactor vessel, regular maintenance and inspection should be scheduled to ensure that the valve works properly and seals completely when closed.

b) Regular checks on the storage tank, feed line, and unloading line of the phosphoric acid should be performed to ensure that no contamination or impurities are present.

4) Sensors and Detection

a) Install a composition analyser in both the ammonia and phosphoric acid lines to ensure that the reactants entering the reaction vessel are in the required stoichiometric ratio.

b) Sensors that detect leaked ammonia vapor could be installed together with an alarm system so that on-site workers are alerted early to a leakage.

c) A sensor that detects deviation in phosphoric acid concentration could be installed on the phosphoric acid feed line. In addition, a device or procedure could be put in place to shut down the operation when the concentration deviates from its desired value, ensuring that under-strength phosphoric acid does not enter the reaction vessel and dilute the reaction mixture.

d) Use a sealed DAP reactor vessel to prevent the release of ammonia gas into the surrounding.
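Item (c) in the list above, tripping the acid feed on a concentration deviation, can be sketched as a simple permissive check. This is an illustrative sketch only; the setpoint and tolerance values below are assumptions, not figures from the process description.

```python
# Sketch of the concentration-deviation trip proposed in item (c).
# Setpoint and tolerance are assumed values for illustration only.

SETPOINT_WT_PCT = 85.0   # assumed target phosphoric acid concentration, wt%
TOLERANCE_WT_PCT = 2.0   # assumed allowable deviation before the feed is tripped

def acid_feed_permitted(measured_wt_pct: float) -> bool:
    """Permit the acid feed only while the measured concentration
    stays within the tolerance band around the setpoint."""
    return abs(measured_wt_pct - SETPOINT_WT_PCT) <= TOLERANCE_WT_PCT
```

A shutdown device would poll this check on each analyser sample and close the feed valve the first time it returns False, so dilute acid never reaches the reactor.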

5) Dilution

a) A sprinkler system with dikes could be set up to absorb as much ammonia vapor as possible in the event of ammonia flashing.

6) Neutralizing Agents

a) Neutralizing agents are effective in treating chemical spillage and should be made available to all employees in the workplace.

7) Fire Sprinkler System

a) An automatic fire sprinkler system should be installed in the workplace; detection of a certain amount of heat or smoke would trigger the system to release water at high velocity through the sprinklers.

8) Earthing

a) Earthing or grounding the vessel can remove the static build up on the surface of the vessel to prevent explosion.

9) Ventilation System

a) Ventilation methods such as elephant trunks and blower fans could be installed in the workplace to allow fast removal or dilution of ammonia vapor in air to a safe level below ERPG-1 (25 ppm).
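As a rough illustration of sizing such ventilation, the basic steady-state dilution relation Q = G x 10^6 / C can be used, where G is the contaminant release rate, C the target concentration in ppm, and Q the required airflow, assuming well-mixed air and no safety factor. The release rate in the example is an assumed number, not a figure from the process description.

```python
# Rough steady-state dilution ventilation estimate (well-mixed air assumed).
# G: ammonia vapor generation rate, m^3/min (assumed example value)
# C: target concentration, ppm (ERPG-1 for ammonia, 25 ppm, from the text)

def required_airflow_m3_per_min(generation_m3_per_min: float, target_ppm: float) -> float:
    """Airflow needed to dilute a steady release down to the target ppm."""
    return generation_m3_per_min * 1_000_000 / target_ppm

# Assumed example: 0.001 m^3/min of ammonia vapor diluted to 25 ppm
q = required_airflow_m3_per_min(0.001, 25.0)
print(round(q), "m^3/min")  # 40 m^3/min
```

Practical designs multiply this by a mixing safety factor, since workplace air is never perfectly mixed.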

2.5 Deviation 4 (Loss of Agitation)

Process Section: DAP reactor

Design Intention: Contain the reaction at x °C and y psig

Guide Word: Loss

Process Parameter: Agitation

Deviation: Loss of agitation

2.5.1 Causes

1) Electrical Failure

a) A short circuit may cause an electrical power outage or damage the electrical transmission lines.

b) The motor may not work due to lack of driving power, so the reactants inside the mixer will not be mixed properly.

2) Human Error

a) The mixer may not work properly due to operator’s oversight.

3) Mechanical failure

a) Accumulation of undesired material on the surface of the impeller damages or breaks the impeller.

b) The agitator mechanical linkage fails to operate or function properly.

c) The impeller may lose efficiency due to fouling or jamming.

4) Upstream Error

a) Impurities in the feed could cause damage to the stirrer in the DAP reactor.

2.5.2 Consequences

1) Low quality of the DAP production performance

a) The reactants in the reactor will not be well mixed; some reactants will be left unreacted due to the non-homogeneous reaction mixture.

b) The product quality will be poor as the DAP output is contaminated with unreacted reactant.

2) Tank destruction

a) Hot spots may develop in the reaction mixture. The high enthalpy of reaction may heat material at those spots to extremely high temperatures, compromising the integrity of the tank wall well before its expected lifetime through repeated heating, cooling, and expansion.

b) The unreacted phosphoric acid in the DAP reactor will pass through valves C and D and be released together with the DAP product into the DAP storage tank. The acidity of the phosphoric acid would accelerate corrosion in valves C and D as well as in the DAP storage tank.

c) The unreacted liquid ammonia will flash into vapor within the reactor and may end up in the phosphoric acid feed line, reacting with phosphoric acid in the storage tank. The high enthalpy of the reaction between the reactants can then damage the tank.

3) Waste of raw materials

a) A large amount of unreacted reactant will accumulate inside the DAP storage tank.

4) Hazardous release at vessel outlet

a) At the end of the reaction, DAP product is collected in the DAP storage tank through a gravity feed. However, because of the unreacted ammonia in the reactor vessel, ammonia flashes into vapor when it exits the reactor. This flashing causes a sudden increase in reactor pressure, forcing the liquid DAP product out as a high-pressure liquid, which could spill out of the tank.

b) The ammonia vapor also poses a fire hazard: it forms a combustible mixture when mixed with air, and an explosion could occur if an ignition source is present.

5) Environmental hazard

a) Ammonia is a colourless, toxic, and highly water-soluble gas that readily dissolves in water to form ammonia solution when released into the surroundings. The vapor may be inhaled by unsuspecting plant technicians nearby, posing a health risk to the workplace.

2.5.3 Safeguards

None

2.5.4 Actions

1) Maintenance

a) Periodic inspection of the impeller to make sure it is in optimum working condition, with no cracks or structural damage and no undesired material on its surface.

b) Regular checks on the DAP reactor vessel, outlet, and feed line to ensure there are no cracks.

c) Proper cleaning should be carried out to make sure there are no solid contaminants in the storage tank and feed lines of the reactants.

2) Training

a) Compulsory technical and professional training courses must be offered to operators so they can acquire new skills, update existing ones, and become more familiar with the procedures.

b) Safe practices for handling and storing the chemicals must be strictly followed by the operators.

c) All individuals in the workplace should be familiar with the proper evacuation procedure and the location of the evacuation zone.

3) Personal Protective Equipment

a) Personal protective equipment such as goggles, rubber gloves, and face shields should be available in the workplace.

4) Equipment and Detection

a) A sensor which can detect the loss of agitation within the reactor should be installed.

b) A backup power generator may be installed to provide power during an outage.

c) A tachometer could estimate the angular velocity of the motor and detect the sudden impeller failure. Further inspection could then be carried out by technicians.
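Items (a) and (c) above can be combined into a simple detection rule: declare loss of agitation when the tachometer reads below a minimum speed for several consecutive samples, which filters out single-sample sensor noise. The minimum speed and sample count below are assumed values for illustration only.

```python
# Sketch of agitation-loss detection from tachometer readings (items a and c).
# The minimum speed and debounce count are assumed, illustrative values.

MIN_RPM = 50.0        # assumed lowest acceptable agitator speed
DEBOUNCE_SAMPLES = 3  # consecutive low readings required before tripping

def agitation_lost(rpm_readings: list[float]) -> bool:
    """Flag loss of agitation if the measured speed stays below MIN_RPM
    for DEBOUNCE_SAMPLES consecutive samples."""
    consecutive = 0
    for rpm in rpm_readings:
        consecutive = consecutive + 1 if rpm < MIN_RPM else 0
        if consecutive >= DEBOUNCE_SAMPLES:
            return True
    return False
```

On a trip, the same logic that alarms could also cut the reactant feeds, since an unstirred exothermic batch is what produces the hot spots described in the consequences above.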

5) Neutralization

a) Neutralizing agents should be made readily available to all the individuals in the workplace to treat spillage.

6) Ventilation

a) Installation of ventilation systems will allow quick removal or dilution of ammonia concentration to a level well below ERPG-1 (25 ppm).

7) Earthing

a) Grounding the tank removes static, which could otherwise cause an explosion.

3. Suggested Modifications to Current System

In order to improve the safety of working in the given system based on the four deviations given, the common recommended actions that can be taken are summarized as follows:

Preventive Measures

a) Training must be provided to operators.

b) Personal protective equipment must be made available in the workplace.

c) Maintenance and inspection of hardware equipment must be conducted on a regular basis. Damaged and faulty parts must be reported and replaced immediately.

d) Good housekeeping must be practiced by all people who work in the plant.

e) Equipment must be grounded at all times to prevent fire and explosion.

f) Sensor and alarm system should be installed to detect harmful concentration of chemical in the air.

Mitigation Measures

a) A ventilation system should be installed to lower the concentration of chemicals.

b) Neutralization kit should be accessible at the site of operation.

c) Install a sprinkler system to dilute chemicals in the event of leakage.

d) Firefighting equipment should be available at the site of operation.

e) An evacuation plan should be drawn up to minimize harm in the event of an accident.

4. Conclusion

5. Reference

Natural or organic solidarity

Emile Durkheim, a French sociologist of the late nineteenth and early twentieth centuries, became fascinated with the question of ‘what is it that binds human beings together in communities both large and small’ (Crandall 9). To answer this inquiry, he created the idea of social solidarity, made up of two subdivisions: natural or organic solidarity, based on mutual interdependence and reliance on others to make a community function, and mechanical solidarity, in which every group produces and contributes what is necessary for survival (Crandall 9-11). Emile Durkheim’s social solidarity applies to numerous societies, as illustrated in Elizabeth Fernea’s Guests of the Sheik. In the town of El Nahra, Iraq, social barriers between the many groups, including the Sheik’s family, the gypsies, and the womenfolk, make it difficult for these groups of people to interact easily. This solidarity made it difficult for Fernea to form genuine relationships with these people, but through a conscious effort to respect social conventions and traditions, she was able to connect with them.

In the ethnography Guests of the Sheik, composed by Elizabeth Fernea, the Sheik’s family, particularly the women, are a prominent group within the town of El Nahra. These women, also described as the harem, are all the wives and daughters of the Sheik. Living together in a small compound, they illustrate Emile Durkheim’s principle of mechanical solidarity. The women do not depend on each other to provide everything essential for survival, because these necessities are provided by the Sheik and others in the town. Rather, family ties and their marriage to the Sheik bind them. This solidarity was not chosen by the women, since they did not decide to be bound together as a family, which weakens the union. However, they have been able to work together and cooperate.

The relationship of these women is rather complex, and age and wisdom do not determine the hierarchy. The Sheik leads the harem, but the woman with the most power among them is Selma, who attained this position by being the most beloved wife of the Sheik. The other women show their respect for Selma. Her leadership within the group is illustrated when Fernea first visits the harem: she is ushered in by the women and led to Selma, where she is finally greeted (Fernea 28). This illustrates the basic structure of the harem and relates to Durkheim’s model of mechanical solidarity, as there is generally a leader in a group who makes the rules. This unity creates a force that keeps the family together, strengthening the solidarity between the women.

Women in El Nahra make up their own social group. Because of social convictions, women are not permitted to associate with unknown men, creating a social barrier between the sexes. Fernea does not understand this at the beginning, and is upset when a woman pulls her abayah over her face as she and her husband pass. She asked whether this was because the women felt she had been offensive, and Bob explained, ‘The women always seem to cover their faces quickly when caught unawares by strange men’ (Fernea 25). The abayah they wear creates a barrier between the men and the women. The black veil is a social custom and a sign of modesty and morality: ‘They say an uncovered woman is an immoral woman. The tribesmen ask why a woman would want to show herself to anyone but her husband’ (Fernea 6). This adds to other factors that continually keep men and women from interacting, limiting social relationships between the genders.

The women of El Nahra initially saw Fernea as an oddity and were extremely curious about her. She is the first Western woman most of them have seen or interacted with. However, a little while later this excitement starts to wear off, and it becomes clear that the women are not intrigued by or impressed with Fernea; they are merely curious about her Western way of life. This is evident in their refusal to drink her tea when they visit her home, their refusal to eat her bread, and their comments about her inability to do laundry (Fernea 77). Such rejection of an offering from a host is extremely rude in Iraqi culture, and demonstrates not only that the women have no bond with Fernea, but also that they do not respect her or take her efforts to join their way of life seriously. This upsets Fernea, as she shows when she states, ‘I felt hurt. They did not find me sympathetic or interesting or even human, but only amusing as a performing member of another species’ (Fernea 77).

Another social boundary that keeps Fernea from connecting with the women is language. The native tongue of the population of Iraq is Arabic, and Fernea only began to study Arabic a year before moving to El Nahra. She is not able to understand what people are saying, and it is difficult for her to communicate. Even though these settings are casual and well suited to mixing with the women of the town, her limited knowledge of the language makes it difficult to feel included in the conversation. This makes Fernea feel left out, and frequently hurt. This social barrier pushes her away from wanting to bond with the women and hinders the interaction between them.

The gypsies are another group depicted in the ethnography Guests of the Sheik by Elizabeth Fernea. From the outside the gypsy caravan looks glamorous and flashy as they parade through Iraq with their ‘bright clothing and gaily saddled animals’ (Fernea 57). However, once seen from within, it is evident that they are struggling and troubled. When Fernea, her husband, and Razzak go to meet the gypsies, they find them exhausted, filthy, cold, and sick.

This group is bound to each other on the basis of mechanical solidarity: they do not each have a particular skill pertinent to managing the troupe, but are bound by conviction and family ties. It is clear that this solidarity is very weak when Fatima asks Abdul Razzak “why he didn’t bring hashish so she can forget her troubles” (Fernea 61). This demonstrates a desire to escape the world she is living in. A strong solidarity will still have its troubles, but those within the group will not want to escape it. Thus it is clear that their bonds to each other are decaying. The gypsies depend on the town’s population for money to survive when they perform for them, thereby creating a loose organic solidarity between the population of El Nahra and the gypsy troupe. This solidarity is essential to the gypsies but not vital to the townspeople, making it a fragile relationship. The gypsies are always traveling and migrating, so they rely on different towns for compensation, and would therefore not be completely devastated if their solidarity with the population of El Nahra were dissolved.

Fernea had a difficult time building relationships with most of these groups due to the social boundaries described, but as her time in El Nahra went on, she was able to break through some of these barriers and build friendships. As her knowledge of the Arabic language improved, so did her relationship with the women. Laila explained later that when Fernea first arrived in El Nahra the women wondered whether she was deaf, dumb, or uneducated because she would seldom respond in conversation. As Fernea’s time in Iraq passed, her Arabic improved, and Laila told Fernea she had opened up and that her ‘company had improved immensely’ (Fernea 135). Being able to speak with the women in the teasing way in which they entertained one another allowed Fernea the opportunity to be included among the women and understand them, thus beginning to build a relationship.

The women of El Nahra expected Fernea to prove her worth as a woman before she was accepted by them and seen as an equal. She discovers that to be accepted by the women she needs to show them she can perform the tasks of a suitable wife. One way in which Fernea does this is by learning to properly cook rice to the standards of the Iraqi women. With the help of a few of the women from the town, she learns this skill and performs it when cooking lunch for the Sheik. As news travels quickly in El Nahra, all the women were soon aware of the meal that Fernea had prepared for the Sheik and the other tribesmen, and that it was not only a large feast but also a meal that was very much enjoyed by the tribesmen. This showed the women that Fernea was taking their way of life seriously and was a good wife.

Although it is not customary for the women of El Nahra to associate with men, Fernea is able to develop a relationship with Mohammad, the young man tasked with serving Fernea and her husband. This friendship formed more quickly than Fernea’s relationships with the women of the town because she was forced to learn to communicate with Mohammad from the time she arrived in El Nahra, even though this was at first difficult and required repeating words and using hand signals. She was reliant on him for basic needs and various errands, and so saw him daily, developing the relationship at a fast pace. Fernea’s bond with Mohammad is evident in her seeking his approval after serving the Sheik and the other tribesmen, and in her acknowledging his help in preparing the feast.

The groups depicted in the ethnography Guests of the Sheik by Elizabeth Fernea epitomize Emile Durkheim’s principles of organic solidarity and mechanical solidarity. The gypsies, the Sheik’s harem, and the women of the town all display solidarity within their groups and with other groups. This solidarity was stronger in some circumstances than in others due to factors including how the group was formed and what made its members stay together. Social barriers, including language and social customs, make interaction between groups difficult in some circumstances. These barriers also affect Elizabeth Fernea’s ability to make sincere connections with people in El Nahra, yet through perseverance in learning the Arabic language and Iraqi culture, she is eventually able to connect with both the women of the town and Mohammad and build genuine relationships with them.

The Japanese earthquake and tsunami disaster

The earthquake damaged hundreds of thousands of structures and triggered a tsunami that devastated the eastern coast of Japan. The tsunami, in turn, led to level 7 meltdowns in three nuclear reactors at the Fukushima Daiichi Nuclear Power Plant, leading to the evacuation of hundreds of thousands of residents. Natural disasters happen, and some, even with advance warning, can be devastating. All that people and governments can do is maximize preparedness to manage and assume the risks of natural disasters (Goldberg, 2013). Structural and social preparedness are always among the main concerns in risk management of natural disasters. When Hurricane Katrina hit on 23 August 2005, it struck New Orleans with such force that the city, although leveed, easily filled up like a bathtub. Much of the city, almost all of it low-lying, was inundated; rain and high winds brought flooding that trapped people in their homes and made rescue difficult. Those who managed to reach government-prepared evacuation centers soon found themselves homeless as, even after the storm dissipated, New Orleans remained flooded, and the life they knew was lost to many.

Herded to welfare housing hastily set up by the government, most looked forward to a successful rebuilding of their city so that they could live their lives as before. Meanwhile, looting went on in the abandoned and flooded parts of the city while the US Army Corps of Engineers scrambled to fix the levees, all under the glare of the media, side by side with grim images from New Orleans and stories of tragedy and survival. The handling of the disaster and the red tape that delayed relief came under heavy scrutiny, and FEMA’s methods and the way local, state, and federal governments handled the crisis were under fire. So was there really an implementation problem? If so, what was it?

#2 question: The emergency plan by the State Emergency Operations Center was developed because of growing concerns with keeping the public safe, assessing roads and conditions, monitoring weather reports, and minimizing the effect of emergency events in the state of Tennessee (www.tnema.org). It is part of a bigger comprehensive plan linked to various other specialist situations. Its importance lies in the fact that in the last decade natural disasters in America have increased due to varied factors. The plan focuses on prevention associated with natural disasters, but in order to prepare concerned staff, agencies, and offices for their roles and appropriate action should an emergency arise, the state of Tennessee also has National Guard units available (www.tnema.org). Emergency Support Functions are structured by elements and basic annexes. Evacuation, movement, and logistics plans for evacuating from various points of the area must already be in place, with a designated group of individuals in charge of moving anyone, from single individuals to large groups, in and out of particular open spaces. The emergency plan must be laid out so that it is available at any given time should an emergency arise. In terms of logistics and the movement of equipment and responders on site, a similar plan covering movement to and from various points is required, and overrides of security protocols for access to needed data must be put in place (www.tnema.org).

Body

Vulnerability to the effects of a disaster arises from a combination of factors, including physical proximity to a threat (e.g., living in a floodplain), the characteristics of the home (including construction, ownership, and elevation), lack of a political voice, financial constraints, and choices made by an individual. It is widely accepted that high-risk groups vulnerable to a disaster include those with lower incomes, the very young and the elderly, the disabled, women living alone, and female-headed households. Race and class are certainly factors that help explain the social vulnerability in the South, while ethnicity plays an additional role in many cities. When the middle classes (both White and Black) abandon a city, the disparities between the very rich and the very poor expand. Add to this an increasing elderly population, the homeless, transients (including tourists), and other special needs populations, and the prospect of evacuating a city during times of emergency becomes a daunting challenge for most American cities. Employment opportunities were limited for inner-city residents as jobs moved outward from the central city to suburban locations, or overseas as the process of globalization reduced even further the number of low-skilled jobs.

The proliferation of firsthand accounts of victims and witnesses in text (newspaper articles, published stories, etc.) and in oral accounts via the media, along with the continuing investigations of independent agencies, academic groups, and the government itself, shows a story of human tragedy that depicts the initial response of the government as weak and mostly ineffective. FEMA's initial response and the militarization of the relief and rescue effort created a picture of a government losing its grip on the disaster, with coordination missing. The questions "Who is in charge?" and "What's being done?" volleyed in the minds of Americans across the nation, gripped by the images and stories of disaster. Many trooped to New Orleans to 'help' but found no one to talk or work with about a 'coordinated effort' to get the victims out of the Dome and give them initial housing and food relief. The unprecedented cataclysm that engulfed New Orleans when Katrina hit resulted in the first evacuation of a major US city (ordered by Mayor Ray Nagin); those who could not escape the city trooped to refuge centers like the Superdome, which, together with the compounded problems of rescue, flooding, and destruction, soon became an unmanageable sea of human trauma. Reports of looting, rape, murder, and theft were lodged, but in the first week the police were undermanned and powerless, resulting in the militarization of the management of the city. Processing the massive number of refugees required an effort unprecedented in U.S. history.

Faced with a city in ruins, military management was seen as necessary, but this had a negative social impact in terms of the management of relief and rescue operations, as well as the lack of coordinated effort Americans expect from their government.

In the Haitian earthquake, essential facilities such as hospitals, transportation hubs, and communication systems were severely damaged, and this hampered the delivery of humanitarian aid. Due to the widespread damage to infrastructure and the overwhelming number of casualties, the Haitian government was unable to respond in a timely manner. The country's morgues were overloaded and people had to be buried in mass graves. There was an outpouring of refugees into the neighboring Dominican Republic, which accepted refugees temporarily but itself did not have sufficient supplies of food or medicine.

Although there was a significant amount of international aid from governments, including $48 billion from the United States, as well as from international aid agencies such as the Red Cross and Catholic Relief Services, there were problems with recovery efforts, in part due to health crises such as a cholera epidemic, and in part due to problems inherent in Haiti, such as the complexity of land tenure laws, government inaction, and indecision among donor countries. It was also noted that there was an inadequate response to needs on the ground, with millions spent on advertisements promoting hand washing in areas without water or soap available.

Three strategies to prepare for future disasters include community preparedness, emergency public information and warning, and information sharing. Community preparedness involves identifying potential health risks, building community partnerships and engaging public health, medical, and social organizations, and coordinating training in preparedness efforts.

The Satyam scam and corruption in India

Chapter 1: Objectives:

I. An introduction about the scams and corruption.

II. Studying in detail about Satyam scam.

III. Impact of Satyam scam on Indian economy.

IV. Steps for preventing and eradicating corruption in India.

Chapter 2: Review of literature:

The studies documenting the performance effect of board composition are inconclusive. Grounded in the agency-theory prescription of board independence, the study of Baysinger and Butler (1985) indicates that board composition, in terms of the proportion of outside (independent) directors, has a lagged effect on organizational performance: 'Firms with more independent boards in the early periods not only enjoyed better performance later on but seem to have enjoyed an advantage over firms with less independent boards.' Ira M. Millstein, a stout proponent of the 'independent board', conducted a study with Paul W. MacAvoy (Millstein and MacAvoy, 1998) which demonstrates a substantial and statistically significant correlation between an active, independent board and superior corporate performance. Rosenstein and Wyatt (1997) report that the announcement of an outside director appointment is associated with a significant excess return of 0.20%, while the announcement of an inside director appointment is associated with an insignificant excess return. The literature advocating inside directors on a company's board, on the ground of their 'better' knowledge of the company and the industry, is equally profuse. Barnhart (1994) investigated the effect of board composition on overall corporate performance and reported performance to be negatively related to the proportion of outside directors. Similar results, a negative correlation between the fraction of outside directors and Tobin's Q, are obtained by Agarwal and Knoeber (1996). The work of Bhagat and Black (2001) is often cited in the literature; the study reports that firms experiencing poor performance tend to appoint more outside directors, but that this does not lead to an improvement in performance. There are several other studies which find no relationship between board composition and firm performance.
Baysinger and Hoskisson (1990) report no link between board composition and performance when both relate to the same year. Hermalin and Weisbach (1991) find no strong relationship between the percentage of outside directors and firm performance; the study concludes that 'if such a relationship does exist, it is small with little economic significance'. Empirical studies on corporate governance in India are few. This is because, despite a long corporate history, corporate governance assumed importance in India only in 1993, with the opening up of the Indian economy. Post-liberalization, a few research studies have been conducted investigating the relationship between one or more variables of corporate governance and the financial performance of Indian companies. Most of these studies capture the impact of share ownership (managerial ownership or institutional shareholding, including by foreign institutional investors) on the performance of firms. Noteworthy among these are the studies of Chhibber and Majumdar (1996); Khanna and Palepu (2000); Sarkar and Sarkar (2000); and Patilbandla (2006). In short, the extant literature on the subject, mainly US-based, does not give clear evidence on the performance effect of independent directors. An attempt is made in this study to enrich the literature by documenting evidence from India.

Das-Gupta (2007), Basu (2011), Bardhan (1997, 2005), and Quah (2008) together provide a compact and virtually comprehensive survey of corruption in India, a subject on which there is a very limited academic literature. Das-Gupta (2007) makes a distinction between coercive bribes and voluntary bribes. Coercive bribes are what Basu (2011) calls 'harassment bribes': bribes extracted to grant what is an entitlement, or what an official is bound to do as part of his duties anyway. Voluntary bribes refer to bribes for favours, such as the award of a license or a contract. Coercive bribes include bribes for refraining from using power to cause harm, for example, bribes to the police, or to tax officials to obtain a refund. Coercive bribes benefit bribe-takers only, while voluntary bribes make the bribe-taker and bribe-giver partners in crime at the expense of the exchequer and the general public, including those deprived of equal opportunity to compete for contracts and licenses.

Das-Gupta (2007) cites a Transparency International survey of 2002 that ranks the following government agencies in decreasing order of corruption: police, judicial services, land administration, education, tax, and health services. Bribes to the police are paid to avoid harassment. A notable finding is that corruption payments were made directly to officials and not to middlemen, and mostly to officer-level employees and not to subordinate employees. Associated with the presence of official corruption, there is also large-scale corruption in government recruitment, postings, and transfers to 'lucrative' positions, those in which coercive bribes can be extracted. The rate of bribes ranges from 10-20 per cent of the legal sums involved for various services.

It was found that companies tend to know what rate they have to pay in bribes for various favours and this is built into their cost calculations. As for ‘speed money’ promoting efficiency within a regulated economy, Bardhan (1997, 1323) and Banerjee (1994) point out that officials cause administrative delays and red tape to increase their capacity to extract bribes.

A second major form of corruption is large-scale or ‘grand’ corruption in the form of huge bribes on major government contracts, particularly on large imports of arms, an inherently non-transparent area, subject to national security considerations; bulk commodities; large infrastructure contracts; allocations of natural resources, such as minerals; or the telecom spectrum, all of which are controlled by politicians in certain key economic ministries with carefully selected bureaucrats colluding with them.

A third major form of corruption is direct theft of government funds from development programs such as irrigation and roads, from social and anti-poverty programs, from publicly funded loans to the poor, and the diversion of price-controlled goods that are in short supply for sale at higher market rates. These involve both bureaucratic and political corruption and overlap with the cultivation of electoral constituencies. This form of corruption, that is, the direct misuse of government funds and materials, takes place down to the village level.

The causes of bureaucratic corruption are a combination of discretionary regulatory powers along with very weak monitoring and accountability mechanisms, the latter being deliberately designed to be weak in many programs.

Chapter 3: Introduction:

3.1 Definition of Scam:

Corruption is dishonest behavior by those in positions of power, such as managers or government officials. It can include giving or accepting bribes or inappropriate gifts, double-dealing, under-the-table transactions, manipulating elections, diverting funds, laundering money, and defrauding investors. One example of corruption in the world of finance would be an investment manager who is actually running a fraud scheme.

It is the crime of giving or receiving money, gifts, or a better job in exchange for doing something dishonest or illegal.

It occurs when someone who has power or authority uses it in a dishonest or illegal way to get money or some other advantage.

Prevention of corruption-

To prevent corruption in the financial services industry, Chartered Financial Analysts and other financial professionals are required to adhere to a code of ethics and avoid situations that could create a conflict of interest. Engaging in corrupt behavior could result in job loss and revocation of a professional designation, such as the CFA title. Other penalties for being found guilty of corruption include fines, imprisonment and a damaged reputation.

Top 10 Corruption Scams in India-

1) Indian Coal Allocation Scam (2012) - Rs 1.86 lakh crore

2) 2G Spectrum Scam (2008) - Rs 1.76 lakh crore

3) Wakf Board Land Scam (2012) - Rs 1.5-2 lakh crore

4) Commonwealth Games Scam (2010) - Rs 70,000 crore

5) Telgi Scam (2002) - Rs 20,000 crore

6) Satyam Scam (2009) - Rs 14,000 crore

7) Bofors Scam (1980s & 90s) - Rs 100 to 200 crore

8) The Fodder Scam (1990s) - Rs 1,000 crore

9) The Hawala Scandal (1990-91) - Rs 100 crore

10) Harshad Mehta & Ketan Parekh Stock Market Scam (1992) - Rs 5,000 crore combined

Chapter 4: Research methodology

Research Methodology: The process used to collect information and data for the purpose of making business decisions. The methodology may include publication research, interviews, surveys and other research techniques and could include both present and historical information.

Data collection method: The way facts about a program and its outcomes are amassed. Data collection methods often used in program evaluations include literature search, file review, natural observations, surveys, expert opinion, and case studies. There are two types of Data collection methods:-

Primary Data – Data collected by an evaluation team specifically for the evaluation study.

Secondary Data - Data collected and recorded by another (usually earlier) person or organization, usually for different purposes than the current evaluation.

Hereby, the current research is conducted using information received from various books, journals, newspapers, magazines, and internet sources, and the key findings are given below.

Chapter 5: Key Findings:

About Satyam scam-

SATYAM COMPUTER SERVICE LIMITED

- Satyam Computer Services Limited was founded in 1987 by Mr. Ramalinga Raju.

- The company offers consulting and information technology services spanning various sectors, including engineering and product development, supply chain management, client relationship management, business process management, and business intelligence.

- The company was listed on the New York Stock Exchange, the National Stock Exchange, and the Mumbai Stock Exchange. In June 2009, the company unveiled its new brand identity, 'Mahindra Satyam'.

- Satyam Scam (2008) - Rs 14,000 crore

B Ramalinga Raju, the disgraced chairman of Satyam Computers Services Ltd, along with 13 individuals and entities including Chintalapati Srinivasa Raju of iLabs, made Rs 2,000 crore in illegal wealth in the Satyam scam. As financial frauds go, the one perpetrated by Raju & Co was quite uncomplicated. Satyam’s top management simply cooked the company’s books by overstating its revenues, profit margins and profits for every single quarter over a period of five years, from 2003 to 2008. Not for them complex methods like derivatives accounting or off-balance sheet transactions that were used by Enron’s executives.

Keen to project a perpetually rosy picture of the company to investors, employees and analysts, the Rajus manipulated Satyam’s books so that it appeared to be a far bigger enterprise than it actually was. To achieve this, they sewed up deals with fictitious clients, had large teams working on these pet ‘projects’ of the chairman, and introduced over 7,000 fake invoices into the company’s computer systems to record sales that simply didn’t exist. For good measure, profits too were padded up to show healthy margins.

Over the years, these ghostly clients understandably never paid their bills, leading to a big hole in Satyam’s balance sheet. The hole was plugged by inflating the debtors (dues from clients) in the balance sheet and forging bank statements to show a mountain of cash and bank balances.

After several years of such manipulation, Satyam was reporting sales of over Rs 5,200 crore in 2008-09, when in reality it was making about Rs 4,100 crore. Its operating profit margins were shown at 24 per cent when they were actually 3 per cent, and its handsome profits on paper covered up for real-life losses. It was when the company ran out of cash (of the real variety) to pay salaries that Ramalinga Raju decided that he couldn't ride the tiger any longer and made his confession.
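
The scale of this overstatement can be checked with simple arithmetic. The sketch below uses only the approximate figures quoted above (the actual restated accounts differ in detail):

```python
# Rough check of the overstatement implied by the figures quoted above
# (all amounts in Rs crore; approximate numbers from the text, not
# Satyam's audited restatements).

reported_sales = 5200
actual_sales = 4100
reported_margin = 0.24   # 24 per cent operating margin, as reported
actual_margin = 0.03     # ~3 per cent actual operating margin

# How much revenue was invented, as a share of real revenue
revenue_overstatement = (reported_sales - actual_sales) / actual_sales

# Operating profit on paper versus in reality
reported_profit = reported_sales * reported_margin   # 1,248 crore on paper
actual_profit = actual_sales * actual_margin         # ~123 crore in reality

print(f"Revenue overstated by {revenue_overstatement:.0%}")          # ~27%
print(f"Reported operating profit: Rs {reported_profit:.0f} crore")
print(f"Actual operating profit:   Rs {actual_profit:.0f} crore")
```

On these figures, roughly a quarter of the reported revenue and about nine-tenths of the reported operating profit simply did not exist.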

Full story

- Ramalinga Raju founded Satyam Computers in 1987 and was its Chairman until January 7, 2009, when he resigned from the Satyam board after admitting to cheating six million shareholders.

- He was held in Hyderabad's Chanchalguda jail on charges including cheating, embezzlement, and insider trading.

- Raju was granted bail on 25 March 2011, on condition that he report to the local police station once a day and not attempt to tamper with the evidence.

- A botched acquisition attempt involving Maytas in December 2008 led to a plunge in the share price of Satyam.

- In January 2009, Raju indicated that Satyam's accounts had been falsified over a number of years.

- He admitted to an accounting fraud to the tune of Rs 7,000 crore, or 1.5 billion US dollars, and resigned from the Satyam board on January 7, 2009.

What crimes did he (Ramalinga Raju) commit?

1. Raju and his brother, B Rama Raju, were arrested by the Andhra Pradesh police on charges of breach of trust, conspiracy, cheating, and falsification of records.

2. Raju misled various investors.

3. Raju had also used dummy accounts to trade in Satyam's shares.

4. He violated insider trading norms.

5. Funds from Satyam were diverted to Maytas.

6. On 22 January 2009, the CID told the court that the actual number of employees was only 40,000 and not 53,000 as reported earlier, and that Mr. Raju had allegedly been withdrawing Rs 20 crore every month to pay these 13,000 non-existent employees.

7. In Venture Global Engineering vs Satyam Computer Services Ltd, 2010 (8) SCC 660, on being questioned by the criminal investigation department of the Andhra Pradesh police, Mr. Raju reportedly admitted to using Satyam (respondent no. 1) money to buy prime land in and around Hyderabad.

8. Imaginary fixed deposits: Raju admitted that Satyam's fixed deposits, which supposedly grew from Rs 3.35 crore in 1998-99 to a massive Rs 3,320.19 crore in 2007-08, were all fake.

Reasons for Satyam Scam-

1. Raju wanted to take over MAYTAS INFRA and MAYTAS PROPERTIES (companies of his sons).

2. He was accused of using investors' funds for the family business.

3. The World Bank had banned Satyam from providing it any services for 8 years (due to illegal profits and a lack of essential documents).

HOW THIS SCAM IS RELATED TO 'MAYTAS'-

- Maytas refers to a group of companies founded by B. Ramalinga Raju. It includes Maytas Properties and Maytas Infra Limited.

- Maytas Properties: a property development company founded in 2005.

- Maytas Infra Limited: an infrastructure development, construction, and project management company. Maytas Infra was originally run by Satyam Computer Services founder B Ramalinga Raju.

- It came under the scanner due to its association with B. Ramalinga Raju.

- Various agencies, including the state Crime Investigation Department, probed the Maytas affair after B Ramalinga Raju admitted to a serious financial scam at Satyam Computer.

- There were allegations that funds from Satyam were diverted to Maytas, causing government agencies to verify the infrastructure company's records as well.

- In August 2009, Infrastructure Leasing & Financial Services replaced B Ramalinga Raju as promoter of Maytas Infra.

Society's Reaction-

- The people of his native village, Garagaparru, hail the development works undertaken by the Raju Foundation, the charitable arm of Satyam.

- The Citizens for a Better Public Transport in Hyderabad (CBPTH) demanded a CBI inquiry into how Maytas bagged the Hyderabad Metro Rail project.

- Analysts in India have termed the Satyam scandal India's own Enron scandal.

- Some social commentators see it more as part of a broader problem relating to India's caste-based, family-owned corporate environment.

Role of auditors, in light of Satyam scam-

This fraud was not committed overnight; it built up continuously over the years. The role of Satyam's auditors is under the scanner: they ignored some of the obvious indications of embezzlement and thus failed to catch the massive scam, which could have been caught well before it acquired its 'massive' status.

CONSEQUENCES-

- Before the scandal, its share price was Rs 300 in October 2008. Just after the scandal, the share price went down to Rs 6.30.

- On 10 January 2009, the Company Law Board decided to bar the current board of Satyam from functioning.

- Bank of America and State Farm Insurance terminated their engagements with the company.

- SEBI, the stock market regulator, also said that, if the company were found guilty, its license to work in India might be revoked.

- The New York Stock Exchange halted trading in Satyam stock.

- India's National Stock Exchange announced that it would remove Satyam from its S&P CNX Nifty 50-share index.

- Satyam's shares fell to 11.50 rupees on 10 January 2009, their lowest level since March 1998, compared to a high of 544 rupees in 2008.

- Satyam was the 2008 winner of the coveted Golden Peacock Award for Corporate Governance under Risk Management and Compliance Issues, which was stripped from it in the aftermath of the scandal.

- At present, its share price is Rs 107.89, and Mahindra Satyam's market capitalization is Rs 7,800 crore.

- Before the scandal, Satyam was ranked 4th among the IT companies of India; on 9 January 2009 it became the least valuable IT company in India.

IMPACT OF SATYAM SCAM ON INDIAN ECONOMY-

- Although several companies are trying to take a bite of Satyam Computers, according to a Gartner study, the company is unlikely to continue in its current form. It is expected to discontinue some of its businesses or service lines, or cease to exist in certain geographies.

- Huge losses to investors aside, the Satyam scandal has caused serious damage abroad to the reputation of India Inc as well as that of the country's regulatory authorities.

- The Government certainly cannot remain aloof and allow Satyam to die off, especially when it provides employment to some 53,000 people and indirectly supports more than a million Indians.

- The Satyam scam's effect has begun to spread: U.S.-listed stocks of other Indian companies have started taking a severe beating.

- Indian firms are looking into methods to avoid such scams within their companies.

Chapter 6: Suggestions:

5 ways to reduce corruption:

We are all aware of the term 'corruption' and discuss at length how to control it. It may be defined as the misuse of public office for one's own advantage, or for any other illegal benefit, by a public servant. According to sec. 2(c)(vii) of the Prevention of Corruption Act, 1988, a public servant is 'any person who holds an office by virtue of which he is authorized or required to perform any public duty'. In reality, this office is often abused. Every person has his or her own dignity and certain fundamental rights, and corruption affects those fundamental rights. How? It has been seen that a common man fears going to a police station even when his or her rights are violated. Why? Because he or she does not want to face a series of questions from the police. What normally happens is that when a person goes to a police station to file an FIR, he or she has to struggle a great deal to get the FIR properly recorded. This suggests that we need a body that could provide us a corruption-free system. As per sec. 13(1)(d)(ii) of the Prevention of Corruption Act, 1988, 'a public servant is said to commit the offence of criminal misconduct, if he, by abusing his position as a public servant, obtains for himself or for any other person any valuable thing or pecuniary advantage'. Such abuse seems to be omnipresent, be it in the police department, the medical department, or even the judiciary.

The question again arises: how do we control this increasing corruption in our country? There are several bodies working for a corruption-free system. Here are some suggested tools to reduce corruption:

- The first tool is education. With the help of education we can reduce corruption. According to a survey conducted by India Today, the least corrupt state is Kerala, the reason being that Kerala's literacy rate is the highest in India. So we can see how education affects corruption. In most states, a fairly large number of people are uneducated, and those who are uneducated do not know about the processes, provisions, and procedures through which they can get justice. Corrupt public servants try to fool them and often demand bribes. It is due to unawareness of the law, of public rights, and of the procedures thereof that common, uneducated people suffer in a corrupt society. If we are educated, we can understand our rights well.

- We need to change government processes. If the members of the governing body are themselves criminal politicians, criminal cases will certainly go under-reported; the reverse is possible only when there are no more criminal politicians in our government. The provision is that if any case is filed against a person, he is not eligible for election, yet if we look at a hundred politicians, about sixty percent of them are criminal in nature. If these criminal politicians command us and make laws, we can guess what kinds of laws will be formed! Thus, during elections, we should keep in mind whom we should not vote for. In India there is a provision that no criminal shall be allowed to be a Member of Parliament or of a legislature; unfortunately, a fairly large number of them are part of these bodies.

- We can reduce corruption by increasing direct contact between the government and the governed. E-governance could help a lot in this direction. At a conference on the 'Effects of Good Governance and Human Rights' organized by the National Human Rights Commission, A. P. J. Abdul Kalam gave the examples of the Delhi Metro rail system and online railway reservation as good governance, and said that all the lower courts should follow the Supreme Court and High Courts and make their judgments available online. Similarly, Shivraj Patil said that the Right to Information should be used for transparency. We have a legal right to know any information: under the Right to Information Act, 2005, people can follow the procedure laid down in the law when their work is not being carried out properly in the public services. This act is a great help in controlling corruption.

- Lack of effective treatment of corruption is another reason; that is, the instruments in use are not working properly. For example, the Prevention of Corruption Act came into force on 9th September 1988, but corruption is still flourishing. Why? Because of weak actions and proceedings against corrupt people. People have no fear of this act or of the courts. The act may thus need to be revised for better implementation.

- Lack of transparency and professional accountability is yet another big reason. We should be honest with ourselves; unless we are honest, we cannot control corruption. If each of us is honest in our profession, corruption will automatically decrease. We need to pay attention to professional accountability, i.e., how faithful and truthful we are towards our profession. Corruption may be controlled by addressing five major professions: the lekhpal (village accountant), medical, revenue, police, and judicial services.

Chapter 7: Conclusion:

There is a much better grasp today of the extent to which corruption is a symptom of fundamental institutional weaknesses. Instead of tackling such a symptom with narrow interventions designed to 'eliminate' it, it is increasingly understood that the approach ought to address a broad set of fundamental institutional determinants. However, the challenge of integrating this understanding with a participatory process has barely begun. The implementation of institutional reforms can benefit significantly from the participatory processes being developed for anti-corruption activities. Equally important, any participatory process, however sophisticated, ought to lead to concrete results beyond enhanced participation and heightened awareness.

Thus, identifying key institutional reforms in India, and mobilizing support for such reforms, needs to be fully integrated into the participatory process from very early on. Such early convergence is likely to promote a better balance between prevention and enforcement measures in addressing corruption. Until recently, the pendulum was firmly in the ‘enforcement’ corner. The gradual swing towards middle ground has taken place due to recognition of the limitations of ex post legalistic enforcement measures, since rule of law institutions themselves are currently part of the corruption problem in India.

The problems that foreign workers face in a host country

According to the latest figures from the CSO (Central Statistical Office), the number of foreign workers in Mauritius has been constantly increasing and now stands at approximately 39,032: 27,408 men and 11,624 women. This expatriate population is mainly made up of workers coming from Bangladesh (18,429), India (9,105), China (4,656), and Madagascar (3,596).

It is the manufacturing sector that employs the largest number of foreign workers, at 29,846, while construction comes second with 6,070 workers. Last September, the Ministry of Labour took the decision to freeze the recruitment of foreign workers in the construction sector. In any case, the bar of 40,000 foreign workers in Mauritius will quickly be reached by the end of the year, marking an increase of 20% compared to 2008. These workers are supposed to be treated the same way as local workers and to benefit from local welfare provisions. Though Mauritius has not signed the ICRMW (International Convention on the Protection of the Rights of All Migrant Workers), the country needs to apply its own laws; here it is the Employment Rights Act (2008) that stipulates the law concerning all work-related issues. These people migrate to another country in order to obtain better living conditions, but the promises made to them are rarely respected, which causes some migrant workers to rebel and voice their grievances through violent actions. From the point of view of management, employers prefer migrants, whom they see as more hard-working, skilled, and cheaper compared to local workers. It is a fact that it is not always easy for foreign workers to cope with the working conditions applied to them; moreover, their cultures and those we have in Mauritius are not always the same.
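
The CSO figures quoted above can be cross-checked with a quick arithmetic sketch (using only the numbers as given in the text, not independently verified):

```python
# Consistency check of the CSO workforce figures quoted above
# (numbers exactly as given in the text).

total = 39_032
men, women = 27_408, 11_624
by_origin = {"Bangladesh": 18_429, "India": 9_105,
             "China": 4_656, "Madagascar": 3_596}

# The gender split matches the total exactly
assert men + women == total

# The four main countries of origin account for most, but not all,
# of the foreign workforce
main_share = sum(by_origin.values()) / total
print(f"Top four origins cover {main_share:.0%} of foreign workers")  # ~92%
```

The check confirms the gender breakdown is internally consistent and that the four listed nationalities account for roughly nine-tenths of the total, consistent with the text's "mainly made up of" phrasing.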

RESEARCH OBJECTIVES

The research objectives of this study are:

- To explore the different difficulties that expatriates face in the host country

- To explore the foreign workers' opinions about the way they are treated

- To propose recommendations to organizations that employ foreign workers, to improve their working and living conditions

LITERATURE REVIEW

Wilson & Dalton (1998) describe expatriates as, ‘those who work in a country or culture other than their own.’

Connerly et al. (2008) stated that ‘many scholars have proposed that personal characteristics predict whether individuals will succeed on their expatriate assignment.’ Due to globalization, there is the need for expatriates. Thus, companies dealing with external workers should find ways to solve problems faced by these workers and make them comfortable in their daily life so that they do not want to go back to their native country. (Selmer and Leung, 2002)

PROBLEMS FACED BY EXPATRIATES

1. Culture Shock

Hofstede (2001) defined it as ‘the state of distress following the transfer of a person to an unfamiliar environment which may also be accompanied by physical symptoms’.

According to Dr Kalervo Oberg, expatriates are bound to experience four distinct phases before adapting to another culture.

Those four phases are:

1. A Honeymoon Phase

2. The Negotiation Phase

3. An Adjustment Phase

4. A Reverse Culture Shock

During the Honeymoon Phase, the expatriates are excited to discover their new environment. They are ready to set aside minor problems in order to learn new things. Eventually, however, this stage comes to an end.

At the Negotiation Phase or the Crisis Period, the expatriates start feeling homesick and things start to become a burden for them. For example, they might feel discomfort regarding the local language, the public transport systems or the legal procedures of the host country.

Then the Adjustment Phase starts. Six to twelve months after arriving in the new country, most expatriates start to feel accustomed to their new home and know what to expect. Their activities become routine and the host country is now accepted as another place to live. The foreign worker starts to develop problem-solving skills and to change negative attitudes into more positive ones.

The final stage is called the Reverse Culture Shock or Re-entry Shock. It occurs when expatriates return to their home country after a long period abroad and are surprised to find themselves encountering cultural difficulties.

There are physical and psychological symptoms of culture shock such as:

1. Physical factors

- Loss of appetite

- Digestion problems

2. Cognitive factors

- Feeling of isolation / homesickness

- Blaming the host culture for one's own distress

3. Behavioural factors

- Performance deficits

- Higher alcohol consumption

2. Communication/ Language Barrier

Communication is crucial to both management and employees. Sometimes, because of the language barrier, employees do not understand what is expected of them; they then make mistakes at the workplace and conflicts arise between the parties concerned. The language barrier is also the major obstacle expatriates face when changing environment. They often feel homesick and lonely because they are unable to communicate with the local people they meet, and this prevents them from creating new relationships. Special attention must be paid to specific body-language signs, conversational tone, and linguistic nuances and customs. Communicative ability permits cultural development through interaction with other individuals: language becomes the means that promotes the development of culture. Language affects and reflects culture just as culture affects and reflects what is encoded in language. Language learners may be subconsciously influenced by the culture of the language they learn, which helps them feel more comfortable and at ease in the host country.

3. Industrial Laws

Laws are vital for the proper conduct of an activity such as welcoming expatriates to Mauritius. The Ministry of Labour, Industrial Relations and Employment produces a 'Guidelines for work permit application' manual (February 2014) for organisations engaged in this activity. This manual describes the procedures to be followed in the case of a Bangladeshi, Chinese or Indian worker. Any breach of the law automatically leads to severe sanctions.

For example, the Non-Citizens (Employment Restriction) Act 1973 provides, among other things, that 'a non-citizen shall not engage in any occupation in Mauritius for reward or profit or be employed in Mauritius unless there is in force in relation to him a valid work permit.' A request to the government must be made if an organization wishes to recruit foreign workers in bulk.

Expatriates are human beings, so they should have some fundamental rights in the host country. In Mauritius, the contract for the employment of foreign workers stipulates all the necessary information concerning the expatriates' rights, conditions of work, accommodation and remuneration, among others. This contract is based on the existing labour law, and its contents are to a large extent the same, with some slight differences in conditions of work. Mauritius has adopted good practices in relation to labour migration and has spared no effort to develop policies and programmes that maximize its benefits and minimize its negative consequences. However, improvements to the living and working conditions of foreign workers are still needed.

4. Living Conditions

The NESC published a report in February 2007 which advocated that foreigners working on the island should enjoy the same rights as local workers. In reality, many foreign workers suffer from bad working conditions. Some endure intolerable living conditions, sleeping in dormitories on benches without mattresses or in tiny bedrooms shared by many people. Those who speak out, or who are considered 'ring leaders', are deported.

In 2006, some workers from China and India who tried to form a trade union or to protest were deported. Peaceful demonstrations often turned into riots which the police brutally suppressed.

In August 2007, some 500 Sri Lankans demanded better wages at the company Tropic Knits; in response, the Mauritian authorities deported 35 of them. At the Compagnie Mauricienne de Textile, one of the biggest companies on the island, employing more than 5,000 people, 177 foreign workers were deported after taking part in an 'illegal demonstration' about the lack of running water, the insufficient number of toilets and poor accommodation.

In 2011, a visit to two of Trend Clothing Ltd's sites by Jeppe Blumensaat Rasmussen showed several Bangladeshi workers living in inhumane conditions. The workers were paid an hourly rate of Rs15.50, even less than the previous rate of Rs16.57, bringing the monthly salary to Rs3,500 to Rs5,000 depending on overtime. One woman said she had worked 43 hours but was paid for only 32. The Bangladeshi workers were also living in dormitories with several holes in the ceilings and signs of water damage next to electrical sockets. There is also the case of the migrant Nepali workers who decided to leave Mauritius because of their bad working and living conditions. They were living in an old production space that had been turned into dormitories hosting 34 migrant workers. There was no water connection in the kitchen and no running water to flush the toilets. They received no allowance, and their salary was decreased from Rs5,600 to Rs5,036 per month. On 9 June 2011, eight workers wrote a letter to their boss, giving their employer a month's notice.

In September 2013, more than 450 Bangladeshi workers at the textile company Real Garments in Pointe-aux-Sables went on strike to demand better working conditions. They also protested in the streets of Port-Louis and had gone to the Ministry of Labour the day before to submit their claims (L'Express, Mauritius). Fourteen Bangladeshi workers were identified as the main leaders and deported by the authorities.

5. Foreign workers and Income

Foreign workers decide to leave their native country to work abroad with the aim of making more money to send to their families. However, they do not anticipate being paid less than what was promised to them before their departure to the host country; some organisations pay them only half of what they are owed. Having already signed their contracts, they are forced to work hard for a low salary. Many Bangladeshis, Indians and Chinese choose to leave the host country when their contract ends, while the more courageous ones stay and renew their contracts for several more years. Despite their low-paid jobs, it is still better than in their native country, where they are even more exploited or where life is far more difficult for them.

The Employment Rights Act (2008) stipulates that if a local worker 'works on a public holiday, he shall be remunerated at twice the national rate per hour for every hour of work performed.'

However, some expatriates who are forced to work on a public holiday are usually paid the same amount as on an ordinary day. As human beings, they should be treated like any other worker, local or foreign, with the same rights and possibilities.
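The double-pay rule quoted from the Employment Rights Act (2008) amounts to a simple calculation. The sketch below is illustrative only: the function name is an assumption, and the Rs15.50 hourly rate is borrowed from an example elsewhere in the text.

```python
def day_pay(hours_worked, hourly_rate, public_holiday=False):
    """Pay for one day's work: double the hourly rate on a public
    holiday, as the Employment Rights Act (2008) stipulates for
    local workers (illustrative sketch, not legal advice)."""
    multiplier = 2 if public_holiday else 1
    return hours_worked * hourly_rate * multiplier

# Illustrative figures (Rs15.50/hour appears earlier in the text):
normal = day_pay(8, 15.50)                       # 124.0
holiday = day_pay(8, 15.50, public_holiday=True)  # 248.0
```

On this rule, an 8-hour public-holiday shift at Rs15.50 should pay Rs248 rather than the ordinary-day Rs124, which is the gap the paragraph above describes.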

6. Unions

Unionism is about workers standing together to improve their situation and to help others. Some unions are reactive, waiting for the employer to act and then choosing how to respond; others are proactive, developing their own agenda and then advancing it wherever possible. When unions and management fail to reach agreement, or where relations break down, the union has the option of pursuing industrial action through a strike, a go-slow, a work-to-rule, a slow-down, an overtime ban or an occupation.

However, expatriates are often not aware of the laws protecting their rights (e.g. many migrant workers are not informed of the laws that provide them with the same level of protection as Mauritian nationals), and employers have refused to recognize union representatives. It is also often difficult for unions to gain access to and organize foreign workers.

An ICFTU-AFRO (the African regional organization of the former International Confederation of Free Trade Unions) mission to Mauritius in February 2004 was told that the few men they saw were mainly supervisors who were said to be hostile to unions.

During 2006 there were a series of reports that workers from China and India who had tried to form a trade union or protest against their employers had been summarily deported. On 23 May 2006, policemen armed with shields and truncheons beat female workers from Novel Garments holding a sit-in in the courtyard of the factory in Coromandel protesting against plans to transfer them to other production units.

According to the MAURITIUS 2012 HUMAN RIGHTS REPORT (section 7, Worker Rights, a. Freedom of Association and the Right to Collective Bargaining), the constitution and law provide for the rights of workers, including foreign workers, to form and join independent unions, conduct legal strikes, and bargain collectively. With the exception of the police, the Special Mobile Force, and persons in government services who were not executive officials, workers were free to form and join unions and to organize in all sectors, including the Export Oriented Enterprises (EOE), formerly known as the Export Processing Zone.

Asthma

INTRODUCTION

Asthma is a common chronic respiratory disease with a global prevalence of more than 200 million. It is a heterogeneous disease identified by reversible airflow obstruction, bronchial hyperresponsiveness (BHR) and inflammation. Treatment with inhaled corticosteroids (ICS), alone or in combination with a long-acting β2-agonist (LABA), may vary in dosage depending on the severity of disease among patients. Asthma can also be classified into two categories: extrinsic (atopic) and intrinsic (non-atopic) asthma (Fahy, 2014). In this case study, I will discuss some characteristics of asthma, diagnosis and treatment recommendations, and current research in stratified medicine.

ETIOLOGY

Atopic asthma is triggered by environmental stimuli such as allergens (e.g. pollen, pet hair, dust mites), air pollution, weather change and childhood exposure to tobacco smoke. Fewer than 15% of children with continuous wheezing develop asthma in adolescence, while those with eczema, obesity, atopic rhinitis and dermatitis are at higher risk of developing asthma; these comorbidities may complicate asthma management in adulthood (Subbarao, 2009). Furthermore, there is a higher prevalence of asthma among boys than girls, and a higher incidence among women than men, due to hormonal factors. Boys generally undergo asthma remission as a result of enhanced lung development and airway growth (Spahn, 2008), whereas hormonal influences can affect asthma control in pregnancy (Padmanabhan, 2014).

Other risk factors which can affect the immune system in new-onset asthma are exercise, emotional stress, occupational exposure to chemical substances such as paint, hair dyes and cleaning liquids, and the use of marijuana. Viral infections affecting the lungs in childhood (e.g. bronchiolitis) can also affect airway epithelial cells, resulting in the development of T-helper 2 (TH2) related asthma (Fahy, 2014).

PATHOPHYSIOLOGY

Asthma is characterized by a cumulative loss of lung function over time. Changes to airway structure and composition, such as thickening of the basement membrane, increased bronchial vascularity, smooth muscle hyperplasia and hypertrophy, and goblet cell hyperplasia leading to mucus hypersecretion, also contribute to airflow obstruction. This is known as airway remodelling (Tschumperlin, 2011).

As a result of allergen exposure, inflammatory cells invade the airways and release mediators such as leukotrienes, histamine, cytokines and chemokines, triggering bronchoconstriction, airway remodelling and hyper-reactivity, as shown in Table 1 (Padmanabhan, 2014).

SIGNS AND SYMPTOMS

Patients experience wheezing, cough, dyspnea and chest tightness. Symptoms may vary in frequency if treatment is received, depending on their severity, and patients display hypersensitivity to allergens that can trigger exacerbations. The difficulty with this disease is that its symptoms often overlap with those of other allergies (e.g. allergic rhinitis), making it hard to determine the primary cause and relieve symptoms (Padmanabhan, 2014).

DIAGNOSIS

Symptoms that are alleviated by bronchodilators indicate asthma as the underlying cause. It is therefore critical that tests are performed while patients are symptomatic, allowing an accurate diagnosis.

The age of asthma onset should also be considered. Although asthma in children and adults shares similar characteristics, there are significant differences between them. For example, adult-onset asthma involves sensitization to occupational factors and is often misdiagnosed as COPD or chronic bronchitis (Holgate et al., 2006).

As it is a hereditary disorder, a detailed patient history is required to determine whether there are any signs of family history, atopy or long-term chemical exposure. Asthma displays a decrease in FEV1 (forced expiratory volume in 1 second) and a reduced FEV1/FVC ratio (0.75).
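The spirometric criterion just mentioned (a reduced FEV1/FVC ratio) can be expressed as a short check. This is a minimal sketch assuming the 0.75 figure from the text as the cut-off; the function name and example volumes are illustrative.

```python
def airflow_obstruction(fev1_l, fvc_l, cutoff=0.75):
    """Return the FEV1/FVC ratio and whether it falls below the
    cut-off used in the text (a reduced ratio suggests obstruction).
    fev1_l and fvc_l are lung volumes in litres."""
    ratio = fev1_l / fvc_l
    return ratio, ratio < cutoff

# Illustrative volumes: FEV1 = 2.1 L, FVC = 3.5 L gives a ratio of 0.6,
# below the 0.75 cut-off, so the check flags possible obstruction.
ratio, obstructed = airflow_obstruction(2.1, 3.5)
```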

f_e = 1.88 · 10^6 · (t/b)^2 · K   [kg/cm^2]   ( 7 )

f_y = specified minimum yield point of the material, or 72 [%] of the specified minimum ultimate strength, whichever is less.

t = Thickness of the web plate reduced by 10 [%] as an allowance for corrosion.

b = Depth of plate panel.

K is a function of the type of loading, the aspect ratio and the boundary conditions. BigLift uses the following conditions for its buckling analyses.

Figure 2: Load condition Compression and Bending

K corresponding to axial Compression and Bending:

Where a/b ≥ 1.0: K = 4   ( 8 )

If f_e/σ_y > 0.75 then f_t = σ_y · [1 - (3σ_y)/(16 · f_e)]   ( 17 )

In this load case f_e/σ_y > 0.75, so f_t is calculated according to equation ( 17 ). These results are used to calculate f_crc:

f_crc = f_t   ( 18 )

Then the Compression area is calculated:

A_s = (a + b) · t   ( 19 )

The force that actually causes the Compression and Bending is calculated as:

Q = q · b   ( 20 )

This force is a distributed load on the structure being analysed. Dividing equation ( 20 ) by equation ( 19 ) gives the stress that occurs:

f_c = Q / A_s   ( 21 )

Shear Calculation

The K factor for Shear is different from that for Compression and Bending. The K factor for Shear is determined as follows:

If a/b > 1 then K = √3 · [5.34 + 4 · (b/a)^2]   ( 22 )

If f_e/σ_y > 0.75 then f_t = σ_y · [1 - (3σ_y)/(16 · f_e)]   ( 26 )

In this load case f_e/σ_y > 0.75, so f_t is calculated according to equation ( 26 ). These results are used to calculate f_crs:

f_crs = f_t / √3   ( 27 )

Then the shear area is calculated:

A_s = minimum of 2a · t or 2b · t   ( 28 )

The force that actually causes the shear stress is calculated as:

Q = q · b   ( 29 )

This force is a distributed load on the structure being analysed. Dividing equation ( 29 ) by equation ( 28 ) gives the stress that occurs:

f_s = Q / A_s   ( 30 )

Conclusion:

In this section the results of the Compression and Bending and Shear calculations are combined to determine whether buckling occurs.

No buckling when: (f_c/f_crc)^2 + (f_s/f_crs)^2 ≤ 1.00   ( 32 )
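The steps above, the elastic buckling stress of equation (7), the plasticity correction of equations (17)/(26), the shear reduction of equation (27) and the final interaction check, can be collected into one sketch. This is a minimal Python rendering of the procedure as described in the text, not BigLift's actual Excel sheet; only the f_e/σ_y > 0.75 branch given in the text is implemented, and units follow equation (7) (kg/cm²).

```python
import math

def euler_stress(t_cm, b_cm, K):
    """Elastic buckling stress, equation (7):
    f_e = 1.88e6 * (t/b)^2 * K  [kg/cm^2].
    t is the web thickness already reduced 10% for corrosion,
    b is the depth of the plate panel, K the buckling coefficient."""
    return 1.88e6 * (t_cm / b_cm) ** 2 * K

def critical_stress(f_e, sigma_y):
    """Plasticity correction, equations (17)/(26), for the branch
    f_e/sigma_y > 0.75 given in the text."""
    if f_e / sigma_y <= 0.75:
        raise ValueError("the text only gives the f_e/sigma_y > 0.75 branch")
    return sigma_y * (1.0 - (3.0 * sigma_y) / (16.0 * f_e))

def critical_shear_stress(f_e, sigma_y):
    """Equation (27): the shear critical stress is f_t / sqrt(3)."""
    return critical_stress(f_e, sigma_y) / math.sqrt(3)

def no_buckling(f_c, f_crc, f_s, f_crs):
    """Interaction check: no buckling when
    (f_c/f_crc)^2 + (f_s/f_crs)^2 <= 1.00."""
    return (f_c / f_crc) ** 2 + (f_s / f_crs) ** 2 <= 1.00
```

For example, a 10 mm (1 cm) web in a 100 cm deep panel with K = 4 gives f_e = 1.88e6 · (1/100)² · 4 = 752 kg/cm², which is then reduced by the plasticity correction before the interaction check is applied.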

Finite Element Method

The Finite Element Method (FEM) is a numerical solution of systems of equations (matrices). In FEM a construction is divided into a finite number of elements, which are connected to each other at points called nodes. With the use of matrix equations, it is possible to calculate the displacements, forces and stresses at these nodes for given load cases. FEM software divides the model into many small elements and calculates the displacement of every node, which results in a very accurate analysis.

For the complete derivation of the Finite Element Method, see Hofman (1994).
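The matrix idea described above can be illustrated with the smallest possible case: a bar modelled as two spring-like elements, assembling the global stiffness matrix and solving K·u = f for the nodal displacements. The numbers are illustrative and unrelated to the tanktop model.

```python
import numpy as np

# Two-element bar: 3 nodes, element stiffness k = EA/L per element.
k1, k2 = 1000.0, 1000.0   # illustrative stiffnesses [N/mm]

# Assemble the 3x3 global stiffness matrix from the 2x2 element matrices;
# the middle entry receives contributions from both elements.
K = np.array([[ k1,   -k1,      0.0],
              [-k1,  k1 + k2,  -k2 ],
              [ 0.0,  -k2,      k2 ]])

F = 500.0                  # axial force on the free end [N]

# Apply the boundary condition u1 = 0 (node 1 fixed) by reducing the
# system, then solve for the displacements of nodes 2 and 3.
K_red = K[1:, 1:]
f_red = np.array([0.0, F])
u = np.linalg.solve(K_red, f_red)
# Springs in series: u[0] = F/k1 = 0.5 mm at node 2,
#                    u[1] = F/k1 + F/k2 = 1.0 mm at node 3.
```

A real FEM package does exactly this assembly-and-solve cycle, only with thousands of nodes and 2D/3D element matrices instead of scalar spring stiffnesses.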

Chapter 5 describes the current buckling calculation according to the Euler equation. However, that method neglects the surrounding structure. Therefore a more accurate analysis of the complete tanktop structure is necessary, and the Finite Element Method was chosen for these analyses.

FEM-based tanktop model

In the FEM-based tanktop analyses the model is exposed to several loads. The analysis used is 'NX Nastran Static and Buckling', which calculates both the static stresses that occur during the load case and the buckling behaviour. For the buckling situation the program calculates a so-called eigenvalue: a factor for the number of times the current load could be multiplied before the construction fails through buckling.
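The eigenvalue NX Nastran reports comes from the linearized buckling problem K·φ = λ·K_g·φ, where K is the stiffness matrix, K_g the geometric stiffness under the applied load, and the smallest positive λ is the factor by which that load could be multiplied before buckling. A toy sketch with illustrative 2x2 stand-in matrices (not a tanktop model):

```python
import numpy as np

# Illustrative stiffness matrix K and geometric stiffness K_g
# (in a real analysis both come from the FE model; K_g scales
# with the applied load).
K  = np.array([[ 4.0, -1.0],
               [-1.0,  3.0]])
Kg = np.array([[ 1.0,  0.0],
               [ 0.0,  0.5]])

# Generalized eigenproblem K·phi = lambda·Kg·phi, solved here by
# converting to a standard one: eig(Kg^-1 · K).
eigenvalues = np.linalg.eigvals(np.linalg.inv(Kg) @ K)
buckling_factor = eigenvalues.real[eigenvalues.real > 0].min()
# buckling_factor ≈ 3.27: the load could grow about 3.3x before buckling.
```

An eigenvalue above 1.0 means the structure can carry the applied load with margin; below 1.0 it buckles under the load as applied, which is how the NX Nastran result is read.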

Step 1

To reproduce the calculation that BigLift uses in its Excel sheets, a plate was modelled in FEMAP with the same conditions as in the Excel calculation. The FEM calculation reviews the construction based on the mesh, an assembly of elements. Smaller elements make the calculation more accurate, but also increase the calculation time. For these analyses a mesh size of 50 [mm] by 50 [mm] is used.

Conditions:

Material: Steel

Load: Force per area on surface, 2 [N/mm2]

Constraints: Fixed on the shell

Analysis: NX Nastran Static and Buckling

Mesh size: 50 [mm]

This results in several stresses from compression and bending, shear and, of course, buckling.

The results of this analysis are shown in figure 212.

Step 2

Since FEM analysis of buckling situations is relatively new for BigLift Shipping, verifying the results is difficult. Therefore the modelling and analysing were split into steps to ensure the accuracy of the model. After verifying that the previous step resulted in a good analysis, the model could be expanded: stiffeners and other girders were added and the nodes were connected to each other. This resulted in a very basic, local construction of the tanktop. The footprint of the saddle was added as the load case, and different values of the load were applied to verify the accuracy of the model in different situations.

Conditions:

Material: Steel

Load: Force per area on surface, 2 [N/mm2]

Constraints: Fixed on the shell

Analysis: NX Nastran Static and Buckling

Mesh size: 50 [mm]

The results of these analyses are shown in figure 221

Step 3

After step 2, the construction of the tanktop is expanded. The construction is modelled from frame 52 to frame 58. According to the distribution calculation sheet of BigLift Shipping, this is the part of the construction exposed to the greatest forces and stresses during sea-going conditions. The footprint of the load case is modelled as a surface with a very high Young's modulus: BigLift Shipping assumes in its calculations that the load is infinitely rigid. With this assumption the interaction between the cargo and the construction of the vessel can be ignored. In this final step the analysis calculates the complete construction that has been modelled, which also makes it possible to review the part of the construction that is actually exposed to the stresses.

Conditions:

Material: Steel

Load: Force per area on surface, 2 [N/mm2]

Constraints: Fixed on the shell

Analysis: NX Nastran Static and Buckling

Mesh size: 50 [mm]

Analysing calculation results

Implementation

Conclusion

Recommendations

Bibliography

Books

Asmus, K. (2001). Bijzondere Aspecten van de sterkte van Scheepsconstructies.

Hofman, G. E. (1994). Eindige elementen methode: HB Uitgevers.

Lambe, T. W., & Whitman, R. V. (1969). Soil Mechanics: Wiley.

Lewis, E. V. Principles of Naval Architecture (Second Revision), Volume I – Stability and Strength: Society of Naval Architects and Marine Engineers (SNAME).

Okumoto, Y., Takeda, Y., Mano, M., & Okada, T. (2009). Strength Evaluation. In Y. Okumoto, Y. Takeda, M. Mano & T. Okada (Eds.), Design of Ship Hull Structures (pp. 33-80): Springer Berlin Heidelberg.

Taggart, R., Architects, S. o. N., & Engineers, M. (1980). Ship design and construction: Society of Naval Architects and Marine Engineers.

Internet

Biglift. (2015). from http://www.bigliftshipping.com

Chevron. (2015). Wheatstone project. Retrieved 04-03-2015, from http://www.chevronaustralia.com/our-businesses/wheatstone

Rules and Regulations

Rules for Ships, Det Norske Veritas, Ch. 1 Sec. 1 (Det Norske Veritas, 2009).

Internet Addiction in the Students of Fiji School Of Nursing and Its Impact on Academic Performances

1. Introduction

1.1 Background

The world is advancing every day, and with that advancement come new trends, inventions and creations that have been incorporated into the lifestyles of many people. One of the most astonishing creations of mankind has been the internet. Internet usage has increased drastically in recent years: the internet is widely used for work, communication, shopping, entertainment and information. However, despite the benefits, this vast increase in usage has led to internet addiction in many people. Kim et al (2004) describe internet addiction as a compulsive need to spend a great deal of time on the internet, to the point where a person's relationships, work and health suffer. Internet addiction is prevalent not only in developed countries such as South Korea, Japan and the USA, where technology and internet access are readily available, but also in developing countries such as Malaysia, India, China and even some Pacific Island countries, where people recognize the uses of the internet and may consider it a daily lifestyle necessity (Lee et al, 2014). Most people in developed countries have access to the internet and, young or old, rich or poor, are frequently online; internet addiction rates in developed countries are therefore higher than in developing countries (Wellman & Hogan, 2004). Internet addiction is likely to be present in our society as the number of internet users increases every day; however, internet use is influenced by the socio-economic gap, because poorer people are not increasing their usage rate in comparison to wealthier ones (Wellman & Hogan, 2004). It is known that young adults, mostly college or university students, are more likely to go online than any other population. It has become common for students to be Googling or Facebooking as if it were a daily activity such as eating or sleeping.
Young (1996) has stated that addiction to the internet is similar to being addicted to drugs or alcohol, ultimately resulting in academic, social and occupational impairment. Students with internet addiction tend to show poor academic performance, since study time is spent on 'surfing irrelevant websites, using social media and online gaming' (Kandell, 1998).

1.2 Problem Statement

Internet addiction may be a new concept for local society, but that does not mean it is not present. College or university life is known to bring serious challenges into the lives of many students. Undergoing all these challenges exposes students to environmental influences, one of them being internet addiction. According to Mercola (2014), internet addiction or internet use disorder may not yet be defined as a mental disorder under the Diagnostic and Statistical Manual of Mental Disorders (DSM-5); however, many researchers have argued that internet addiction may be a contributing factor towards, or a borderline, addictive disorder. Various countries such as Korea, China and Taiwan have recognized the threat internet addiction poses as a public health problem and are trying to address it (Lee et al, 2014). The purpose of this research is to investigate the existence of internet addiction among the local population of college students, particularly students of the Fiji School of Nursing, and to explore the relationship between internet addiction and its impact on the academic life of the students. Another reason is that people should be aware of the psychological impairment caused by internet addiction, especially among college students.

1.3 Literature Review

According to Byun et al (2008), internet addiction in any individual is assessed through five dimensions: compulsive use, withdrawal, tolerance, interpersonal and health problems, and time management issues. The researchers discovered a relationship between internet addiction and interpersonal skills, personality and intelligence: as an individual's internet usage or internet addiction increases, attention deficit, hyperactivity and impulsiveness increase as well. The meta-analysis (Byun et al, 2008) found that increasing network capabilities contribute to social isolation and functional impairment of daily activities.

Moving on, Yen et al (2007) state that internet addiction can result from family factors. Since families play a large role in adolescent development and socialization, family factors are among the major risk factors for internet addiction. In a quantitative study conducted by Yen et al (2007), it was demonstrated that negative family attitudes or behaviours, such as parent-adolescent conflict, lower family function, alcohol use by siblings and abuse, all contribute toward internet addiction in adolescents. This research suggested that internet addiction may be a form of problematic behaviour, and that ineffective discipline and supervision and poor intra-family relationships aid in the initiation of problematic behaviours.

On the other hand, Fu et al (2010) recognize internet addiction as a social problem in which people of younger ages, considered to be less self-regulated, coordinated and focused, become more susceptible to media influences. This study (Fu et al, 2010) also states that out of 511 participants (students) from 439 households in Japan, 38% of those aged 16 to 24 were categorized as internet addicted.

One argument made by Fu et al (2006) is that female students are more likely to be addicted to the internet. This statement is, however, contradicted by Young (1996), Yen et al (2007), Liu & Kuo (2007) and Niemz et al (2005), who state that male students are more likely to be internet addicted. Niemz et al (2005) and Young (1996) state that the internet addiction rate is higher among males because they use the internet to fuel addictions to online gaming, gambling and pornography, and, unlike females, males find it difficult to admit that they are facing problems.

Moving on to the consequences of internet addiction: in an overview, Murali & George (2006) state that internet addiction affects many aspects of an individual's life, including interpersonal, social, occupational, psychological and physical aspects. The negative impacts fall mainly on family and social life, as internet addicts tend to neglect regular family and social activities and interests. Internet addiction also contributes to poor performance in schools and colleges. Psychosocial consequences include loneliness, frustration, depression and suicidal tendencies (Murali & George, 2006).

Further on, the main negative effect of internet addiction on college students lies in academic performance. According to Akhter (2013), the academic problems caused by internet addiction include a decline in study habits, a significant drop in grades, missing classes, being placed on academic probation and poor management of extracurricular activities. Akhter (2013) suggested that university students are a high-risk group for internet addiction because of their free time, the absence of parental supervision and the need to escape the pressures of university life. Akhter (2013) used Young's internet addiction test to survey undergraduates of the National University of Sciences and Technology in Pakistan and concluded that 1 in every 10 university students is internet addicted, and that of the internet-addicted students, 65% are likely to fail or drop out of school.
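Young's internet addiction test mentioned above is a 20-item questionnaire rated on a 1-5 scale. The scoring bands in this sketch are commonly cited cut-offs, included here as illustrative assumptions rather than a definitive scoring standard.

```python
def iat_score(responses):
    """Total score for Young's Internet Addiction Test: 20 items,
    each rated 1 (rarely) to 5 (always). The bands below are
    commonly cited cut-offs, used here as illustrative assumptions."""
    if len(responses) != 20 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected 20 ratings between 1 and 5")
    total = sum(responses)
    if total >= 80:
        band = "significant problems"
    elif total >= 50:
        band = "occasional/frequent problems"
    else:
        band = "average user"
    return total, band

# A respondent answering "sometimes" (3) to every item scores 60.
total, band = iat_score([3] * 20)
```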

Moving on, another study conducted by Anderson (2001) on college students concluded that excessive internet use results in sleep problems and reduced everyday activities, which lead to academic failure. From the survey, Anderson (2001) found that the average time a student spends online in a day is 100 minutes. She also stated that 16% of all participants were confirmed as internet addicted, and that these students spent more than 400 minutes a day online. Even Young (1996) states that internet availability does not improve the academic performance of students: 58% of student participants in that research showed signs of declining study habits and low grades due to internet addiction, and of those participants, 43% failed their annual examination.

2. Objectives and Aim

Aim: To find out about the impact of internet addiction on the academic performance of the students of the Fiji School of Nursing.

Objectives:

- To find out the factors associated with internet addiction in students.

- To find out about the direct and indirect impacts of internet addiction on the students.

3. Methodology

3.1 Study type

The study type for this research is a descriptive quantitative study carried out at the Fiji School of Nursing. A quantitative approach will assist in quantifying the relationship internet addiction has with the academic performance of the students, and a descriptive study will aid in describing the actual relationship between the variables, in this case internet addiction and academic performance. The chosen source of data is questionnaires, which makes data gathering more effective: students can complete the questions in their free time rather than having to make time for interviews. Another reason questionnaires are best suited to this study is that the focus group is based on a limited number of students.

Study design

This is a prospective cross-sectional study to clarify how internet addiction affects the academic performance of students. A prospective design was chosen because it is an easy and efficient method of gathering data: it allows the occurrence of internet addiction among the students to be followed over a certain period of time, so that its impact on their academic performance can be observed. A cross-sectional design was chosen because data will be collected from the students of the Fiji School of Nursing at one given time, and this snapshot will be used as the overall picture of the population.

Variables

Some of the variables that the study may include will be:

– Demographic data of the students: age, gender, and ethnicity.

– The number of students who use the internet daily.

– The availability of technology for internet access: laptops, tablets, phones, school ICT.

– The number of cases of internet addiction among the students of the Fiji School of Nursing.

– The academic performance/grades of the students of the Fiji School of Nursing.

– The amount of time students spend studying in a day.

– The estimated amount of data a student uses in a month for internet usage.

3.2 Sampling

This study will be carried out at the Fiji School of Nursing in Tamavua, Suva, Fiji Islands. A sample of undergraduate students from all three year levels (year one, year two, and year three) will be recruited. Participants will be recruited voluntarily, with both verbal and written consent obtained. A stratified sampling technique will be used: the student population of years 1, 2, and 3 will be divided into two groups. Group one (A) consists of students who have access to the internet on a daily basis, e.g. students with internet-enabled phones, laptops, and tablets. Group two (B) consists of students who do not have continuous daily access, e.g. students without a smartphone or laptop. The questionnaires will be administered only to group one (A), the students with daily internet access.
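As an illustrative sketch of the grouping step described above, the code below stratifies a student list by year level and then splits each stratum into group A (daily internet access) and group B (no daily access), with only group A receiving the questionnaire. The roster, field names, and values are hypothetical, not taken from the proposal.

```python
# Hypothetical roster; field names and values are illustrative only.
students = [
    {"id": 1, "year": 1, "daily_internet_access": True},
    {"id": 2, "year": 1, "daily_internet_access": False},
    {"id": 3, "year": 2, "daily_internet_access": True},
    {"id": 4, "year": 2, "daily_internet_access": False},
    {"id": 5, "year": 3, "daily_internet_access": True},
]

# Stratify by year level so each year is represented.
strata = {year: [s for s in students if s["year"] == year] for year in (1, 2, 3)}

# Within each stratum, split into group A (daily access) and group B (no daily access).
group_a = [s for stratum in strata.values() for s in stratum if s["daily_internet_access"]]
group_b = [s for stratum in strata.values() for s in stratum if not s["daily_internet_access"]]

# Per the sampling plan, only group A receives the questionnaire.
recipients = group_a
```

Because the split is done per stratum, the surveyed group A retains students from every year level rather than, say, only the year with the most device owners.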

Inclusion criteria: Students of the Fiji School of Nursing who use the internet daily through their smartphones, laptops, or tablets, or through the school library or ICT facilities (Group A).

Exclusion criteria: Students who do not use the internet daily through smartphones, laptops, tablets, the school library, or ICT facilities (Group B).

3.3 Plan for data collection

A structured and pre-tested questionnaire will be prepared and distributed among students of all three year levels of the Fiji School of Nursing who fall under the group one (A) category, i.e. those with daily internet access. The questionnaire will cover the students’ lifestyle around the internet: how often they use the internet and through what means, how often they study, whether internet usage clashes with their study time, and what effect daily internet usage has on their grades. The data will be collected from April to June 2015. Informed consent, both oral and written, will be obtained from the students when the questionnaires are distributed. The questionnaire will be typed and printed in English, and a sample questionnaire is attached in the annex of the proposal.

3.4 Plan for data processing and analysis

The data collected from the structured questionnaires distributed to the students will be analyzed using the Statistical Package for the Social Sciences (SPSS) for Windows, version 21.0. SPSS is a statistical analysis software package used for managing, analyzing, and presenting data.

Confidentiality will be maintained throughout the research: all personal data of the participants will be accessible only to the researchers. The data will be analyzed numerically and quantified. Words will be transformed into quantitative categories through the process of coding, and the resulting numbers will then be analyzed statistically to determine, for instance, the percentage of students facing academic problems due to internet addiction.
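The coding step described above, turning worded answers into numbers and then computing a percentage, can be sketched as follows. The response scale, example answers, and threshold are hypothetical illustrations, not values from the proposal.

```python
# Hypothetical coding scheme for a Likert-style questionnaire item.
CODES = {"never": 0, "sometimes": 1, "often": 2, "always": 3}

# Example answers to an item such as "How often do your grades suffer
# because of the amount of time you spend online?"
responses = ["never", "often", "always", "sometimes", "often"]
coded = [CODES[r] for r in responses]

# Percentage of respondents at or above an example threshold ("often").
affected = sum(1 for c in coded if c >= CODES["often"])
percentage = 100.0 * affected / len(coded)
print(f"{percentage:.1f}% report grades suffering often or always")
```

The same counting logic generalizes to any coded item, which is essentially what SPSS frequency procedures would compute over the full dataset.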

3.5 Ethical Considerations

Ethical approval for carrying out this research will be obtained from the local research committees. The proposal will first be sent to the Research Committee of the Fiji School of Nursing, then to the College of Health Research Ethics Committee, and then forwarded to the Fiji National Research Ethics Review Committee. Once approval for this research is given, written permission to begin data collection will be sought from the Head of School, Fiji School of Nursing.

Written and verbal consent will also be gathered from the participants upon distribution of the questionnaires. Confidentiality will be maintained, and the participants will be assured that no personal data such as names and addresses will be collected. An information sheet will be given with the consent form, explaining what the research is about, what participants are actually asked to do, what the risks of participation are, and how they were selected to participate in the research.

4. Work Plan

Activity / Month

March April May June July Aug Sept Oct Nov Dec

Submission of research proposal to DRC

Submission to CHREC

Data collection

Data analysis

Report writing

Dissemination seminar

5. Budget

Expenses Total

1. Data collection and data entry-Research assistant F$80.00

2. Supplies – printing questionnaires, stationery, packaging box F$80.00

3. Telecommunications F$10.00

4. Transport Allowance F$30.00

5. Data Analysis F$80.00

6. Publication F$100.00

Grand Total F$380.00

6. Plan for administration, monitoring, and utilization of results

The work plan will be strictly followed in order to complete the research, data collection, and data analysis within the given time. Expenses will be kept within the budget, and time and resources will be used wisely. The results will be presented in numeric and percentage form, using bar graphs, pie charts, and line graphs. The final report with recommendations will be submitted and disseminated to the following:

– Research office, Ministry of Health

– Local and Pacific Regional Health research conferences

– Tutors and students of the Fiji School of Nursing

– Presentation at local health research conferences and symposia

Reference List

Akhter, N. (2013). Relationship between Internet Addiction and Academic Performance among University Undergraduates. Academic Journals, 8(19), 1793-1796. Retrieved from http://www.academicjournals.org/article/article1382342222_Akhter.pdf

Anderson, K. (2001). Internet Use Among College Students: An Exploratory Study. Journal of American College Health, 50(1), 20-28. Retrieved from http://faculty.mwsu.edu/psychology/dave.carlston/WritinginPsychology/Internet/2/i5.pdf

Byun, S., Ruffini, C., Mills, J. E., Douglas, A. C., Niang, M., Stepchenkova, S., Lee, S. K., … Blanton, M. (2009). Internet Addiction: Metasynthesis of 1996–2006 Quantitative Research. CyberPsychology & Behavior, 12(2), 203-207.

Fu, K., Chan, W. C., Wong, C. W. P., & Yip, P. W. (2010). Internet addiction: prevalence, discriminant validity and correlates among adolescents in Hong Kong. The British Journal of Psychiatry, 196(6), 486-492. Retrieved from http://bjp.rcpsych.org/content/196/6/486.full-text.pdf+html

Kandell, J. J. (1998). Internet Addiction on Campus: The Vulnerability of College Students. CyberPsychology & Behavior, 1(1), 11-17. Retrieved from http://online.liebertpub.com/doi/abs/10.1089/cpb.1998.1.11

Kim, H. S., Chae, K. C., Rhim, Y. J., & Shin, Y. M. (2004). Familial Characteristics of Internet Overuse Adolescents. Korean Association of Medical Journal Editors, 43(6), 733-739. Retrieved from http://www.koreamed.org/SearchBasic.php?RID=0055JKNA/2004.43.6.733&DT=1

Lee, J. Y., Shin, K. M., Cho, S., & Shin, Y. M. (2014). Psychosocial Risk Factors Associated with Internet Addiction in Korea. Psychiatry Investigation, 11(4), 380-386. Retrieved from http://synapse.koreamed.org/DOIx.php?id=10.4306/pi.2014.11.4.380

Liu, C., & Kuo, F. (2007). A Study of Internet Addiction through the Lens of the Interpersonal Theory. CyberPsychology & Behavior, 10(6), 801-804. Retrieved from http://www.encognitive.com/files/AStudyofInternetAddictionthroughtheLensoftheInterpersonalTheory.pdf

Mercola. (2014). Internet Addiction is the New Mental Health Disorder. Retrieved March 6, 2015, from http://articles.mercola.com/sites/articles/archive/2012/11/24/internet-addiction.aspx

Murali, V., & George, S. (2006). Lost online: an overview of internet addiction. Advances in Psychiatric Treatment, 13(1), 24-30. Retrieved from http://apt.rcpsych.org/content/13/1/24#sec-5

Niemz, K., Griffiths, M., & Banyard, P. (2005). Prevalence of Pathological Internet Use among University Students and Correlations with Self-Esteem, the General Health Questionnaire (GHQ), and Disinhibition. CyberPsychology & Behavior, 8(6), 562-568.

Wellman, B., & Hogan, B. (2004). The Immanent Internet. NetLab. Retrieved from http://groups.chass.utoronto.ca/netlab/wp-content/uploads/2012/05/The-Immanent-Internet.pdf

Yen, J., Yen, C., Chen, C., Chen, S., & Ko, C. (2007). Family Factors of Internet Addiction and Substance Use Experience in Taiwanese Adolescents. CyberPsychology & Behavior, 10(3), 323-329. Retrieved from http://ntur.lib.ntu.edu.tw/bitstream/246246/173340/1/16.pdf

Young, K. S. (1996). Internet addiction: the emergence of a new clinical disorder. CyberPsychology & Behavior, 1(3), 237-244. Retrieved from http://www.chabad4israel.org/tznius4israel/newdisorder.pdf

Annex

Questionnaires: Internet Addiction in Students of FSN and its Impact on Academic Achievements

Age:

Sex: M / F

Personal status:

Student: yes / no

If yes, what are you studying?

1. How many hours do you spend surfing the internet in one week?

2. How often do you find that you stay online longer than you intended?

3. How often do you neglect school work to spend more time online?

4. How often do others in your life complain to you about the amount of time you spend online?

5. How often do your grades or school work suffer because of the amount of time you spend online?

6. How often do you become defensive or secretive when anyone asks you what you do online?

7. How often do you block out disturbing thoughts about your life with soothing thoughts of the Internet?

8. How often do you find yourself anticipating when you will go online again?

9. How often do you fear that life without the Internet would be boring, empty, and joyless?

10. How often do you snap, yell, or act annoyed if someone bothers you while you are online?

11. How often do you lose sleep due to late-night log-ins?

12. How often do you try to cut down the amount of time you spend online and fail?

13. How often do you try to hide how long you’ve been online?

14. How often do you choose to spend more time online over going out with others?

15. How often do you feel depressed, moody, or nervous when you are off-line, which goes away once you are back online?

Multicultural competence

In the modern context, demographic changes are becoming more prominent across the globe, and this phenomenon suggests that multicultural counseling is inevitable. Hence, the importance of counselors’ multicultural competence, which refers to the ability to interact effectively with individuals who are culturally or socioeconomically different, cannot be overstated. In such cases, a counselor must be multiculturally competent in order to communicate effectively with a client who comes from a different culture. With that in mind, this essay will discuss the relevance of multicultural competence to the effectiveness of multicultural counseling.

Due to the increasing diversity of culture in society, individuals seeking help from counselors may come from various cultural backgrounds. In order to adapt to this situation, counselors are required to understand the various ways that culture can affect the counseling relationship. For example, a male counselor might casually greet his client, who happens to be a female Muslim, with a handshake. Unaware of the Muslim cultural norm that men are not permitted to touch or shake hands with the opposite gender, the counselor is actually performing a forbidden action. This lack of sensitivity to individuals’ cultural backgrounds can cause serious consequences, such as an individual’s refusal to participate, which in turn hinders the development of the counseling relationship (Ahmed, Wilson, Henriksen Jr., & Jones, 2011). Therefore, I believe that the counselor’s role has evolved with the diversity of culture, requiring counselors to be more culturally aware and provide multicultural guidance; in other words, to be multiculturally competent.

To elaborate, multicultural competence is fluency in more than one culture, or specifically, in whichever culture the individual is currently in. Sue and Sue (2012) defined the multiculturally competent counselor along three main dimensions. First, being multiculturally competent means being actively aware of one’s own assumptions, behaviors, values, and biases. Second, the multiculturally competent counselor attempts to understand the perspective of a culturally different client. Last, multicultural counselors actively develop and practice intervention strategies when guiding clients from different cultural backgrounds. All in all, multicultural competence enables a counselor to realize that standard counseling methods might not benefit a client from a different cultural background, and to understand that culture is not to be held accountable for the client’s problems.

However, even for the multiculturally competent counselor, cultural groups are not discrete; they overlap. In today’s multicultural context, individuals acculturate to different cultures, which blurs the differences between individuals. Furthermore, even if it were possible to classify every client into a distinct subgroup, it would be an insurmountable task to be prepared for every possible client. Therefore, instead of emphasizing clients’ differences, Patterson (2004) suggested employing a universal system known as the person-centered approach. Under the person-centered approach, the counselor’s role remains the same, to assist and guide clients toward their goals and objectives, but the focus shifts away from having the proper skill set, technique, and knowledge, and toward genuineness, empathy, and unconditional positive regard (Raskin, Rogers, & Witty, 2008).

Even though it may be impossible to classify every client into a specific subgroup, multicultural training has been shown to decrease implicit racial prejudice and increase cultural self-awareness (Castillo, Brossart, Reyes, Conoley, & Phoummarath, 2007). By increasing cultural self-awareness and acknowledging how culture can affect the process of counseling, counselors can develop an empathic understanding of their clients. Moreover, by reducing racial prejudice, counselors refrain from judging clients based on their own values or cultural beliefs, and help clients reach goals and objectives without imposing their personal cultural values. In contrast, a counselor without multicultural competence is seen as less empathic and lacking culture-specific knowledge, and may even be seen as holding racial stereotypes or biases (Chang & Berk, 2009). The resulting mistrust toward the counselor undermines the counselor’s credibility, which can have an impact on the counseling relationship.

In conclusion, based on the findings of numerous studies, I believe that multicultural competence can improve the effectiveness of counseling. As society becomes more interconnected through globalization, multiculturally competent counseling becomes increasingly important to address the issues that may surface from an array of cultural backgrounds. Although it may be difficult to identify which cultural group a particular client belongs to in order to practice a specific intervention, multicultural competence has been shown to generally increase the effectiveness of counseling by decreasing implicit prejudice and increasing counselors’ self-awareness of their own and their clients’ values. Furthermore, focusing only on the counselor’s knowledge and skill set encourages the counselor to be problem-focused rather than emotion-focused, which might cause clients to feel that the solutions provided were not tailored to their specific life context. Therefore, despite arguments to the contrary, counselors who are multiculturally competent still yield benefits in society today.

Young people's perceptions of smoking

The World Health Organization (WHO 2014) recognises that engaging in risk behaviours increases exposure to mortality and morbidity. A risk behaviour has been defined as something that intentionally or unintentionally puts a person at greater risk of injury or disease. This essay will look at the risk of smoking in young people, including the health implications, epidemiology, and prevalence. An age range of 12-21 year olds will be used when identifying literature. There will be a primary focus on policies and guidance for health improvement in Scotland; in addition, legislation and reports from the whole of the UK will support the discussion of health improvement in young people. The essay will aim to analyse the literature to determine the reasons why young people smoke, and also consider the rise of social media and electronic cigarettes. Furthermore, it will explore the context of care within schools and the community, and discuss health inequalities. Additionally, it will identify and critique a recent health improvement campaign video aimed at young people. The content and design of the video will be discussed in detail to analyse its appropriateness for the target age group. Throughout the critique, reference will be made to underpinning models of behaviour change and health improvement within Scotland and the UK.

Health improvement is at the forefront of Scotland’s current policies and aims. The mission is to build a healthier Scotland, focus on inequalities, and develop actions that will improve the overall population’s health (NHS Scotland 2014a). Policies aim to support everyone in Scotland to live healthier and longer lives, delivered by quality healthcare. The government has set out national approaches to target underlying causes of poor health, such as smoking (The Scottish Government 2010). The Scottish Government recognises that smoking is still one of the leading causes of preventable death. It aims to make Scotland a smoke-free generation by targeting health promotion towards young people, in an effort to reduce poor health in later life (The Scottish Government 2008).

In Scotland, a quarter of all deaths are smoking related, with 56,000 people being admitted to hospital each year with smoking-related illnesses (ASH 2014). This figure continues to put substantial strain on the national health service (ScotPHO 2012). Smoking increases the risk of cancers, heart attacks, and strokes. It also worsens and prolongs conditions such as asthma and respiratory diseases (NHS 2013). Early exposure to the harmful toxins in tobacco increases the risk of related cancers. Young smokers are also prone to short- and long-term respiratory conditions such as wheezing, coughing, and phlegm. Girls, in particular, who start smoking at a younger age are 79% more likely to develop bronchitis or emphysema in later life, compared to those who began smoking in adult life (Home Office 2002). The total annual cost of treating smoking-related illnesses in Scotland is estimated at around £409 million. Consequently, one of NHS Scotland’s HEAT targets for 2013/2014 was to deliver at least 80,000 successful quits before March 2014 (The Scottish Government 2014).

A young person is classified by the World Health Organisation (WHO 2015) and the NHS as someone aged between 10 and 24 years old (NHS Health Scotland 2014c). Around 15,000 young people in Scotland start smoking each year (NHS Health Scotland 2014b). Although this figure is high, the proportion of young people who have ever smoked has halved in the last decade, falling from 42% in 2003 to 22% in 2013 (ASH 2014). Evidence has shown that the younger a person begins to smoke, the more likely they are to continue during adulthood, putting them at increased risk of morbidity and mortality in later life (RCP 2010). It has been argued that risk behaviours can set life patterns and have long-lasting negative effects on a person’s future health and wellbeing (WHO 2015). Most smokers begin smoking before the age of 18, which is why health improvement in young people is of high importance in the UK (The Information Centre 2006). UK law has changed considerably within the last decade in an effort to reduce smoking. In 2007, the legal age at which a person could purchase tobacco products was increased from 16 to 18 years old (The Secretary of State for Health 2007). One year prior, the UK Parliament introduced the smoking ban, which prohibited smoking in any public premises (The Secretary of State for Health 2006). The main focus of this legislation was the protection and health of young people. Early intervention is said to be one of the key areas in reducing mortality and morbidity for young people (Department of Health 2013).

More recently, the government has highlighted new concerns about the rise in popularity of electronic cigarettes. There are fears that electronic cigarettes could normalise smoking, thus backtracking on the efforts of the past decade to de-normalise it (Britton and Bogdanovica 2014). There is a real debate on whether electronic cigarettes appeal to young people. Electronic cigarettes come in a variety of exciting flavours, such as bubblegum and banana, and are marketed in colourful and fun packages that may appeal to young people (Public Health England 2015). Statistically, however, it has been shown that young people’s use of electronic cigarettes is primarily confined to those who are already experimenting with regular cigarettes (Office for National Statistics 2012). Electronic cigarette use is found to be rare amongst young people who have never smoked before (ASH 2014).

Although statistics for the UK and Scotland show that smoking in young people seems to be on the decline, it is clear that a sizable minority of young people still start smoking (ASH 2014). In order to campaign for a smoke-free nation, it is important to understand the reasons why young people smoke. It has been noted that young people are susceptible to what is attractive and risky. As with following fashion, media, and the internet, young people want to be in with the crowd. Where you live plays a big role, alongside whether your parents or friends smoke. To add to this, positive tobacco advertisements pave the way for young people to see smoking as exciting and relatively normal (BMA Board of Science 2008). A recent report (Amos et al 2009) summarised the key reasons young people smoke. These included individual beliefs and self-image, social factors such as parents or friends smoking, community factors, and ease of access to tobacco. Gough et al (2009) conducted a focus group study that invited 87 males and females, aged between 16 and 24 years, to talk about their reasons for smoking. Although relatively small, the study found that young people understood smoking to be a rational decision. Although the young people had a very clear awareness of health issues, the majority did not link smoking at a young age with health as something to worry about until they are ‘older’ (Gough et al 2009). A larger study in Romania found strong peer influence, alongside lower self-efficacy, to be the primary reason for smoking in 13 to 14 year olds (Lotrean 2012). That narrow age range, however, is not sufficient to support a valid argument about young people as a whole. The latter two studies also did not delve much into the connection between youth smoking and social deprivation. There continues to be a strong association between smoking and health inequalities.
In Scotland it was found that smoking in the most deprived areas equated to 36%, compared with only 10% in the least deprived areas (ASH Scotland 2014). Health inequalities are at the heart of public health improvement. The overall health of the public seems to be improving, yet health inequalities have worsened and the gap has increased (Health Development Agency 2005). Other levels of influence noted were price, marketing and promotion, self-esteem, and values and beliefs (Edwards 2010). A person’s values and beliefs can also play a role in health behaviours.

When looking at health improvement in young people, it is important that everyone working in national and local government, healthcare, social care, and the school and education system contributes (Department of Health 2012). It has been recognised that school plays a vital role in the education and promotion of young people’s health, building knowledge of personal wellbeing. School nurses play an important role in health promotion and health education, and can be incredibly valuable members of staff for early intervention. It has been suggested that school nurses may have a lifelong impact on a young person’s health in adulthood through early intervention (RCN 2012). Current guidelines dictate that every school must have a no-smoking policy. These policies should be widely available and visible all over the school so that young people are aware of them. Schools and school nurses should also support smoking cessation information in partnership with NHS services, and offer help, information, and health education to young people on smoking (NICE 2010). However, a systematic review of school interventions to stop young people smoking found no significant effect of interventions in schools to discourage smoking. There was, however, positive evidence for interventions which taught young people how to be socially competent and resist social influences. The strength of this study is its size: the review included 134 studies and 428,293 participants, and two authors independently reviewed the data in order to compare and contrast the evidence. On the other hand, bias may have been introduced at a low level due to the high variability of the outcome measures used. The trials covered the age range of five to 18, which only partially addresses the focus age range of 12-21 used here (Thomas et al 2013).

Another systematic review and meta-analysis found strong associations between parental and sibling smoking and young people’s own uptake of smoking. The analyses confirmed that when young people are exposed to smoking in the household, the chances of them starting to smoke themselves are significantly increased (Leonardi-Bee et al 2011). It can therefore be argued that education on health promotion should also start at home and in communities. The earlier the intervention, the more effective it is in preventing health-damaging behaviours. Action needs to be taken at a social, environmental, and economic level, as well as through legislation (NICE 2007).

Current UK guidelines advise using a range of strategies to change young people’s perceptions of smoking and promote health improvement. Resources include posters, leaflets, campaigns, and new opportunities arising from social media, all in an effort to alert young people to the dangers of smoking (NICE 2010). In November 2014, Cancer Research UK launched a UK-wide campaign via YouTube. The video urges young people to use social media to protest against the tobacco industry (Cancer Research 2014). It features UK Olympic gold medallist Nicola Adams and music star Wretch 32. The video tells the tobacco industry that young people are no longer puppets on a string and will not be contributing to its profits, which exceed those of Coca-Cola, McDonald’s, and Microsoft combined. It invites young people to take a ‘selfie’ giving two fingers up to the tobacco industry and post it via Twitter and Facebook. The campaign is also supported by the UK tobacco control agency ASH. As of March 2015, the YouTube video had received almost a quarter of a million views in just four months.

We are in a new digital age where social media and technology are part of daily life for young people. If the government is serious about reaching out to young people, it needs to step into their world of social media and technology and fully embrace it (Nicholson 2014). The YouTube video by Cancer Research aims to get to the very heart of young people by doing just that. The resource is accessible and approachable for young people, as users can view it in privacy, on their mobile phones, or with friends. The language in the video is focused on connecting with young people: it uses words such as ‘selfie’, ‘Coca-Cola’, ‘McDonald’s’, and ‘hashtag’, modern words and brands that most young people will recognise. The video is also empowering and revolutionary, with inspiring words such as ‘connected’, ‘informed’, and ‘talk back’, creating a positive message that this generation is smarter and makes better choices. ‘Be a part of it’ is a phrase near the end, which creates a feeling of wanting to be part of something, part of a group. Recent social media statistics for the UK (Social Media Today 2014) show Facebook now has 31.5 million users and Twitter has 15 million. Social media can provide health promotion opportunities for patients and be used as a communication tool by nurses. Social media can be incredibly powerful; however, as professional nurses we must also adhere to professional boundaries (Farrelly 2014), such as the NMC guidance on misuse of the internet (NMC 2009).

In order to change risk behaviours, it has been noted that key elements for success include using resources that are targeted and tailored to a specific age and gender. Similarly, alternative choices to risk behaviours should be offered, rather than simply telling an individual what to do (Health Development Agency 2004). The video is encouraging to young people, as its content has connotations of choice.

The YouTube video is unlike current leaflets and posters promoting anti-smoking messages to young people. Instead of listing shock tactics and diseases associated with smoking, it focuses on making smoking look ridiculous from a financial point of view, by expressing how much money the tobacco industry makes. The characters in the video are relatable, with varying genders and accents, which broadens the appeal of the campaign instead of focusing on just one target group. The video also asks viewers to upload a picture of themselves to social media, giving two fingers up to the tobacco industry and using the hashtag #smokethis. This appeals to young people, who use social media nowadays as a way of spreading messages and connecting with others. A leaflet or poster may not have the same effect, as there is no social interaction; a Twitter or Facebook post, however, can be shared and viewed worldwide. This enables young people to feel that they have a voice and a sense of empowerment. Social media is powerful. Although the video promotes an anti-smoking message, it may also be shared worldwide in negative forms: the hashtag #smokethis has recently been used by young people uploading photos of themselves smoking both cigarettes and cannabis, in rebellion.

The World Health Organization identified key underlying principles of health promotion in its Health for All and Health 21 movements (WHO 1999). These included equity, empowerment, participation, co-operation and primary care. The #smokethis campaign encompasses both empowerment and participation: the video encourages participation through social media, and empowerment by standing up for something. A revised health improvement model by Tannahill (2005) suggests that social and economic factors are among the biggest factors in health promotion; the video addresses both by showing how much money is wasted on the tobacco industry. It is also relevant to his earlier model of health improvement, which stresses the importance of health education and prevention. This video is very much preventative, in that it is trying to prevent the uptake of young smokers.

Cancer Research has been clever in taking a new social media approach. A study by Jepson et al (2010) found that although some evidence suggests media interventions may be effective in reducing the uptake of smoking in young people, the overall evidence was not strong. It would be interesting to see the findings of a more recent study, given the rise of social media in the last five years.

The main aim of the video is to recruit 100,000 young people to not start smoking this year. It is estimated that across the whole of the UK, 207,000 young people start smoking each year (Hopkinson 2013). Given this figure, halving the number of young people who start smoking each year would have a dramatic impact on clinical practice. Not only would it decrease the number of GP attendances for symptoms such as coughing and wheezing, it would also have an incredible effect on later-life hospital admissions for heart disease, strokes and cancers.

The only negative of this video is the possibility for young people to share pictures of themselves smoking online as a rebellious stance, which may influence the views of other young people. It still does not address the issue of health inequalities and community factors, which remain in the background as reasons for smoking. The Black Report (Department of Health 1980) documented that those in a lower social class have a higher risk of illness and premature death than those in a higher class, and that rates of substance abuse are also higher. As well as online health promotion and UK-wide campaigns, there still need to be community, social, school and family interventions to reach those who are more deprived. An example is a study by Bond et al (2011), which found that residents in disadvantaged areas of Glasgow had higher rates of smoking and were less likely to quit; areas with better housing had better quit rates, suggesting that environment plays a key part in health.

As a whole, the Cancer Research video is inventive, modern and appropriate for the target age range. It is easily accessible and creates discussion and the opportunity to be involved in something. Mass media campaigns can promote health improvement; however, there still need to be approaches such as family, community and school interventions to address health inequalities and the social circumstances affecting behaviour. Statistics have shown a steady decline in young people smoking, which is encouraging. The UK is currently in the process of introducing plain packaging on all tobacco products, in a further effort to discourage people from smoking (Barber and Conway 2015).

Statistics for the UK and Scotland show that each year fewer young people begin smoking. Despite the efforts of the government, legislation and regulations may not always discourage young people from smoking. As the UK prepares for further legislation to introduce plain tobacco packaging, it is evident that it is becoming increasingly difficult for young people to access tobacco. It is indisputable, however, that social factors, peer pressure and health inequalities continue to be underlying causes of risk behaviours. There is also some contrasting literature on whether health promotion in schools can discourage young people from smoking; despite this, best practice would suggest early intervention is better than no intervention. Social media is on the rise and is quickly becoming a daily habit for young people. They use it to connect, talk, share and interact. It is constantly changing and requires healthcare professionals to stay up to date. More recent studies would be beneficial to determine the effect of health improvement interventions via YouTube and Facebook. Social media could potentially become one of the biggest communication tools for nurses in future practice when looking to get to the heart of young people.

A Report on E-Commerce Industry

A Study on Indian E-Commerce Industry

Executive Summary

The following report examines the Indian e-commerce industry. All data has been collected from the internet, research papers and surveys. E-commerce is one of the biggest forces to have taken Indian business by storm. It is creating an entirely new economy, which has huge potential and is fundamentally changing the way business is done. It has advantages for both buyers and sellers, and this win-win situation is at the core of its phenomenal rise. Rising incomes and a greater variety of goods and services that can be bought over the internet are making buying online more attractive and convenient for consumers all over the country.

The report gives information on different aspects of the Indian e-commerce industry. For simplicity, it is divided into the following chapters.

Chapter 1 ‘ Introduces to the Indian E-commerce industry and defines its importance to the economy. The objective of the study is set in this chapter.

Chapter 2 – Briefs about the Global and Indian scenario. Comparison of U.S and Indian E-commerce

Chapter 3 ‘ Provides a brief insight into the Structure of the industry with the HHI index the Chapter 4 ‘ Has the macro environmental analysis of E-Commerce industry which includes the PESTEL analysis.

Chapter 5 – Briefs the Competitive analysis using Porters 5 Forces.

Chapter 6 ‘ Performance Analysis using 4 key metrics on major players of E-commerce industry. Also included the Internet users traffic comparison on top 10 e-commerce sites in India.


CHAPTER 1

INTRODUCTION


1.1 Introduction

Over the past few years, the growth of the internet has transformed the world in terms of consumer expectations and consumer behaviour. Online shopping websites have had to adapt to drastic changes in consumer behaviour and shopping patterns. Companies are now willing to change their marketing strategies, having understood that traditional selling practices will not work in a changing technological world.

'Buying and selling products online' opened a new chapter in the internet world in 1991. Ebay.com played a significant role in creating a revolution in online e-commerce. Until then, nobody thought that purchasing all kinds of products online would become a worldwide trend, with India sharing a good part of the success. In the late 1990s, RediffShopping.com and Ebay.com gave Indians their first-hand experience of e-retail.

Initially, for weekly shopping of groceries, clothes, shoes and cosmetics, the majority of people preferred to get into their cars and drive to the supermarket for groceries, or to shops and malls for other basic products. Now it is common news that people in India buy even their groceries online.

1.2 Business Models of the E-commerce Industry


Deals Websites

1) Flash Sales Sites: A flash sales e-commerce website is a B2C business model where the website sells products directly to end customers. The site normally manages the entire e-commerce lifecycle on its own or through partners, and the consumer pays the business owning and managing the website directly. Consumers benefit from huge discounts, at times ranging from 50% to 90%, and prefer to buy the products they have always aspired to own. They are normally unaware, or do not really care, if the products they buy from such websites are obsolete or no longer in fashion.

2) Daily Deals Sites: A daily deals website also operates as a B2C website and typically showcases a very lucrative sale on a single product or a set of products. Unlike a flash sales website, such a deal is time-bound (usually a day), which compels the user to make an immediate purchase decision.

3) Group Buying Sites: A group buying website is a unique B2C business model where the website invites buyers to purchase products or services at a discounted or wholesale price. As on daily deals sites, the products advertised on group buying sites are time-bound, but usually not limited to a day.

Online Subscriptions

An online subscription website works in a similar manner to an offline subscription for any kind of service. Such websites showcase the entire catalogue of subscription options for users to choose from and subscribe to online.

E-Retailing

A number of B2C e-commerce websites offer a range of products and services to customers across different brands and categories. Such websites buy products from brands or their distributors and sell them to end customers at market-competitive prices. Though their modus operandi is the same as that of a flash sales website, their business objective is to offer the latest products to end customers online at the best possible prices.


Marketplace

Another business model gaining traction in India recently is the online marketplace model, which enables buyers to get in touch with sellers and make a transaction. In this business model, the website owners do not buy products from the sellers but act as mediators to facilitate the entire e-commerce transaction. They assist sellers with various services like payment collection and logistics, but prefer not to hold inventory in their own warehouses.

1) C2C Marketplace

A C2C (Consumer to Consumer) marketplace is an online marketplace where individual consumers can sell products to individual buyers. As a seller, even if you do not run a business, you are free to sell your products through such a marketplace to end customers.

2) B2C Marketplace

A B2C (Business to Consumer) marketplace is an online marketplace where only business owners can sell their products to the end customer. The process is more or less the same as that of a C2C marketplace, with the exception that it does not allow individual users to sell their products online. The best example of this kind of website is SnapDeal.com, which has now become a B2C marketplace.

3) B2B Marketplace

In a Business-to-Business e-commerce environment, companies sell their goods online to other companies without being engaged in sales to consumers. A B2B web shop usually contains customer-specific pricing, customer-specific assortments and customer-specific discounts. Indiamart.com and the TATA group's tatab2b.com are popular sites in India.

Exclusive Brand Stores

This is the latest business model of its kind, recently started in India. In this model, various brands set up their own exclusive online brand stores to enable consumers to buy directly from the brand. A few examples of brands operating through an exclusive online brand store are Lenovo, Canon, Timex, Sennheiser, HP, Samsung and Mobilestore.com.

1.3 Role of the Internet in E-Commerce

Both large and small companies use the internet for promotional activities. Some of the advantages of e-commerce are:

1. Availability: The internet is widespread these days and easily accessible, which has increased online purchases of goods and commodities.

2. Open to All: The internet allows everyone to access and transact from any global location. Moreover, it offers customers a far wider choice of products than is possible at local retail stores.

3. Global Presence: With a global presence, any commodity can be accessed from anywhere in the world with just a laptop and an internet connection, putting every product only a step away.

4. Professional Transactions: The internet allows professional transactions, including decision making, with just one click.

5. Low Cost, More Earnings: E-commerce can be run with more earnings and fewer expenses. The simplest approach is to replace salespeople with a well-designed, informative web page that lets customers have the items they like delivered to their doorsteps with a click of a mouse.

1.4 Changes due to online shopping in India

According to a report by BCG (Boston Consulting Group), the Indian economy will reach $10.8 trillion by 2016, ranking 8th on the global chart. In terms of exports through the internet, India ranks top in services while China occupies the top position in goods. As of June 2014, India had a user base of about 250.2 million. It is estimated that by 2024-26, the e-commerce market in India will be worth $260 billion.

E-commerce is growing in India at an exponential rate. According to a recent online retailing report by Forrester, retailers experienced twenty-eight percent year-on-year growth in 2012. The same study found that digital consumers spend almost around $1.46 billion in cyber cafes, indicating that the number of online users will keep increasing year after year. Consumer behaviour is one of the biggest reasons for the e-commerce boom in India.

Major reasons for online shopping growth in India:

- Increasing broadband connections and 3G penetration in India.
- A much wider range of products is available than in brick-and-mortar retail.
- Living standards are rising for the middle class, and disposable income is also increasing.
- Online products are usually available at a discount, priced below the same products in physical shops.
- Lack of time and busy lifestyles leave little room for offline shopping.
- Evolution of the online marketplace model, with websites like eBay, Flipkart, Snapdeal, etc.
- Purchasing takes less time, as there is no need to stand in queues as in offline shopping.

1.5 Advantages of E-Commerce

- With e-commerce as an alternative, shopping is no longer constrained by time, distance or place: customers can shop at any time and from anywhere they prefer.
- E-commerce provides certified branded products, so that even typical Indian customers are ready to buy products such as clothes and shoes without touching them or trying them on for fit.
- Extra expenditure previously incurred on labour, etc. is avoided, saving considerable funds.
- Shopping on the web is the most feasible option for metro city residents.
- Transaction time has reduced.
- If a particular brand's product is unavailable, alternative products from other brands are offered to the customer.
- One-day delivery and door-to-door delivery have boomed in the market with the evolution of e-commerce.

1.6 Disadvantages of E-Commerce

- Unprofessionalism has increased, since any company can put up a business portal regardless of its trustworthiness.
- Customer interaction is minimal, so product quality and customer satisfaction remain matters of concern.
- Hackers and crackers constantly look for chances to steal the personal details customers share when making online payments.
- E-commerce is discouraging for the purchase of precious items such as jewellery, where buyers can only glance at products rather than wear them or check their quality.
- The e-commerce concept is in a mess, as many online sites do not deliver services as promised at the time of placing the order.
- The authenticity of products is hard to trust.

1.7 OBJECTIVES OF THE STUDY

- To study the global and Indian scenario with respect to the e-commerce industry
- To study the structure of the Indian e-commerce industry
- To study the macro-environmental factors affecting the Indian e-commerce industry
- To analyse the industry using Porter's Five Forces
- To study the performance of major players in the industry
- To compare the U.S. and Indian e-commerce industries
- To study the future opportunities and prospective growth in the industry


CHAPTER 2

GLOBAL AND INDIAN SCENARIO


2.1 Status of the global e-commerce industry

The middle class in many developing countries, including India, is rapidly embracing online shopping. However, India falls behind not only the US, China and Australia in terms of internet density, but also countries like Sri Lanka and Pakistan. Sri Lanka has an internet penetration of 15 percent; better internet connectivity and the presence of an internet-savvy customer segment have led to the growth of e-commerce there, with an existing market size of USD 2 billion. Pakistan, also with an internet penetration of 15 percent, has an existing consumer e-commerce market of USD 4 billion. Incidentally, FDI in inventory-based consumer e-commerce is allowed in both these countries (IAMAI-KPMG report, September 2013).

A.T. Kearney's 2012 E-Commerce Index examined the top 30 countries in the 2012 Global Retail Development Index (GRDI). Using 18 infrastructure, regulatory and retail-specific variables, the Index ranks the top 10 countries by their e-commerce potential.

Following are some other major findings of the Index:

i) China occupies first place in the Index. The G8 countries (Japan, United States, United Kingdom, Germany, France, Canada, Russia, and Italy) all fall within the Top 15.


ii) Developing countries feature prominently in the Index. Developing countries hold 10 of the 30 spots, including first-placed China. These markets have been able to shortcut the traditional online retail maturity curve, as online retail grows at the same time that physical retail becomes more organized. Consumers in these markets are fast adopting behaviours similar to those in more developed countries.

iii) Several "small gems" are making an impact. The rankings include 10 countries with populations of less than 10 million: Singapore, Hong Kong, Slovakia, New Zealand, Finland, United Arab Emirates, Norway, Ireland, Denmark, and Switzerland. These countries have active online consumers and sufficient infrastructure to support online retail.

iv) India is not ranked. India, the world's second most populous country at 1.2 billion, does not make the Top 30 because of low internet penetration (11 percent) and poor financial and logistical infrastructure compared to other countries.

The countries at the top of the e-commerce table combine the required technologies with high internet density, high-class infrastructure and a suitable regulatory framework. India needs to work on these areas to realize the true potential of e-commerce business in the country.

2.2 Comparison of E-Commerce in the US and India

The Indian e-commerce market is expected to reach as high as USD 6 billion by 2015, a substantial growth of 70% over 2014. It is evident that India is slowly becoming like the US in the area of e-commerce.

India has started doing well in the e-commerce market because of the growing number of people with access to mobile internet; broadband usage increased about three-fold over the last two years (2013 and 2014).


2.2.1 Cash on Delivery (COD) system in India

Cash on Delivery is an Indian phenomenon; US consumers mainly transact online using credit cards or through PayPal. Indians are known to be price sensitive, and even though e-retailers try to lure them online with heavy discounting, consumers are still wary of making prepaid orders.

The situation is different in the US. Even though the discounting games continue there too, users generally go with prepayment, because the penetration of credit cards and electronic payments is highly evolved in that market. In India, by contrast, despite an estimated population of 1.252 billion, only 18.8 million credit cards existed in the country as of last year, alongside approximately 331 million debit cards. The popularity of COD is directly dependent on the trust issues consumers have with online retailers.

2.2.2 Offers and excitement created among customers

The trend of mega online shopping festivals started in the West, moving from offline traditions such as the Macy's Day Parade and Black Friday to online festivals and new trends like Cyber Monday.

India is still naive here: even three editions of the Great Online Shopping Festival have not been able to make the impact expected of them. The festive season of Diwali is a time when Indians indulge in spending, and while Indian retailers were busy creating their respective versions of 'mega sales', US-based Amazon was the first to seize this opportunity with its 'Diwali Dhamaka Sale'. Consumer shopping behaviour also differs between the two countries.

Indian online retail does bring offers and excitement to consumers, but not on the scale seen in the US.


2.2.3 Type of Product that People Buy Online

Nowadays even buying real estate and automobiles online is becoming common in India. In general, however, apparel and electronics remain the most popular categories bought online, as in the US; these two categories dominate both markets.

An area where India lags is online grocery retail. Grocery retail and logistics are highly evolved in the US; in India the segment was long considered a taboo and went untouched for years, even though the sentiment was strong. Players are now emerging in this segment, with start-ups like BigBasket and LocalBanya, and mature players like Reliance Fresh taking the lead.

The local Indian 'banya' is still stronger than online stores. Unorganised retail in India is an area where e-commerce could make a difference and eventually take over.

2.2.4 Logistics and Regulations

Logistics infrastructure is still not up to the mark in India. While India Post is doing a great job of helping e-commerce players, no dedicated e-commerce company has been able to scale its operations to reach all postal codes. Internet penetration in India, at 18% against 87% in the US, is a big hindrance.

When it comes to technology, site crashes, issues in ERP systems and the like are commonly heard of in India, whereas in the US the technology is very strong.

Government bodies in India have not yet matured towards the online businesses in operation. There were no clear guidelines for FDI until recently, and even there the Government has decided to treat online and offline retail alike. Some online marketplaces keep running into warehousing policy issues. Heavy discounting is another questionable area: while the Government has clear guidelines for Maximum Retail Price, there is no lower bar set.


CHAPTER 3

STRUCTURE OF THE INDUSTRY


3.1 HERFINDAHL - HIRSCHMAN INDEX

The Herfindahl-Hirschman Index (HHI) is a commonly accepted measure of market concentration. It is calculated by squaring the market share of each firm competing in a market and then summing the resulting numbers. The HHI can range from close to zero to 10,000.

The HHI is expressed as: HHI = S1^2 + S2^2 + S3^2 + ... + Sn^2 (where Si is the market share of the i-th firm). The closer a market is to being a monopoly, the higher the market's concentration (and the lower its competition).

For the e-commerce industry, the HHI, based on market share, is calculated as follows:

Company     Sales (Rs cr)   Market share (%)   HHI contribution
Flipkart        2846.13          53.90             2904.87
Jabong           202.00           3.83               14.63
Myntra           441.58           8.36               69.93
Snapdeal         830.00          15.72              247.04
Amazon           168.99           3.20               10.24
eBay             107.00           2.03                4.11
Naptol           460.00           8.71               75.88
Yebhi            120.00           2.27                5.16
Yepme             80.00           1.51                2.30
Bewakoof          25.00           0.47                0.22
Total           5280.70         100.00             3334.38

Since 3334.38 > 1800, the market is highly concentrated.

Source: capitaline.com

The HHI can theoretically range from close to zero to 10,000. If a single market participant held 100% of the market share, the HHI would be 10,000; if there were a great number of participants, each with a market share of almost 0%, the HHI would be close to zero.

- When the HHI is less than 100, the market is highly competitive.
- When the HHI is between 100 and 1000, the market is unconcentrated.
- When the HHI is between 1000 and 1800, the market is moderately concentrated.
- When the HHI is above 1800, the market is highly concentrated.

The Herfindahl-Hirschman Index of the e-commerce industry is greater than 1800; hence the market is highly concentrated.
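The calculation above can be sketched in a few lines of Python. This is a minimal illustration, not part of the original report: the sales figures are those in the table, shares are expressed in percent (so the HHI runs from 0 to 10,000), and the threshold labels follow the bands listed above.

```python
# Sales in Rs crore for each e-commerce player, taken from the table above.
sales_cr = {
    "Flipkart": 2846.13, "Jabong": 202.0, "Myntra": 441.58,
    "Snapdeal": 830.0, "Amazon": 168.99, "eBay": 107.0,
    "Naptol": 460.0, "Yebhi": 120.0, "Yepme": 80.0, "Bewakoof": 25.0,
}

def hhi(sales):
    """Return the Herfindahl-Hirschman Index for a mapping of firm -> sales.

    Each firm's market share is computed as a percentage of total sales,
    squared, and summed: HHI = S1^2 + S2^2 + ... + Sn^2.
    """
    total = sum(sales.values())
    shares = [100.0 * v / total for v in sales.values()]  # percent shares
    return sum(s * s for s in shares)

def classify(index):
    """Map an HHI value to the concentration bands listed above."""
    if index > 1800:
        return "highly concentrated"
    if index > 1000:
        return "moderately concentrated"
    if index > 100:
        return "unconcentrated"
    return "highly competitive"

index = hhi(sales_cr)          # about 3334.38 for the figures above
verdict = classify(index)      # "highly concentrated", since 3334.38 > 1800
```

Note that Flipkart's share alone (about 53.9%, contributing roughly 2905 points) pushes the index well past the 1800 threshold, which is why the market reads as highly concentrated despite the number of players.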


CHAPTER 4

COMPETITIVE ANALYSIS


4.1 PORTER'S FIVE FORCE ANALYSIS

4.1.1 Bargaining Power of Buyers

Buyers in this industry are the customers who purchase products online. Since the industry is flooded with players, buyers have many options to choose from, and the huge competition in the e-commerce market works in their favour, as companies have to keep their prices in check to attract them. Customers can choose from a wide range of offline as well as online players, and can always buy from another website or store if they are not satisfied with any one player.

Switching costs are also low, since customers can easily move from one online retail company to another, and the same products are displayed on several online retail websites, so product differentiation is low. All these factors give customers more power than the online retail companies. E-commerce aggregator sites also make it transparent to the consumer which site offers a product at the lowest price.

4.1.2 Bargaining Power of Suppliers

Tens of millions of sellers list their products on e-commerce marketplaces, so their individual bargaining power is limited. However, sellers can also list their products on multiple platforms and sites, including Amazon, Etsy.com and various international e-commerce sites. Hence, if eBay introduces policy and pricing changes that are unsatisfactory to sellers, it could see fewer product listings on its marketplace.

There are relatively few postal and delivery services and shipping carriers, so any pricing change or disruption in their services could hamper eBay's ability to deliver products on time. Hence, these carriers hold some bargaining power.


The sources that generate traffic on an e-commerce site can also be classified as suppliers. Search engines hold significant leverage, as they account for over 20% of traffic on eBay, counting both organic and paid search (according to SimilarWeb estimates). Changes in Google's search engine algorithm have a negative impact on traffic, and eBay's seller marketplace model produces large amounts of unstructured data on the site, which is detrimental to its SEO efforts. Additionally, referring sites such as slickdeals.net and dealnews.com, as well as social networks, bring considerable traffic to e-commerce sites, and any changes in their policies could adversely affect a company's top line and profitability.

In this industry, suppliers are also the manufacturers of finished products, such as Nike, Dell and Apple. Online retail companies sell products ranging from books to computer accessories, apparel and footwear. Since there are many suppliers in any particular category, they cannot exert power over online retail companies. In the computers category, for example, many suppliers such as Dell, Apple, Lenovo and Toshiba want to sell their products through these online retail companies, so none of them can control the retailers. Online customers can select products on their own, and switching costs are effectively zero. It is also very difficult for manufacturers of finished products to enter this industry themselves because of the challenges in logistics. The online retail industry matters to suppliers because it acts as one of their sales channels, and with most online-shopping customers in India purchasing through online retail companies, suppliers cannot afford to lose this channel and cannot dictate terms. So in this industry, supplier power is low.

4.1.3 Competitive Rivalry within the Industry

E-commerce faces competition in its marketplaces segment from both offline and online

players. Customers can buy products from a wide range of retailers, distributors,

auctioneers, directories, search engines, etc and hence the competition is intense.

A Study on Indian E-Commerce Industry

27

Various factors such as price, product selection and services influence the purchasing

decision of customers. E-commerce companies frequently engage in price-based

competition to woo buyers, which limits their ability to raise prices.

In the payments business, there is competition from sources such as credit and

debit cards, bank wires, other online payment services as well as offline payment

mechanisms including cash, check, money order or mobile phones.

Considering the entry of newer players such as Apple Pay and Alibaba, the

competition is expected to heighten in the online payments space.

Competition is very high in this industry, with many players such as Flipkart, Myntra, Jabong, Snapdeal, Amazon, Indiaplaza, Homeshop18, etc.

4.1.4 Threat of New Entrants

Given the nature of the business, there is always a threat of new entrants, as it is relatively less costly to enter the market and set up operations: no additional cost is incurred to set up physical stores and locations. In addition, traditional established physical stores can easily move into online retailing and bring with them their substantial consumer base. Stores such as Target or Wal-Mart already enjoy economies of scale, have recognizable brands and a strong supply chain, so they do pose strong competition to Amazon.

That said, the threat from brand-new entrants remains low, as it would be nearly impossible for a new company to match the cost advantages, economies of scale and variety of offerings of Amazon.com. These advantages will deter most brand-new entrants from the market.

Substantial Economies of Scale

An e-commerce player like Amazon works with over 10,000 vendors and boasts an impressive 75 percent repeat-purchase rate. Its market capitalization is substantially ahead of its nearest competitors.


First Mover Advantage

As pioneer online retailers, Amazon and Flipkart have the necessary brand awareness and credibility as strong, reliable presences in the market.

Massive Product Variety

Way beyond bookstores now, Amazon.com and Flipkart provide almost any type of product in their online stores. This reflects a strong base of supplier relationships that cannot easily be replicated. In addition, as booksellers and providers of other entertainment channels such as movies, videos and music, these e-commerce companies have established relationships with publishers, producers, movie studios and music producers which are not easy to form and replicate.

The e-commerce market is characterized by low barriers to entry. It is relatively easy

for newer players to enter the market and start selling products. Having said that, it’s

difficult for newer players to gain brand recognition and attain high ranking on search

engines.

Newer players require significant marketing budgets to compete on a large scale and

this restricts entry of newer players to an extent.

The online payments market has relatively higher barriers to entry, as there is intense competition between established players; additionally, security is paramount during online payments, and hence newer players that do not have the necessary brand recognition will find it difficult to attract new customers.

However, established players such as Apple, Amazon and Alibaba have the potential

to make a dent in PayPal’s strong market position.

‘ The Indian government is going to allow 51% FDI in multi-brand online retail and 100% FDI in single-brand online retail sooner or later. This means foreign companies can come and start their own online retail operations in India.

‘ There are very few barriers to entry: little money and little infrastructure are required to start a business. All you need is to tie up with suppliers of products, develop a website to display the products so that customers can order them, and tie up with an online payment gateway provider such as BillDesk.

‘ The industry is also going to grow at a rapid rate, touching $76 billion by 2021. With the industry set to experience exponential growth, obviously no one wants to miss this big opportunity.

4.1.5 Threat of Substitute Products

The threat of substitutes for ecommerce is high. The unique characteristic Amazon has is

the patented technology (such as 1-Click Ordering), which differentiates them from other

possible substitutes. However, there are many alternatives providing the same products

and services, which could reduce Amazon’s competitive advantage. Therefore, Amazon

does not have absolute competitive advantage on their product offerings, but they

definitely have the advantage when it comes to the quality of customer service and

convenience provided.

There is no technology in the market so far that can substitute for the Internet. Even the analog signals used to transmit television and radio are not a serious threat. The main substitute that exists is the brick-and-mortar store, and many of these are themselves moving onto the Internet. Therefore, the e-commerce industry faces a low threat of substitution. When we compare the relative quality and relative price of a product bought online with one from a physical store, both are almost the same, and in some cases online discounts are available, which encourages customers to buy products online.


Porter's Five Forces Model of the E-Commerce Industry


CHAPTER 5

MACRO ENVIRONMENTAL

ANALYSIS


PESTLE ANALYSIS

This industry analysis, also known as macro environmental analysis, is done to determine the conditions in which the industry is operating. The analysis of these factors is very important for a company operating in that particular industry, for its growth and sustenance.

5.1 POLITICAL AND LEGAL FACTORS:

E-commerce has introduced many changes for Indian consumers and customers. However, e-commerce in India has also given rise to many disputes from consumers purchasing products on e-commerce websites. In fact, many e-commerce websites do not follow Indian laws at all, and they are also not very fair in dealing with their consumers. Allegations of predatory pricing, tax avoidance, anti-competitive practices, etc. have been levelled against big e-commerce players in India. As a result, disputes are common in India and are not satisfactorily redressed. This reduces confidence in the e-commerce segment, and unsatisfied consumers have little recourse against the big e-commerce players. At a time when we are moving towards global norms for e-commerce business activities, the present e-commerce environment of India needs fine-tuning and regulatory scrutiny. In fact, India is exploring the possibility of regulating e-commerce either through the Telecom Regulatory Authority of India (TRAI) or through different Ministries/Departments of the Central Government in a collective manner.

It is obvious that e-commerce-related issues are not easy to manage. E-commerce dispute resolution is even more difficult and challenging, especially when Indian courts are already overburdened with cases. Of course, the establishment of e-courts in India and the use of online dispute resolution (ODR) are very viable and convincing options before the Indian Government.

Many Indian stakeholders have raised objections about the way e-commerce websites operate in India. These websites provide deep discounts that offline traders and businesses have labeled predatory. Further, Myntra, Flipkart, Amazon, Uber, etc. have already been questioned by the regulatory authorities of India for violating Indian laws.


5.1.1 NEED FOR HARMONIZED TAXATION LAWS:

Laws regulating e-commerce in India are still evolving and lack clarity. A favorable regulatory environment would be key to unleashing the potential of e-commerce and would help bring efficiency in operations, creation of jobs, growth of the industry, and investments in back-end infrastructure. Furthermore, the interpretation of intricate tax norms and complex inter-state taxation rules makes e-commerce operations difficult to manage and to keep compliant with the law. With the wide variety of audiences the e-commerce companies cater to, compliance becomes a serious concern. Companies will need strong anti-corruption programs for sourcing and vendor management, as well as robust compliance frameworks. It is important for e-commerce companies to keep a check at every stage and adhere to the relevant laws, so as to avoid fines. Myntra, Flipkart and many more e-commerce websites are under the regulatory scanner of the Enforcement Directorate (ED) of India for violating Indian laws and policies. There are no taxation laws specific to these websites, so products are being sold at huge discounts on these sites. Among the major reasons taxation laws have not kept pace with e-commerce are that the government does not have proper knowledge of the structure of the industry, of the limitations that should be placed on it, or of the rights that should be given to it for selling products. Security of the information provided during online transactions is another major concern. Under Section 43A of the IT Act, the 'Reasonable practices and procedures and sensitive personal data or information Rules, 2011' have been proposed, which provide a framework for the protection of data in India.


5.2 ECONOMIC FACTORS:

Mass usage of internet:

The usage of the Internet is increasing rapidly in India, which is said to have the third-largest Internet population after the USA and China. India currently has 205 million Internet users, and the total number is expected to reach 330-370 million in just three years. Internet usage in cities has increased rapidly.

Increased aspiration levels and availability

The aspirations of Indian youth and the middle class are rising, and the coming year will be even more promising, both for consumers and for entrepreneurs, with average annual spending on online purchases projected to increase by 67 per cent, from Rs 6,000 to Rs 10,000 per person. In 2014, about 40 million consumers purchased something online, and the number is expected to grow to 65 million by 2015; better infrastructure in terms of logistics, broadband and Internet-ready devices will fuel demand in e-commerce. Smartphone and tablet shoppers will be strong growth drivers: mobile already accounts for 11% of e-commerce sales, and its share will jump to 25% by 2017. Computers and consumer electronics, as well as apparel and accessories, account for the bulk of Indian retail e-commerce, contributing 42% of sales.


Liberal policies (FDI in Retail and Insurance)

The E-commerce Association of India (ECAI) is looking for a positive response from the government on critical reforms such as permitting FDI for the B2C inventory-led model. This has been the industry's demand for a long time, especially as many small and medium-sized e-commerce players face obstacles in getting easy access to capital and technology. The industry has been hoping that the government would at least review a partial opening of the sector to FDI. In the 2015 budget, FDI was increased to 51%, so there has been a positive response from international e-commerce players trying to enter India; even Alibaba.com is trying to enter India with the help of Snapdeal.com. On the other side, small e-commerce companies may face difficulty in the future when the MNC e-commerce companies enter India.

Supply chain and the productivity growth

The most important impacts of e-commerce are on the supply chain and the productivity of products. The buying and selling of goods continues to undergo changes that will have a profound impact on the way companies manage their supply chains. Simply put, e-commerce has altered the practice, timing, and technology of business-to-business and business-to-consumer commerce. It has affected pricing, product availability, transportation patterns, and consumer behavior in India's economy. Business-to-business electronic commerce accounts for the vast majority of total e-commerce sales and plays a leading role in supply chain networks. In 2014, approximately 21 percent of manufacturing sales and 14.6 percent of wholesale sales in India were transacted this way.


From the moment an online order is placed to when it is picked, packed, and shipped, every step in the process must be handled efficiently, consistently, and cost-effectively. In e-commerce, the distribution center provides much of the customer experience. Simply delivering the goods is no longer an adequate mission for the fulfillment center: customer satisfaction has to be a critical priority. The typical e-commerce consumer expects a wide selection of SKU offerings, mobile-site ordering capability, order accuracy, fast and free delivery, and free returns. Understanding how online consumers shop and purchase across channels is critical to the success of online fulfillment. More consumers are browsing the Internet for features and selection, testing products at brick-and-mortar stores, acquiring discounts through social media, and then purchasing the product online through the convenience of their mobile device. Some retailers, including those that also sell through catalogs, have been in the direct-to-consumer marketplace for some time. These companies have fulfillment facilities established and information technologies in place to manage orders with speed and efficiency, doing it well and profitably. But to many distribution executives, online fulfillment poses a significant challenge to their existing knowledge, experience, and resources.

5.3 SOCIAL FACTORS:


Better comfort level and trust in online shopping:

‘ Consumers find it easy to access e-commerce sites, which offer 24×7 support. A customer can transact for a product, or enquire about any product or service provided by a company, anytime and from any location. Here 24×7 means 24 hours a day, seven days a week.

‘ E-Commerce applications provide the user more options and quicker delivery of products.

‘ E-Commerce applications provide the user more options to compare and select the cheaper and better option.

‘ A customer can post review comments about a product, and can see what others are buying or read the review comments of other customers, before making a final purchase.

‘ E-Commerce provides the option of virtual auctions.

‘ Information is readily available: a customer can see relevant, detailed information within seconds rather than waiting for days or weeks.

Advantages to the society:

‘ Customers need not travel to shop for a product, so there is less traffic on the road and lower air pollution.

‘ E-Commerce helps reduce the cost of products, so less affluent people can also afford them.

‘ E-Commerce has enabled access to services and products in rural areas which would otherwise be unavailable there.

‘ E-Commerce helps government deliver public services such as health care, education and social services at reduced cost and in an improved way.

‘ E-Commerce increases competition among organizations, and as a result organizations provide substantial discounts to customers.


5.4 TECHNOLOGICAL FACTORS:

Cloud computing in e commerce:

According to analysts, within 10 years 80% of all computer usage, data storage and e-commerce worldwide will be in the cloud. This is called the third phase of the Internet. During the first phase, software and operating systems were combined to create a simple flow of communication through, for instance, email. The second phase brought the user to the World Wide Web, with access to millions of websites. This increased Internet usage a hundredfold in only two years. In the third phase everything is in the cloud, both data and software.

There are several types of cloud computing, of which Software-as-a-Service (SaaS) is probably the best-known. The others are Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS).

The ability to lower costs, accelerate deployments and respond quickly to market opportunities and challenges is just one of the reasons so many IT leaders are leveraging cloud-based e-commerce applications. Given the variety of solutions, IT leaders must research their options carefully in order to select the one that best meets their needs. Following are the top impacts of cloud computing on e-commerce applications and the steps IT leaders should take during their evaluation process.

It’s easy for business leaders to focus on the benefits of cloud computing without considering

the time and effort involved in implementing a viable solution. However, whatever cloud

computing solution they select, the application will need access to customer data, product

data, fulfilment systems and other operational systems in order to support e-commerce. Cue

the IT team.


Consumerization of the online customer experience requires closer scrutiny of e-commerce:

While many B2C companies use e-commerce platforms for direct sales, B2B organizations are also leveraging them to add transactional capabilities to their informational sites. In addition, the online experience is becoming more 'consumerized', meaning that B2B buyers expect a retail-like customer experience, even when visiting non-retail sites. Cloud solution providers (CSPs) that focus solely on creating retail models are often not well versed in B2B requirements, which can be more complex. As a result, their offerings don't include B2B functions such as easy entry of large and repeat orders, segmented product catalogues based on a client hierarchy and buying privileges, configure-price-quote capabilities, and extended payment terms. IT leaders have an unprecedented number of CSPs from which to choose. However, they need to carefully evaluate ones that have experience meeting their industry-specific needs, whether B2B, B2C, or a combination of both.

Usage of bandwidth for E-commerce:

Transmission capacity of a communication channel is a major barrier for products that require more graphical and video data, so e-commerce companies need higher bandwidth than usual. Bandwidth requirements depend on the number of customers visiting the website, the type of products the company is selling, and the locations from which online users mostly visit the website. Web processing capacity is another key factor in keeping an e-commerce operation running. A further factor is the high cost of developing and purchasing new software, licensing software, integrating it into existing systems, and costly e-business solutions for optimization.

Benefits of using cloud computing for E-commerce:

‘ Trust. Cloud computing enables online store owners to use the same platform and the same functionality. That means that new features can be made available to everyone with a simple modification. Moreover, maintenance is taken care of centrally, which means that store owners can rely on a stable platform.

‘ Cost saving. In many cases this is the most important reason for companies to choose

cloud computing. Since companies do not need to purchase hardware or bandwidth,

costs can be decreased by 80%.

‘ Speed. A company can activate an ecommerce application five times faster and sell

directly through a platform that is managed remotely.

‘ Scalability. Cloud computing makes a company more elastic and able to respond to

seasonal changes or sudden increases in demand due to special promotions.

‘ Security. Many cloud computing suppliers have been certified, so more security can be guaranteed to customers.

‘ Data exchange. The explosive growth of cloud ecommerce will lead to more data

exchange between the clouds. Suppliers will offer more and more possibilities to add

features to their clouds for users, partners and others.


CHAPTER 6

PERFORMANCE ANALYSIS


6. FIRMS UNDER STUDY

‘ Flipkart

‘ Snapdeal

‘ Amazon

Four key metrics have been used to evaluate the performance of e-retailers in India.

1. Gross Margin

2. Subscriber Growth Rate

3. Average Order Size

4. Percentage of Mobile Visits

6.1 Gross Margin (Financial year 2013-2014)

Online shopping in India is growing at a very fast clip. At the same time, there is intense competition in the e-commerce space, especially among the top 3 players: aggressive pricing and discounts are effectively being paid for out of venture capitalists' pockets. Flipkart, Amazon and Snapdeal have all raised investments, or have commitments, of $1 billion or more. This money is being burned to acquire new customers, offer discounts and pump up products on offer. At the same time, these sites are losing money; the quantum of loss these e-commerce players have incurred for every rupee earned is displayed in the figure below.


The revenue figures above are not the price of products sold (GMV): as these are all marketplaces, their revenues come from commissions from sellers or listing fees charged to list products on their sites. GMV, or Gross Merchandise Value, represents the price of products sold, and net revenues are just a fraction of that. Flipkart leads the race with net revenue of Rs. 179 crore, followed by Amazon at Rs. 168.9 crore and Snapdeal at Rs. 154.11 crore. However, when it comes to losses, Flipkart leads by a much bigger margin: its loss for 2013-14 stands at Rs. 400 crore. Comparatively, Amazon's losses are pegged at Rs. 321.3 crore, and Snapdeal had the least losses of the three, at Rs. 264.6 crore.

The figure below shows the loss each player incurs for every rupee in net revenues.


Flipkart leads the race, losing Rs. 2.23 for every 1 rupee of revenue; Amazon loses Rs. 1.90, and Snapdeal has the least losses at Rs. 1.72. This cannot be judged as poor performance by the players, because after a certain time, once they gain a major share of the market, every product they sell would be a profit for them.
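
As a sanity check, the loss-per-rupee figures follow directly from the net revenue and loss numbers quoted above. A minimal sketch (amounts in Rs. crore, as reported for FY 2013-14):

```python
# Loss incurred per rupee of net revenue, FY 2013-14 (figures in Rs. crore).
figures = {
    "Flipkart": (400.0, 179.0),    # (loss, net revenue)
    "Amazon":   (321.3, 168.9),
    "Snapdeal": (264.6, 154.11),
}

for player, (loss, revenue) in figures.items():
    # Rupees lost for every rupee of net revenue earned.
    print(f"{player}: Rs. {loss / revenue:.2f} lost per rupee of revenue")
# Flipkart: Rs. 2.23, Amazon: Rs. 1.90, Snapdeal: Rs. 1.72
```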

6.2 Subscriber Growth Rate

Flipkart was founded in 2007. By the end of 2013 it had acquired 22 million registered users and was handling 5 million shipments every month, while Snapdeal, founded in 2010, had 20 million registered users by the end of 2013. Snapdeal has acquired customers at a quicker pace than Flipkart, but this cannot be considered poor performance by Flipkart, because when it founded e-retail in India, people were not very familiar with e-commerce and online purchasing.
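
The difference in acquisition pace can be made concrete with a back-of-the-envelope calculation; this is a deliberate simplification that assumes users accrued evenly from each company's founding year to the end of 2013:

```python
# Rough customer-acquisition pace implied by the registered-user figures.
flipkart_users, flipkart_years = 22_000_000, 2013 - 2007  # founded 2007
snapdeal_users, snapdeal_years = 20_000_000, 2013 - 2010  # founded 2010

flipkart_pace = flipkart_users / flipkart_years  # ~3.7 million users/year
snapdeal_pace = snapdeal_users / snapdeal_years  # ~6.7 million users/year
print(f"Flipkart: {flipkart_pace / 1e6:.1f}M/yr, Snapdeal: {snapdeal_pace / 1e6:.1f}M/yr")
```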

6.3 Average ordering size of Flipkart for Financial year 2011-2012

Flipkart hit a milestone, clocking Rs 100 crore in gross merchandise value shipped in a month for the first time in June 2012, a jump from an average of around Rs 42 crore per month in financial year 2010-11. Flipkart had clocked Rs 500 crore for the 12 months ended March 31, 2012. In 2012 the number of daily orders hit the 25,000 mark (or seventeen orders per minute), a five-fold rise after the company clocked 5,000 orders a day for the first time in May 2011. Flipkart first clocked 1,000 orders a day in March 2010.

Average order size = Total revenue / Number of orders

= Rs. 5,000,000,000 / (25,000 × 365)

≈ Rs. 548
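
The Rs. 548 figure can be checked with a quick calculation. It carries the report's own simplifying assumption that the 25,000-orders-per-day rate, first reached in mid-2012, held for a full year of Rs. 500 crore revenue:

```python
# Reproduce the report's average-order-size estimate for Flipkart.
total_revenue = 5_000_000_000     # Rs. 500 crore for the year ended March 2012
orders_per_year = 25_000 * 365    # daily order rate x days in a year

average_order_size = total_revenue / orders_per_year
print(round(average_order_size))  # 548
```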


6.4 Percentage of Mobile Visits

Mobile is now one of the most strategic channels for driving revenue and customer acquisition. The e-retailers are investing to build strong technology and marketing platforms that will allow them to accelerate their growth on mobile. Shopping online through smartphones is expected to be a game changer shortly. In 2014 there were nearly 123 million smartphone users in India. The affordability of smartphones is driving growth in mobile Internet use, generating a fresh consumer base for the online players. Mobile Internet users in India are estimated at 120 million, compared to 100 million using the Internet on their personal computers.

Snapdeal

60 percent of Snapdeal's orders were coming over mobile by the end of 2014, and the share is growing fast: Snapdeal gets more traffic from mobile than from personal computers.

Flipkart

A year ago, less than 10 percent of Flipkart's orders, transactions and visits came from mobile commerce. Now those numbers are greater than 50 percent, and the shift is accelerating at a very rapid pace. Flipkart is seeing two to three times more growth on mobile than on desktop: Flipkart is growing overall, but mobile is growing at a much faster pace.


6.5 Top 10 Indian E-Commerce Sites Traffic Comparison

Stats have been taken for the month of April 2014.

Flipkart topped the charts with over 62 million visits in the month of April, with Myntra coming in a shade lower at 59.5 million. Given that the two have now come together, purely on the basis of traffic they clock more than the remaining 8 players combined. While we expected either Amazon or Snapdeal to grab 3rd place in terms of traffic, Jabong took the bronze position with 42.5 million visitors, followed by Snapdeal (31.4 million). Amazon.in clocked a respectable 27.6 million visits in the month of April (remember, they have not even completed a year since launch). Also, if you include Junglee, which is owned by Amazon, their traffic bulges to close to 40 million visits. Infibeam and Tradus have not been doing too well in terms of traffic, clocking 3.4 million and 3 million visits respectively. Also, according to SimilarWeb, their traffic has been steadily dropping over the last 6 months; both had close to 5 million visits at the start of the year.


Stats have been taken for the month of April 2014.

When it came to user engagement, Flipkart again reigned supreme, with each visitor spending an average of 8:35 minutes per visit. eBay also had very high levels of engagement at 8:15 minutes, followed by Snapdeal (7:49 minutes). Myntra had a surprisingly low (in fact the lowest) visitor time spent, at 3:04 minutes; Junglee and Jabong were the other two sites with low visitor time spent. Given that Flipkart had the highest time spent by visitors, it also got the maximum page views per visitor (8.53), followed by eBay (8.04). Surprisingly, Tradus did quite well in terms of page views, with an average of 7.57 views per visit.

6.6 Conclusion

The e-commerce industry in India is in its blooming stage now. E-commerce, including online retail, in India constitutes a small fraction of total sales, but is set to grow substantially owing to factors such as rising disposable incomes, rapid urbanization, increasing adoption and penetration of technology such as the Internet and mobiles, a rising youth population, and the increasing cost of running offline stores across the country.


REFERENCES

‘ https://www.drivingbusinessonline.com.au/articles/5-examples-of-great-e-commerce-sites/

‘ http://www.jeffbullas.com/2009/09/01/5-case-studies-on-companies-that-win-at-social-media-and-ecommerce/

‘ http://www.atuljain7.com/consumer-centric-e-commerce-business-models-in-india

MACRO ENVIRONMENTAL ANALYSIS

‘ https://www.academia.edu/3832983/Cloud_Computing_and_E-commerce

‘ http://www.bertramwelink.com/index.php/cloud-computing-taking-over-ecommerce-market/

‘ http://www.maaspros.com/blog/much-bandwidth-need-ecommerce-website

‘ http://www.netmagicsolutions.com/resources/case-study-flipkart

‘ http://www.shopify.in/tour/ecommerce-hosting

‘ http://www.tutorialspoint.com/e_commerce/e_commerce_advantages.htm

‘ http://www.supplychainquarterly.com/columns/scq201102monetarymatters/

‘ http://www.inboundlogistics.com/cms/article/maximizing-productivity-in-e-commerce-warehousing-and-distribution-operations/

‘ http://www.business-standard.com/article/news-cm/e-commerce-to-fire-consumer-aspiration-higher-in-2015-assocham-pwc-study-114122900231_1.html

‘ http://indiamicrofinance.com/ecommerce-business-india-2014-2015-report-pdf.html

‘ http://www.studymode.com/subjects/political-issues-in-e-commerce-page1.html

‘ http://ecommercelawsinindia.blogspot.in/

COMPETITIVE ANALYSIS

‘ http://www.entrepreneurial-insights.com/threat-of-new-entrants-porters-five-forces-model/

‘ http://www.forbes.com/sites/greatspeculations/2014/11/24/ebay-through-the-lens-of-porters-five-forces/

MARKET SHARE VALUE

‘ http://www.business-standard.com/article/companies/jabongs-revenue-rose-50-times-in-fy13-114112001047_1.html

‘ http://www.business-standard.com/article/companies/snapdeal-raises-100-mn-eyes-1-bn-revenue-this-year-114052100665_1.html

‘ https://www.google.co.in/search?q=naaptol+sales+revenue&rlz=1C1GGGE_enIN618IN618&oq=naaptol+sales+revenue&aqs=chrome..69i57.8943j0j4

‘ http://articles.economictimes.indiatimes.com/2014-12-16/news/57112180_1_amazon-india-ebay-india-latif-nathani

PERFORMANCE ANALYSIS

‘ http://blog.bigcommerce.com/7-key-ecommerce-metrics/

‘ http://trak.in/tags/business/2014/11/06/flipkart-amazon-snapdeal-revenues-losses-comparison/

‘ http://www.flipkart.com/

‘ http://www.snapdeal.com/

‘ http://trak.in/tags/business/2014/06/04/top-10-indian-e-commerce-sites-comparison/

‘ http://techcircle.vccircle.com/2012/07/03/excl-flipkart-hits-rs-100cr-monthly-sales-mark-now-serving-seven-orders-per-minute/

‘ http://www.medianama.com/2014/05/223-snapdeal-mobile-transactions/

‘ http://www.iamwire.com/2014/12/myntra-set-mobile-only-company-2015/107014

‘ http://gadgets.ndtv.com/mobiles/news/m-commerce-to-contribute-up-to-70-percent-of-online-shopping-experts-628106

GLOBAL AND INDIAN SCENARIO

‘ http://dipp.nic.in/English/Discuss_paper/Discussion_paper_ecommerce_07012014.pdf

‘ http://www.pwc.in/assets/pdfs/publications/2014/evolution-of-e-commerce-in-india.pdf

‘ http://www.iamwire.com/2015/01/e-commerce-vs-indian-e-commerce-identifying-missing-pieces/108066

ELECTRONIC BANKING

DEFINITION OF ELECTRONIC BANKING

The term electronic banking means all-day access to cash through an Automated Teller Machine (ATM) or the direct deposit of paychecks into a checking or savings account. But electronic banking involves many different types of transactions, rights, responsibilities and, sometimes, fees. In its simplest form, electronic banking can mean the provision of information or services by a bank to its customers via a computer, television, telephone, or mobile phone.

ORIGIN OF ELECTRONIC BANKING IN NIGERIA

The Structural Adjustment Programme (SAP) introduced in 1986 during the Babangida regime brought to an end the kind of banking services rendered by the first generation of banks in Nigeria. The SAP changed the content of banking business. As the SAP licensed more banks, their number increased from 40 in 1985 to 125 in 1991; the new entrants posed a greater threat to existing banks, and the marketing techniques adopted became more aggressive. In the process of competing among themselves, banks adopted electronic banking in order to maintain a good competitive position.

EVOLUTION OF ELECTRONIC BANKING IN NIGERIA

Banking has come a very long way, from the period of ledger cards and other manual filing systems to the computer age. Computerization in the Nigerian banking industry was first introduced in the 1970s by Societe Generale Bank (Nigeria) Limited. Until the mid-1990s, the few banks that were computerised made use of Local Area Networks (LANs) within the banks. The more sophisticated among them then implemented Wide Area Networks (WANs) by linking branches within cities, while one or two implemented intercity connectivity using leased lines (Salawu and Salawu, 2007).

Banks have applied technology to their operations and have advanced from very simple and basic retail operations of deposits, cash withdrawal and cheque processing to the delivery of sophisticated products, a result of keen competition in view of unprecedented increases in the number of banks and branches. There was a need to modernize banking operations in the face of increased market pressure and customer demand for improved service delivery and convenience. According to Sanusi (2002), as cited by Dogarawa (2005), the introduction of e-banking (e-payment) products in Nigeria commenced in 1996, when the CBN granted All States Trust Bank approval to introduce a closed-system electronic purse. It was followed in February 1997 by the introduction of a similar product, called ‘Pay card’, by Diamond Bank.

The CBN additionally gave permission to a number of banks to introduce international money transfer products, on-line banking via the internet, and telephone banking, though on a limited scale. Some banks also deployed Automated Teller Machines (ATMs) to facilitate card usage and enhance their service delivery. Today, nearly all banks in Nigeria operate a website. The service of ordering bank drafts or certified cheques made payable to third parties has also been increasingly automated (Irechukwu, 2000).

CHANNELS OF ELECTRONIC BANKING PRODUCT IN NIGERIA

The revolution in the Nigerian banking system led to an increase in paid-up capital from N2 billion to N25 billion, effective from 1st of January 2006. This resulted in the liquidation of weak banks in Nigeria that could not find merger partners. The revolution brought about changes to banking operations in Nigeria, with aggressive competition among the various banks. Each bank came up with new products, repackaged old ones and devised more efficient service delivery strategies. This more efficient service delivery was made possible through investment in information and communication technology (ICT) (Sanni, 2009). The huge investment in ICT has been the backbone of electronic banking, which uses several different distribution channels. It should be noted that electronic banking is not just banking via the Internet; the term can be described in many ways.

PC BANKING

A personal computer allows the customer to use all e-banking facilities at home without going to the bank. It gives consumers a variety of services, so that they are able to move money between accounts, pay their bills, check their account balances, and buy and sell goods.

MOBILE BANKING

Mobile phones are widely used for financial services in Nigeria. Banks enable their customers to conduct banking services such as account inquiry and funds transfer through the mobile telephone.

AUTOMATED TELLER MACHINE

This is an electronic device provided by banks which allows customers to withdraw cash and check their account balances at any time of the day without the need for a human teller. Many ATMs also allow people to transfer money between their bank accounts or even buy postage stamps. To withdraw cash, make deposits, or transfer funds between accounts, the customer inserts an ATM card and enters a Personal Identification Number (PIN). Some ATMs impose a usage fee on consumers who are not members of their institution; the ATM must make the customer aware of this fee, either on the terminal screen or on a sign next to it. If an ATM card is lost or stolen, the holder should notify the bank.
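As a rough illustration of the flow just described (card in, PIN check, balance inquiry, withdrawal with a disclosed usage fee), the following Python sketch models a single machine. The account number, PIN, and fee amount are hypothetical and not drawn from any real banking system.

```python
class ATM:
    """Minimal ATM simulation: PIN check, balance inquiry, and withdrawal
    with an optional usage fee for non-member cardholders."""

    def __init__(self, accounts, usage_fee=0.0):
        # accounts maps a card number -> {"pin": ..., "balance": ...}
        self.accounts = accounts
        self.usage_fee = usage_fee  # would be disclosed on the terminal screen

    def authenticate(self, card, pin):
        acct = self.accounts.get(card)
        return acct is not None and acct["pin"] == pin

    def balance(self, card):
        return self.accounts[card]["balance"]

    def withdraw(self, card, pin, amount):
        if not self.authenticate(card, pin):
            raise PermissionError("Invalid card or PIN")
        total = amount + self.usage_fee
        if total > self.accounts[card]["balance"]:
            raise ValueError("Insufficient funds")
        self.accounts[card]["balance"] -= total
        return amount


atm = ATM({"5501": {"pin": "1234", "balance": 500.0}}, usage_fee=1.0)
cash = atm.withdraw("5501", "1234", 100.0)
print(cash, atm.balance("5501"))  # 100.0 399.0
```

Note that the fee is added to the debit rather than hidden from the customer, mirroring the disclosure requirement mentioned above.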

SMART CARDS

This involves conducting banking transactions through the use of electronic cards (Value Card, ATM Card, Debit Card, Credit Card, etc.). It makes it easy for bank customers to have access to cash, carry out transfers and make enquiries about their account balances without visiting the banking hall.

(i) Credit cards: These are plastic cards encoded with electromagnetic identification. Credit cards allow their holders to make purchases without any immediate cash payment. The credit limit is fixed by the issuing bank based on the financial history of the user.

(ii) Debit cards: A debit card, in contrast with a credit card, is an instrument which enables an immediate charge or debit to the cardholder’s account for the sale of goods and services made to him or her; in other words, the holder is spending the balance standing in his or her deposit account.

POINT OF SALE TERMINAL (POS)

A Point of Sale (POS) terminal is a machine used to accept cards for payment of goods and services. A POS terminal allows the cardholder to have real-time online access to funds and information in his or her bank account through the use of a debit or cash card.

TRANSACTION ALERT

Customers carry out debit or credit transactions on their accounts on a daily basis, and the need to keep track of those transactions prompted banks to create the alert system, which notifies customers of each transaction as it takes place. The alert system also reaches out to customers whenever necessary information needs to be communicated.
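The alert mechanism described above can be sketched in a few lines; here `send_alert` is a hypothetical stand-in for a real SMS or e-mail gateway, and the phone number and amounts are invented for illustration.

```python
def send_alert(phone, message):
    """Stand-in for a messaging gateway; a real bank would call an SMS/e-mail API here."""
    print(f"ALERT to {phone}: {message}")


def post_transaction(account, kind, amount, phone):
    """Apply a debit or credit to an account and notify the customer immediately."""
    if kind == "debit":
        account["balance"] -= amount
    elif kind == "credit":
        account["balance"] += amount
    else:
        raise ValueError("kind must be 'debit' or 'credit'")
    # The alert is fired as part of posting, so the customer hears about
    # the transaction at the moment it takes place.
    send_alert(phone, f"{kind.upper()} of N{amount:,.2f}; new balance N{account['balance']:,.2f}")
    return account["balance"]


acct = {"balance": 10_000.00}
post_transaction(acct, "credit", 2_500.00, "+234-800-000-0000")
post_transaction(acct, "debit", 1_000.00, "+234-800-000-0000")
```

The key design point is that notification is coupled to the posting step itself, rather than being a separate batch process, which is what lets customers track their accounts in real time.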

ELECTRONIC DATA INTERCHANGE (EDI)

The transfer of information between organizations in machine-readable form.
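As a minimal illustration of “machine-readable form”, the sketch below serializes a hypothetical interbank record with Python’s standard `json` module. The field names are invented for illustration and do not reflect any actual EDI standard used by Nigerian banks.

```python
import json

# A hypothetical interbank remittance record; field names are illustrative only.
record = {
    "message_type": "REMITTANCE",
    "sender_bank": "BANK-A",
    "receiver_bank": "BANK-B",
    "amount": "250000.00",
    "currency": "NGN",
    "reference": "TRX-0001",
}

wire = json.dumps(record)    # serialize to a machine-readable string for transmission
received = json.loads(wire)  # the receiving organization parses it back automatically
assert received == record    # no human re-keying is needed at either end
```

The point of EDI is exactly this round trip: because both organizations agree on a structured format, the receiving system can process the record without manual intervention.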

INTERNET BANKING

Internet banking permits bank customers to conduct transactions on their accounts from any location across the world, such as making enquiries and paying bills, with a speedy response through the web and online e-mail systems.

ELECTRONIC CHEQUE

This allows users of the internet to pay their bills directly over the internet without having to send a paper cheque.

BENEFIT OF ELECTRONIC BANKING

Electronic banking is important to both customers and banks in various ways.

To banks:

(1) Improved customer service: Electronic banking allows banks to provide new, faster and better services to their customers, thereby bringing the banks up to international standards and enhancing competition among banks.

(2) Reliability of transactions: When transactions are done manually, errors are prone to happen, but electronic banking helps to ensure accurate and timely transactions.

(3) Safety: Electronic banking ensures the safety of banks’ dealings with their customers. Unsafe banking practices can cause huge losses to a bank, especially in the case of misrepresentation of account owners.

(4) Reduction in workload: With the introduction of electronic banking, the workload of the bank has reduced, as more people conduct their various transactions electronically rather than coming to the bank.

(5) Information: Electronic banking makes it easy for banks to convey information about their services to customers through the internet; banks can also easily communicate with customers through electronic mail.

These are some of the services provided by banks to customers. Banks can also easily provide statements of account to their customers by sending them to their e-mail addresses, which makes things more comfortable for the customers.

To customers:

Electronic banking provides various benefits to customers, such as:

(1) Availability of cash: Electronic banking makes it easy for customers to get cash from their accounts any time they need it, through the use of the Automated Teller Machine (ATM).

(2) Stress relief: Since transactions can be done anywhere through the use of electronic banking, customers are more comfortable.

(3) Payment of bills: It is easy for customers to pay their bills, such as PHCN (Power Holding Company of Nigeria) bills, or payment for a DSTV card when it has expired. This is possible because banks have provided various means for these payments, such as Quick teller.

(4) Access to information: Bank customers can easily get information from their banks about the provision of new products or about a problem that has occurred.

(5) Other benefits: Increased convenience for customers, more service options, reduced risk of cash-related crimes, cheaper access to banking services and access to credit.

REASONS FOR AUTOMATION OF BANKING OPERATION

According to Idowu (2005), the following are the reasons for the adoption of e-banking in Nigeria:

(a) To the bank

(1) Facilitation for easy decision making

(2) Availability of quality information

(3) Improve in service delivery

(4) Development of new product

(5) Savings in space and running costs

(6) Relevance among the league of global financial institutions.

(b) To the customer

(1) The quality of service they enjoy

(2) Reduction in time being spent at banking halls

(3) Confidentiality

(4) Statements of account obtained easily

(5) 24 hours service delivery.

(6) Customer accounts can be accessed almost anywhere in the world

(c) To the economy

(1) Creation of jobs

(2) Improvement in commerce

(3) Development in technology

(4) Data bank for National planning

CHALLENGES OF E-BANKING IN NIGERIA

Some of the problems facing electronic banking in Nigeria are;

(1) MONEY LAUNDERING: Money laundering can be defined as the “washing” of money derived from illicit activities, especially drug trafficking, advance fee fraud and other forms of illegal activity. Developments in electronic banking make it possible to transact business electronically, which can be used to launder money.

(2) FRAUD: Fraud literally means a conscious and deliberate action by a person or group of persons with the intention of altering the truth or facts for selfish personal gain. The high exposure of the system to fraudsters and other criminally minded persons, who could access confidential information from the system if security measures are too weak to protect personal files, is a challenge of electronic banking.

(3) CONSUMER PROTECTION: Another problem of electronic banking is the absence of a regulatory body to protect the consumers of the products or services.

(4) SYSTEMS OPERATIONAL RISKS: Banks rely on electronic banking to conduct business, which exposes them to the risk of system failure.

(5) POOR NETWORK: Bad network coverage is a major challenge facing electronic banking in Nigeria. A poor network can lead to the inability to withdraw money from the Automated Teller Machine (ATM), or to the failure to send an alert to a customer when money has been deposited into or deducted from his account.

(6) LITERACY ISSUES: This refers to a situation where not all of the targeted people are educated, and some do not know how to make use of electronic banking. For instance, a dubious businessman may see a customer finding it difficult to operate the POS (Point of Sale) terminal and decide to deduct more than what the person consumed.

THREATS OF CYBER-CRIMES ON THE NIGERIAN BANKING PREMISES

Fraud by so-called “419” fraudsters, one of the most popular of all internet frauds, has its origin in Nigeria in the 1980s. Its development and spread began with the developments in information technology at its inception. In the early 1990s it became integrated with telecommunication facilities such as fax and telephone, and from the late 1990s, following the introduction of the internet and computers, 419 crimes became prevalently perpetrated through e-mail and other internet means (Amedu, 2005). The latest dimension taken by these fraudsters is the use of fake internet banking sites, which they use to encourage victims to open accounts with them. These issues pose fundamental problems for electronic banking, touching on confidentiality, integrity and availability.

Several factors are responsible for the above situation. They include the weakness of the judicial institutions in making and enforcing laws on cyber-crimes; inordinate tolerance for corruption among the Nigerian public and government agencies; unemployment among graduates; and the gap between the rich and the poor caused mainly by bad governance. In the main, the erosion of good value principles and corruption constitute the greatest causes of rising cyber-crime among Nigerians (Amedu, 2005).

CUSTOMER SATISFACTION

Jamal (2003) defined customer satisfaction as the meeting of one’s expectations relating to the product used; these are the sentiments and feelings of the customer about the product. Previous studies (Schultz and Good, 2000; Churchill and Surprenant, 1982; Patterson, 1993) agreed that service performance has a direct impact on customer satisfaction. They believed that the interaction between the organization and the customer plays a key role in organizational success or failure, and that customer satisfaction is a critical performance indicator. File and Prince (1992) explained that satisfied customers will be loyal to the organization and will tell others about their favourable experience, thereby leading to positive word-of-mouth advertising. Sahereh et al. (2013) identified ten (10) factors influencing satisfaction, as follows:

(1) Proper and friendly behaviour: Being polite and friendly to customers will definitely generate more customers and will strengthen the customers’ relationship with the bank. Friendly service is a necessary condition for the development of activities and leaves a good impression of the bank.

(2) Speed in the delivery of services: Anything that helps customers reach their goals earlier contributes to their satisfaction.

(3) Accuracy in providing services: This factor aims to minimize the error rate and improve the quality of work to standards acceptable to the people, so as to earn the trust and confidence of customers and increase their satisfaction.

(4) Standard-oriented service: If customers can be sure that favouritism does not rule, and that the facilities they request are provided based on standards and criteria, their trust is not eroded and they will not be disappointed.

(5) Interest on deposits: Without doubt, depositors pay attention to the real rate of interest, which should take inflation and other costs carefully into account.

(6) Secrecy: Bank customers expect that bank personnel, in maintaining statements of account and handling other financial issues, will not disclose their account details to anybody, even their closest relatives.

(7) Skills of personnel: Based on the research done, the necessary conditions for an employment post include agility, speed in the work, balance, and similar abilities.

(8) Guiding customers and presenting necessary, helpful information: The right guidance on how customers can use a service will speed up the work and increase customer satisfaction.

(9) Discipline: Discipline is a very important feature in all aspects of human life. Discipline leads to focus on the work and a higher level of service delivery.

(10) Ease of access to services: When customers can easily access most services, the result is greater customer satisfaction.

BANK CUSTOMER RELATIONSHIP

The bank-customer relationship is a special contract in which one person (the customer) entrusts valuable items to another (the bank) with the intention that such items shall be retrieved on demand. The banker is the one entrusted with the valuable items, while the person who entrusts the items with a view to retrieving them on demand is called the customer.

The relationship between the bank and the customer is based on contract, governed by certain terms and conditions. For instance, the customer has the right to collect his money on demand, personally or by proxy, and the banker is under an obligation to pay, so long as the proxy is duly authorized by the customer. The terms and conditions governing the relationship should not be leaked to a third party, particularly by the banker. Likewise, the items kept should not be released to a third party without authorization by the customer.

A key issue here is how to handle the rising level of frauds prevalent in the entire banking system, and how to make the Internet banking fit so well in the banking structure of a country.

GUIDELINES ON ELECTRONIC BANKING IN NIGERIA

TECHNOLOGY AND SECURITY STANDARDS

The CBN will monitor the technology acquisitions of banks, and all investments in technology which exceed 10% of free funds will henceforth be subject to approval. Where banks use third parties or outsource technology, they are required to comply with the CBN guidelines.

STANDARDS FOR COMPUTER NETWORKS & INTERNET

(a) Networks used for transmission of financial data must be demonstrated to meet the requirements specified for data confidentiality and integrity.

(b) Banks are required to deploy a proxy type firewall to prevent a direct connection between the banks back end systems and the Internet.

(c) Banks are required to ensure that the implementation of the firewalls addresses the security concerns for which they are deployed.

(d) For dial up services, banks must ensure that the modems do not circumvent the firewalls to prevent direct connection to the bank’s back end system.

(e) External devices such as Automated Teller Machines (ATMs), Personal Computers, (PC’s) at remote branches, kiosks, etc. permanently connected to the bank’s network and passing through the firewall must at the minimum address issues relating to non-repudiation, data integrity and confidentiality. Banks may consider authentication via Media Access Control (MAC) address in addition to other methods.

(f) Banks are required to implement proper physical access controls over all network infrastructures both internal and external.

STANDARDS ON PROTOCOLS

Banks must take additional steps to ensure that, while the web provides global access to data and enables real-time connectivity to the bank’s back-end systems, adequate measures are in place to identify and authenticate authorized users while limiting access to data as defined by the Access Control List.

Banks are required to ensure that unnecessary services and ports are disabled.

Standards on Application and System Software

(a) Electronic banking applications must support centralized (bank-wide) operations or branch-level automation. They may have a distributed, client-server or three-tier architecture based on a file system or a Database Management System (DBMS) package. Moreover, the product may run on computer systems of various types, ranging from PCs and open systems to proprietary mainframes.

(b) Banks must be mindful of the limitations of communications for server/client-based architecture in an environment where multiple servers may be more appropriate.

(c) Banks must ensure that their banking applications interface with a number of external sources. Banks must ensure that applications deployed can support these external sources (interface specification or other CBN provided interfaces) or provide the option to incorporate these interfaces at a later date.

(d) A schedule of minimum data interchange specifications will be provided by the CBN.

(e) Banks must ensure continued support for their banking application in the event the supplier goes out of business or is unable to provide service. Banks should ensure that at a minimum, the purchase agreement makes provision for this possibility.

(f) The bank’s information system (IS) infrastructure must be properly physically secured. Banks are required to develop policies setting out minimum standards of physical security.

(g) Banks are required to identify an ICT compliance officer whose responsibilities should include compliance with standards contained in these guidelines as well as the bank’s policies on ICT.

(h) Banks should segregate the responsibilities of the Information Technology (IT) security officer / group which deals with information systems security from the IT division, which implements the computer systems

STANDARD ON DELIVERY CHANNELS

Mobile Telephony: Mobile phones are increasingly being used for financial services in Nigeria. Banks are enabling the customers to conduct some banking services such as account inquiry and funds transfer. Therefore the following guidelines apply:

(a) Networks used for transmission of financial data must be demonstrated to meet the requirements specified for data confidentiality, integrity and non- repudiation.

(b) An audit trail of individual transactions must be kept.

Automated Teller Machines (ATM): In addition to guidelines on e-banking in general, the following specific guidelines apply to ATMs:

(a) Networks used for transmission of ATM transactions must be demonstrated to meet the guidelines specified for data confidentiality and integrity.

(b) In view of the demonstrated weaknesses in the magnetic stripe technology, banks should adopt the chip (smart card) technology as the standard, within 5 years. For banks that have not deployed ATMs, the expectation is that chip based ATMs would be deployed. However, in view of the fact that most countries are still in the magnetic stripe conversion process, banks may deploy hybrid (both chip and magnetic stripe) card readers to enable the international cards that are still primarily magnetic stripe to be used on the ATMs.

(c) Banks will be considered liable for fraud arising from card skimming and counterfeiting except where it is proven that the merchant is negligent. However, the cardholder will be liable for frauds arising from PIN misuse.

(d) Banks are encouraged to join shared ATM networks.

(e) Banks are required to display clearly on the ATM machines, the Acceptance Mark of the cards usable on the machine.

(f) All ATMs not located within bank premises must be located in a manner to assure the safety of the customer using the ATM. Appropriate lighting must be available at all times and a mirror may be placed around the ATM to enable the individual using the ATM to determine the locations of persons in their immediate vicinity.

(g) ATMs must be situated in such a manner that passers-by cannot see the key entry of the individual at the ATM directly or using the security devices.

(h) ATMs may not be placed outside buildings unless such ATM is bolted to the floor and surrounded by structures to prevent removal.

(I) Additional precaution must be taken to ensure that any network connectivity from the ATM to the bank or switch are protected to prevent the connection of other devices to the network point.

(j) Non-bank institutions may own ATMs; however, such institutions must enter into an agreement with a bank for the processing of all the transactions at the ATM. If an ATM is owned by a non-bank institution, the processing bank must ensure that the card readers, as well as other devices that capture or store information on the ATM, do not expose information such as the PIN or other information classified as confidential. The funding (cash in the ATM) and operation of the ATM should be the sole responsibility of the bank.

(k) Where the owner of the ATM is a financial institution, such owner must also ensure that the card reader, as well as other devices that capture information on the ATM, does not expose or store information such as the PIN or other information classified as confidential to the owner of the ATM.

(l) ATMs at bank branches should be situated in such a manner as to permit access at reasonable times. Access to these ATMs should be controlled and secured so that customers can safely use them within the hours of operation. Deployers are to take adequate security steps according to each situation, subject to adequate observance of standard security policies.

(m) Banks are encouraged to install cameras at ATM locations. However, such cameras should not be able to record the keystrokes of such customers.

(n) At the minimum, a telephone line should be dedicated for fault reporting, and such a number shall be made known to users to report any incident at the ATM. Such facility must be manned at all times the ATM is operational.

INTERNET BANKING

Banks should put in place procedures for maintaining the bank’s Web site which should ensure the following:-

(a) Only authorized staff should be allowed to update or change information on the Web site.

(b) Updates of critical information should be subject to dual verification (e.g. interest rates)

(c) Web site information and links to other Web sites should be verified for accuracy and functionality.

(d) Management should implement procedures to verify the accuracy and content of any financial planning software, calculators, and other interactive programs available to customers on an Internet Web site or other electronic banking service.

(e) Links to external Web sites should include a disclaimer that the customer is leaving the bank’s site and provide appropriate disclosures, such as noting the extent, if any, of the bank’s liability for transactions or information provided at other sites.

(f) Banks must ensure that the Internet Service Provider (ISP) has implemented a firewall to protect the bank’s Web site where outsourced.

(g) Banks should ensure that installed firewalls are properly configured and institute procedures for continued monitoring and maintenance arrangements are in place.

(h) Banks should ensure that summary-level reports showing web-site usage, transaction volume, system problem logs, and transaction exception reports are made available to the bank by the Web administrator.

LEGAL ISSUES

(a) Banks are obliged not only to establish the identity of their Customers (KYC principle) but also enquire about their integrity and reputation. To this end, accounts should be opened only after proper introduction and physical verification of the identity of the customer.

(b) Digital signature should not be relied on solely as evidence in e-banking transactions, as there is presently no legislation on electronic banking in Nigeria

(c) There is an obligation on banks to maintain secrecy and confidentiality of customer’s accounts. In e-banking scenario, there is the risk of banks not meeting the above obligation. Banks may be exposed to enhanced risk of liability to customers on account of breach of secrecy, denial of service etc. because of hacking /other technological failures. Banks should, therefore, institute adequate risk control measures to manage such risks.

(d) Banks should protect the privacy of the customer’s data by ensuring:

(1) That customers’ personal data are used for the purpose for which they are compiled.

(2) That the consent of the customer is sought before the data are used.

(3) Data user may request, free of cost for blocking or rectification of inaccurate data or enforce remedy against breach of confidentiality

(4) Processing of children’s data must have the consent of the parents and there must be verification via regular mail.

(5) Strict criminal and pecuniary sanctions are imposed in the event of default.

(e) In e-banking, there is very little scope for the banks to act on stop payment instructions from the customers. Hence, banks should clearly notify the customers the time frame and the circumstances in which any stop-payment instructions could be accepted.

(f) While recognizing the rights of consumers under the Nigerian Consumer Protection Council Act, which also apply to consumers in banking services generally, banks engaged in e-banking should endeavour to insure themselves against risks of unauthorized transfers from customers account’s, through hacking, denial of services on account of technological failure etc., to adequately insulate themselves from liability to the customers.

(g) Agreements reached between providers and users of e-banking products and services should clearly state the responsibilities and liabilities of all parties involved in the transactions.

12 Years a Slave

According to Drew Faust, author of Culture, Conflict, and Community, there was a slave owner named James Henry Hammond who did not really know how to control his slaves. He had married into the position, and did not know what to do or how to command them. He began to listen to his friends, who had suggested: ‘Be kind to them make them feel an obligation and by all means keep all other Negros away from the place, and make yours stay at home- and raise their church to the ground-keep them from fanaticism for God’s sake and your own.’ So he did just that. He began to tear down their churches so that they would assimilate into the white churches and, he hoped, give up their own worship. The slaves became less religious for quite some time, but then began to rise up again, acting lazy and defiant because of the lack of authority. ‘The slaves, accustomed to a far less rigorous system of management, resented his attempts and tried to undermine his drive for efficiency.’ Because of the disobedience he began to punish them severely, constantly beating them senseless if they did not follow orders. That was the norm among most slave owners: they would casually beat their slaves for disobeying their master, and sometimes just for the hell of it. This is evidently clear in 12 Years a Slave.

In 12 Years a Slave, an African American man named Solomon Northup is a free man in the North, living in New York with his wife and two children. He is a savvy violinist and is approached by two individuals asking if he wants to perform for a circus they are opening in Washington, which would pay greatly. He agrees, but is drugged and sold into the South under the name Platt, a runaway slave from Georgia. He is sold to one plantation but is later sent to another, for a reason stated below. On this second plantation, his owner Edwin Epps is not the nicest slave owner; he was actually known for being incredibly cruel to those who disobeyed his orders. He interprets the Bible as saying that slaves who disobeyed their master should get 100 lashes if necessary. Epps had the slaves pick cotton; the average picked was 200 pounds, and whoever did not meet the average would get whipped. Northup would usually not meet the quota, so he would usually take his share of those lashings. Epps would lash out at slaves when he did not get what he wanted, and his wife would also beat one of the slaves out of jealousy towards her.

Now of course, not all slave owners were so cruel in the way they treated their slaves. According to Faust, before Hammond took the course of beating the slaves for their disobedience, he began to give them some of what they asked for. After he took away their churches and they failed to join the white churches, he became more lenient and allowed a traveling minister to hold services just for slaves: ‘For a number of years he hired itinerant ministers for Sunday afternoon slave services.’ He also applied a system of rewards for those who did well in their tasks, instead of giving no gratitude at all. Of course he would still punish those who failed in their duties, but I suppose it was a start. ‘Hammond seemed not so much to master as to manipulate his slaves, offering a system not just of punishments, but of positive inducements, ranging from contests to single out the most diligent hands, to occasional rituals of rewards for all, such as Christmas holidays; rations of sugar, tobacco, and coffee; or even pipes sent to all adult slaves from Europe when Hammond departed on the Grand Tour.’ So, as you can see, some slave owners were kinder to their slaves than most others.

In 12 Years a Slave this is evident as well. During the slave auction, Northup is sold to a plantation owner named William Ford. Ford tries to convince the seller to sell him the daughter of a woman he is buying, just to keep their family together, but the man will not budge even after Ford practically begs for her. Once they all arrive at the farm, Northup shows his ingenuity by impressing Ford with a waterway that will transport logs more quickly and cheaply. Ford’s carpenter, John Tibeats, said that it could not work, and when it did, he quickly resented Northup for it. One day Tibeats began to harass Northup and the two got into a scuffle, which Northup won, but Tibeats threatened him. Ford’s overseer, Chapin, told Northup to stay on the plantation, because if he left, Chapin would not be able to protect him. Tibeats came back later with two of his friends and began to lynch Northup. Chapin rescued him from the three men, warning them at gunpoint that Ford held a mortgage on Platt and that if they hanged him, Ford would lose that money. He then told them to leave Northup and ride away. Chapin left Northup on his tiptoes all day, so the noose would not wring his neck, until Ford came and rescued him. That night Ford kept Northup in the house to protect him and told him that, in order to save his life, he had given his debt to Epps. This shows how tender and kind some slave owners were compared to cruel ones like Epps.

Elizabeth Keckley

Elizabeth Keckley’s life was an eventful one. Born a slave in Dinwiddie Court-House, Virginia, to slave parents, she did not have it easy, as her early years were crowded with incidents.

She was only four years old when her mistress, Mrs. Burwell, delivered a beautiful black-eyed baby, whose care was assigned to Elizabeth, a child herself. This task didn’t seem very hard to her, as she had been educated to serve others and to rely much on herself. If she met Mrs. Burwell’s expectations, it would be her passport into the plantation house, where she could work alongside her mother, who did most of the cooking and sewing in the family. Trying to rock the cradle as hard as she could, she dropped the baby on the floor and immediately panicked, attempting to pick it up with the fire-shovel, until her mistress came into the room and started screaming at her. It was then that she received her first lashing, but that would not be her last punishment. It was the one she would remember most, though.

At seven years old, Elizabeth saw a slave sale for the first time. Her master had just acquired the hogs he needed for the winter and didn’t have enough money for the purchase. To avoid the shame of not being able to pay, he decided to sell one of his slaves, little Joe, the cook’s son. His mother was kept in the dark, in spite of her suspicions. She was told little Joe was coming back the next morning, but mornings passed and his mother never saw him again.

By the time she was eight, the Burwell family had four daughters and six sons, with a large number of servants. She didn’t see much of her father, as he served a different master, but it was also because they were able to be together as a family only twice a year, at Christmas and during the Easter holidays. Her mother, Agnes, was thrilled when Mr. Burwell made arrangements for her husband to come live with them, and little Lizzie, as her father used to call her, was ecstatic to finally have her family together. That only lasted until Mr. Burwell came one fine day bringing with him a letter saying that her father had to leave for the West with his master, who had decided to relocate there. And that was the last time she ever saw her dad.

Another memory that Elizabeth could not shake was the death of one of her uncles, another slave of Mr. Burwell’s. One day, he lost his pair of plough-lines, but Colonel Burwell offered him another, a new one, and told him he would be punished if he lost those too. A couple of weeks later his new pair got stolen, and he hanged himself for fear of his master’s reaction. It was Lizzie’s mother who found him the next morning, suspended from one of the willow’s solid branches, down by the river. He chose to take his own life over the punishment from his master.

Because they didn’t have any slaves of their own, at 14 Lizzie was separated from her mother and given as a chore girl to her master’s oldest son, who lived in Virginia. His wife was a helpless, morbidly sensitive girl, with little parenting skill. Reverend Robert Burwell was earning very little money, so he couldn’t afford to buy Elizabeth, only to benefit from her services thanks to his father. Living with the minister, she had to do the work of three people, and they still didn’t find her trustworthy. By the time she was 18, Elizabeth had grown into a proud, beautiful young woman. It was around that time that the family moved to Hillsboro, North Carolina, where the minister was assigned a church of his own.

Mr. Bingham, the school principal, was an active member of the church and a frequent visitor of the church house. He was a harsh, pitiless man who became the mistress’s tool in punishing Lizzie, as Mrs. Burwell was always looking for vengeance against her for one reason or another. Mr. Burwell was a kind man, but was highly influenced by his wife and took after her behavior fairly often. One night, after Elizabeth had just put the baby to sleep, Mr. Bingham told her to follow him to his office, where she was asked to take her clothes off because he was going to whip her. Then she did something that no slave had ever done. She refused. She dared him to give her a reason or otherwise he would have to force her, which he did. She was too proud to give him the pleasure of seeing her suffer, so she just stood there like a statue, with her lips firmly closed, until it was over. When he finally let her go, she went straight to her master and asked for an explanation, but Mr. Burwell didn’t react in any way, and only told her to leave. When she refused to go, the minister hit her with a chair. Lizzie couldn’t sleep that night, and it wasn’t from the physical pain, but more from the mental torture she had suffered. Her spirit stoically refused this unjust behavior and, as much as she tried, she couldn’t forgive those who had inflicted it upon her.

The next day all she wanted was a kind word from those who had made her suffer, but that didn’t happen. Instead, she continued to be lashed on a regular basis by Mr. Bingham, who convinced Mrs. Burwell it was the right thing to do to cure her pride and stubbornness. Lizzie continued to resist him, more proud and defiant every time, until one day he started crying in front of her, telling her she didn’t deserve it and he couldn’t do it anymore. He even asked Lizzie for forgiveness and from that day on he never hit one of his slaves ever again.

When Mr. Bingham refused to perform his duty anymore, it was Mr. Burwell’s turn to do it, urged by his jealous wife. Elizabeth continued to resist though, and eventually her attitude softened their hearts and they promised to never beat her again, and they kept their promise.

Sadly, this kind of event was not the only thing that caused her pain during her residence at Hillsboro. Because she was considered fair-looking for one of her race, she was abused by a white man for more than four years, until she got pregnant and gave birth to a boy, the only child she ever had. It wasn’t a child that she wanted to have, because of the society that she was part of, as a child of two races would always be frowned upon, and she didn’t want him to suffer like she did.

The years passed and many things happened during that time. One of Elizabeth’s old mistress’s daughters, Ann, married Mr. Garland, and Lizzie went to live with them in Virginia, where she was reunited with her mother. The family was poor and couldn’t afford a living in Virginia, so Mr. Garland decided to move away from his home to the banks of the Mississippi, in search of better luck. Unfortunately, moving didn’t change anything, and the family still didn’t have the resources needed to make a living. It got to a point where they were considering putting Agnes, her mother, out to service. Lizzie was outraged by the idea that her mother, who was raised in this family and grew up to raise their children years later and love them as her own, would have to go work for strangers. She would have done anything to prevent this from happening. And she did. She convinced Mr. Garland to let her find someplace to work, to help the family and to keep her mother close to her. It wasn’t hard to find work, and she soon had quite a reputation as a seamstress and dress-maker. All the ladies came to her for dresses and she never lacked clients. She was doing so well that she managed to support a seventeen-member family for almost two and a half years. Around that time, Mr. Keckley, whom she had met earlier in Virginia and regarded with a little more than friendship, came to St. Louis and proposed to her. She refused at first, saying that she had to think about his offer, but what scared her was the thought of giving birth to another child that would live in slavery. She loved her son enormously, but she always felt it was unfair for the free side of him, the Anglo-Saxon blood that he had, to be silenced by the slave side that he was born with. She wanted him to have the freedom that he deserved. After thinking about it for a long time, she decided to go to Mr. Garland and ask him the price she should pay for her freedom and her son’s.
He dismissed her immediately and told her never to say such a thing again, but she couldn’t stop thinking about it. With all the respect she had for her master, she went to see him again and asked him what price she had to pay for herself and her son to be free.

He finally gave in to her requests and told her that $1,200 was the price of her freedom. This gave her a silver lining to the dark cloud of her life, and with the prospect of freedom she agreed to marry Mr. Keckley and start a family with him. But years passed and she couldn’t manage to save that amount of money, because her duties with the family were overwhelming and left her little time for anything else. Also, her husband, Mr. Keckley, proved to be more of a burden than a support for her and the boy. Meanwhile Mr. Garland died and Elizabeth was given to another master, a Mississippi planter, Mr. Burwell, a compassionate man who told her she should be free and that he would help with anything she needed to raise the amount of money required to pay for this freedom.

Several plans were thought through, until Lizzie decided she should go to New York and appeal to people’s generosity to help her carry out her plan. All was set; all she needed now was six men to vouch with their money for her return. She had lots of friends in St. Louis and didn’t think it would be a problem, and she easily gathered the first five signatures. The sixth was Mr. Farrow, an old friend of hers, and she didn’t think he would refuse her. He didn’t, but he didn’t believe she would come back either. Elizabeth was puzzled that he didn’t believe in her cause, and she couldn’t accept his signature if he really thought it was her final goodbye. She went home and started to cry, looking at her ready-to-go trunk and at the luncheon her mother had prepared for her, believing that her dream of freedom was nothing but a dream and that she and her son would die slaves, the same way they were born.

And then something happened, something she never expected. Mrs. Le Bourgois, one of her patrons, walked in and changed her world around. She said it wouldn’t be fair for her to beg strangers for money and that it was the ones who knew her that should help her. She would give her $200 from herself and her mother, and she would ask all her friends to help Elizabeth. She was successful and rapidly managed to find the $1,200 Elizabeth needed. And that was it. Lizzie and her sixteen-year-old son, George, were finally free. Free to go anywhere they wanted. Free to start over and to have the life they always wanted. Free by the laws of men and by the smile of God.

Critical review II: The Dependency theory

Latin American countries have always been exposed to Western influence. With its neo-liberalist stance, the West encourages Latin America to open up its trade and cooperate with the West. During the 1960s many countries wanted to keep Western influence out because they were convinced that it would negatively influence their development. A consequence of this attitude was the development of a theory that criticized the Western liberalist stance: dependency theory. Dependency theory criticizes Western modernity. This critical stance can be explained by the fact that Latin America has historically been exposed to the political, economic, cultural and intellectual influence of the US, and by the region’s recurrent attempts to diminish US domination (Tickner 2008:736). The region, being part of the non-core, wants its understanding of global realities to be explored (Tickner 2003b). However, the theory has been influenced by the West and has thereby lost strength. Moreover, its empirical validity has been questioned. For these reasons the theory is hardly used anymore. It should not be forgotten, however, that the theory has some qualities which do contribute to the field of international relations. First this essay will discuss the post-colonial argument that dependency theory is not critical enough and give an explanation of why the theory is Eurocentric. Afterwards it will discuss the theory’s empirical validity, and finally it addresses the theory’s contribution to the field of international relations.

Dependency theory originated in the 1960s in Latin America. Frank, the leading theorist, argues that due to the capitalist system, developing countries are underdeveloped and development is impossible as long as they remain in the capitalist system (Frank 1969). Opposed to Frank, Cardoso and Faletto argue that development is possible despite structural determination and that the local state has an important role; they call it associated dependent development (1979:xi).
Yet both Frank and Cardoso and Faletto eventually argue that dependency paths need to be broken by constructing paths toward socialism. The dependency theorists are thus critical of Western liberalism. As mentioned above, dependency theory attempts to criticize Western modernity. Post-colonialism, however, argues that dependency theory is not counter-modernist and not critical enough. Post-colonial theorists, in contrast to many conventional theorists, state that attention to colonial origins is needed to get a better understanding of the expansion of the world order (Seth 2011). Dependency theory does pay attention to colonial origins. For example, Frank argues that dependency occurred historically through slavery and colonialism and continues today through Western dominance of the international trading system, the practices of multinational companies and the LDCs’ reliance on Western aid (Frank 1969). However, post-colonialism criticizes the way dependency theorists address these colonial origins. The first critique stems from the fact that the homogenizing and incorporating world-historical scheme of dependency ignores, domesticates, or transcends difference. It does not take into account the differences in histories, cultures and peoples (Said, 1985:22). The second critique stems from the fact that, to get a sufficient understanding of the emergence of the modern international system, one should not examine how an international society that developed in the West radiated outwards, but rather the ways in which international society was shaped by interactions between Europe and those it colonized (Seth 2011:174). Post-colonialists further argue that dependency theory is Eurocentric. Dependency theorists are not aware of the way in which culture frames their own analysis. While trying to look at imperialism from the perspective of the periphery, dependistas fail to do this (Kapoor 2002:654).
For dependistas the ‘centre’ continues to be central and dominant, so that the West ends up being consolidated as sovereign subject. Dependency’s ethnocentrism appears in its historical analysis as well. Dependistas use the way capitalism developed in Europe as a universal model that stands for history, and see developing countries as examples of failed or dependent capitalism (Kapoor 2002:654). Post-colonialism thus would argue that, while challenging the current capitalist system, dependency theory is not critical enough because it does not adequately address history and culture and is Eurocentric. Tickner (2008) may provide an explanation of why dependency theory is not critical enough and why it is described in terms of adherence to the capitalist system dominated by the West. IR thinking in Latin America is influenced by, among other things, US intellectual knowledge. As Tickner argues, dependency theory is not a genuinely non-core theory but has been influenced by US analysts. According to Cardoso (1977) this led to severe distortions of its original contents, because the local internal problems of greatest concern to social scientists in Latin America became invisible and external factors such as US intervention and multinational corporations were prioritized. Furthermore, IR thinking in that region has been influenced by conventional theories. For example, through the influence of realism in Latin America, much attention was paid to the role of the elite; theorists were also concerned with the concept of power but replaced it with a more suitable concept, autonomy (Tickner 2008:742). Thus the influence of US IR knowledge and conventional theories may have contributed to the fact that dependency theory is not critical enough and has lost its influence.
Although post-colonialism addresses dependency’s problem – that it does not sufficiently address culture because of its sole focus on capitalism – it should be noted that not only culture but a whole range of other factors which could help explain underdevelopment are left out of the theory as well. This shortcoming clarifies why dependency theory is empirically invalid. For example, dependency does not address local physical, social or political forces that might have had a role in the inability to generate industrial development, and does not acknowledge that imperialism is only partly responsible for underdevelopment (Smith 1981). As Smith argues, ”dependency theory exaggerates the explanatory power of economic imperialism as a concept to make sense of historical change in the south” (1981:757). Smith rightly points out that in some instances dependistas do recognize the influence of local circumstances, but only to ultimately reaffirm the overriding power of economic imperialism. Moreover, the theory does not pay any attention to the positive effects that contact with the international system can provide for developing countries. Thus, dependistas solely look at economic power and state that as long as countries remain part of the international capitalist system, development is impossible. The only way to escape is to isolate from this system, or for the colonizer to relinquish political power (Kapoor 2002:656). However, South Korea’s development shows the invalidity of this argument. South Korea experienced rapid growth during the 1970s despite its dependency on the US. The case of South Korea shows that development is possible despite dependency and that dependency can have positive effects. Another case which shows dependency’s invalidity is Ghana. During the 1980s Ghana adopted dependency policies that were consistent with the denial of the relevance of Western economic principles and tried to keep Western influence out.
However, instead of bringing prosperity and greater independence for the Ghanaian economy, these policies caused poverty and greater dependence on international aid or charity (Ahiakpor 1985:13). The dependency policies did not support Ghana’s development because they focused only on capitalism without taking other factors into account. The theory thus has significant shortcomings, which explain its loss of influence. However, it should be noted that the theory has some qualities as well. Despite its homogenizing and Eurocentric history, the theory is aware that history is an important factor. The advantage of dependency’s structural-historical perspective is that broad patterns and trends can be recognized. Moreover, it allows one to learn from past mistakes to change the future (Kapoor 2002:660). On top of that, as Wallerstein argues, ”One of the crucial insights and contributions of dependency is the conceptualization of the unicity of the world system” (1974:3). It clearly describes how the world is incorporated in the capitalist system and it shows the importance of economic considerations in political issues. Furthermore, it addresses the importance of local elites and foreign companies to the internal affairs of weak states, whereby it provides an analytical framework (Smith 1981:756). For example, Cardoso and Faletto argue that if there is no strong local state, the ruling elite may ally themselves with foreign companies pursuing their own interests, thereby upholding dependency and underdevelopment. This analysis is applicable to Congo and shows, for example, that Congo’s development is restrained because a strong local state is absent and the local elite allies with foreign companies pursuing their own interests (Reno 2006). In short, dependency theory has lost influence because – from a post-colonial perspective – it is not counter-modernist and not critical enough. It does not adequately address history and culture and is Eurocentric.
Moreover, it is empirically invalid because it solely focuses on capitalism without taking other factors into account. In contrast to what dependistas argue, dependent development is possible within the capitalist system and the proposed isolation from this system may even worsen the economy instead of bringing prosperity. However, the theory did influence foreign policymakers and analysts not only in Latin America but also in Africa between the 1960s and the 1980s and it is still useful because it provides an analytical framework on which other analysts can elaborate.

Data Acquisition and Analysis – Curve fitting and Data Modelling

1 Part 1: Track B: Linear Fitting Scheme to Find Best-Fit Values

Introduction

In a linear regression problem, a mathematical model is used to examine the relationship between two variables, an independent variable and a dependent variable (the independent variable is used to predict the dependent variable).

This is achieved with the least-squares method, using Excel to plot the data values and fit a straight line giving the best-fit values. Plotting the data points provided gives a nonlinear graph, but after linearization the least-squares method produces a straight-line graph that minimizes the sum of the squared differences between the data values and the model values.

Aim

The aim of this coursework (Part 1, Track B) is to carry out a data-analysis assessment of linear model fitting, obtaining the best-fit values by manual calculation, implemented in Excel, for the decay-transient data provided in Table 1.1 below.

Time (sec)    Response (V)
0.01          0.812392
0.02          0.618284
0.03          0.425669
0.04          0.328861
0.05          0.260562
0.06          0.18126
0.07          0.1510454
0.08          0.11254
0.09          0.060903
0.10          0.070437

Table 1.1: Data for the decay transient

Methodology

The data in Table 1.1 above represent a decay transient, which can be modelled as an exponential function of time as shown below:

V(t) = V0 exp(-t/τ)    (1.1)

The equation above is nonlinear; to make it linear, the natural logarithm is applied:

log_e x = ln x    (1.2)

From the mathematical equation of a straight line:

y = mx + c    (1.3)

y = a0 + a1x + e    (1.4)

In this case Y = ln V, so taking the natural logarithm of equation (1.1):

ln V = ln V0 + ln e^(-t/τ)    (1.5)

But ln e^x = x and e^(ln x) = x, so:

ln V = ln V0 - t/τ    (1.6)

Applying the natural logarithm thus produces two coefficients, ln V0 and -1/τ, which represent a0 and a1 respectively in equation (1.4).

The normal equations for a straight-line fit can be written in matrix form as:

[ n    Σxi  ] {a0}   { Σyi   }
[ Σxi  Σxi² ] {a1} = { Σxiyi }    (1.8)

This system follows from the general linear least-squares formulation: with [Z] the design matrix (a column of ones and a column of the xi values), the normal equations are

[[Z^T][Z]]{A} = {[Z^T]{Y}}    (1.9)

The main purpose is to calculate {A} (ln V0 and -1/τ), the coefficients of the linearized equation. The matrix method in Excel is used to achieve this by finding values for [Z], [Z^T], [Z^T][Z], [Z^T][Y], and [[Z^T][Z]]^(-1).

The table below shows the values of ln V calculated in Excel by applying the natural logarithm to the response V.

Table 1.2: Excel table showing calculated values for ln V

[Z] is a 10 × 2 matrix with a column of ones and a column of the time values:

[Z] = [ 1  0.01
        1  0.02
        1  0.03
        1  0.04
        1  0.05
        1  0.06
        1  0.07
        1  0.08
        1  0.09
        1  0.10 ]

The transpose of [Z] is given as

[Z]^T = [ 1     1     1     1     1     1     1     1     1     1
          0.01  0.02  0.03  0.04  0.05  0.06  0.07  0.08  0.09  0.10 ]

The product [Z^T][Z] is given as

[Z^T][Z] = [ 10.0000  0.5500
              0.5500  0.0385 ]

The inverse [[Z^T][Z]]^(-1) of the matrix [Z^T][Z] is given as

[[Z^T][Z]]^(-1) = [  0.4667   -6.6667
                    -6.6667  121.2121 ]

The product of the transpose of [Z] and [Y] (the ln V values) is given as

[Z^T][Y] = [ -15.2337
              -1.0758 ]

To obtain {A}, the product of [[Z^T][Z]]^(-1) and [Z^T][Y] was calculated to give

{A} = { a0, a1 } = { ln V0, -1/τ }

Where ln V0 = a0 and -1/τ = a1:

{A} = { 0.0626, -28.8434 }

So,

ln V0 = 0.0626 and -1/τ = -28.8434

V0 = exp(0.0626) = 1.0646; and τ = 1/28.8434 = 0.03467

Then the fitted model is

V(t) = 1.0646 exp(-t/0.03467)

with Y = ln V(t) = 0.0626 - 28.8434t as the corresponding fitted straight line.
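As a cross-check, the matrix calculation above can be reproduced in a few lines of Python/NumPy (a sketch only; the coursework itself performs these steps in Excel). The data are those of Table 1.1.

```python
import numpy as np

# Table 1.1: decay-transient data
t = np.array([0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.10])
v = np.array([0.812392, 0.618284, 0.425669, 0.328861, 0.260562,
              0.18126, 0.1510454, 0.11254, 0.060903, 0.070437])

# Linearize: ln V = ln V0 - t/tau, i.e. y = a0 + a1*x
y = np.log(v)

# Design matrix [Z] (column of ones, column of t); solve the normal equations
Z = np.column_stack([np.ones_like(t), t])
a0, a1 = np.linalg.solve(Z.T @ Z, Z.T @ y)   # {A} = [a0, a1]

V0 = np.exp(a0)      # ~1.0646
tau = -1.0 / a1      # ~0.0347 s
print(V0, tau)
```

Solving Z^T Z {A} = Z^T y directly mirrors the hand calculation of [Z^T][Z], its inverse, and [Z^T][Y] shown above.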

Table 1.3: Excel table showing calculated values for ln V [Y] and ln V(t) [Y]

Table 1.4: Diagram of Excel calculation for curve fitting

Figure 1.1: Diagram of transient decay for response V and response V(t)

Figure 1.2: Diagram of transient decay for response V and response V(t)

Conclusion

The solution to this linear regression exercise was achieved by manually evaluating the generalized normal equations for the least-squares fit, using the matrix method and Microsoft Excel, to obtain the unknown coefficients V0 and τ. The method provides accurate results, and the best-fit values obtained show the close agreement between the fitted straight-line (linearized) response and the transient response shown in the figures above.

2 Part 2: Track B: Type K thermocouple

INTRODUCTION:

The thermocouple is a sensor that is used to measure temperature and is commonly used in many industries. For this lab work, a Type K thermocouple is used to acquire first-order transient (non-linear) response data for a temperature change. A signal-conditioning element, the AD595 thermocouple amplifier, is used to improve the thermocouple signal, since the sensor produces only a low output voltage proportional to the input temperature. A data-acquisition device, the NI-USB 6008, is used to acquire signals from the signal-conditioning circuit, and a resistor-capacitor (RC) low-pass filter is built to reduce the high-frequency noise of the acquired signal; further investigations and analysis are then carried out. In this part, non-linear regression was used to obtain the transient response of the thermocouple signal using a Labview program.

Aim

The aim of the assignment is to produce a Labview program which can obtain real transient data values from a Type K thermocouple, a sensor that produces a voltage from the differential temperature its conductors sense (i.e. a first-order response). This is followed by a non-linear model-fitting procedure which allows the user to capture the thermocouple's initial first-order response to a rising input temperature and to fit an appropriate response-function model to the transient response. The program displays the transient data, the fitted model response and the calculated model parameters.

Reason of the choice of model response

The model output transient response obtained from the input (temperature) of the Type K thermocouple is first order. A first-order system can be defined as having only s to the power of one in the denominator of its transfer function (the order of any system is determined by the highest power of s in the transfer-function denominator) and is characterised by having no overshoot. Its transfer function is 1/(τs+1), so for a unit step input 1/s in the Laplace transform (s) domain the output is given as

Y(s) = 1/(s(τs+1))    (2.1)

The partial-fraction method is used to find the output signal Y(t):

Y(s) = A/s + B/(τs+1)    (2.2)

A = [s · Y(s)] evaluated at s = 0, giving A = 1

B = [(τs+1) · Y(s)] evaluated at s = -1/τ, giving B = -τ

Y(s) = 1/s - τ/(τs+1) = 1/s - 1/(s + 1/τ)    (2.3)

Therefore the output signal in the time domain is given as:

Y(t) = L⁻¹[1/s - 1/(s + 1/τ)]    (2.4)

Y(t) = U(t) - e^(-t/τ)    (2.5)

Y(t) = 1 - e^(-t/τ), where t ≥ 0    (2.6)
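The inverse-transform result can be checked numerically: the transfer function 1/(τs+1) corresponds to the differential equation τ·dy/dt + y = u(t), and integrating it for a unit step reproduces the closed form above. A minimal sketch (the time constant here is an illustrative value, not a measured one):

```python
import numpy as np

tau = 0.5          # illustrative time constant (s)
dt = 1e-4          # Euler integration step (s)
t = np.arange(0.0, 2.0, dt)

# Forward-Euler integration of tau*dy/dt + y = 1 (unit step input, y(0) = 0)
y = np.zeros_like(t)
for k in range(len(t) - 1):
    y[k + 1] = y[k] + dt * (1.0 - y[k]) / tau

# Compare with the analytic step response Y(t) = 1 - exp(-t/tau)
y_exact = 1.0 - np.exp(-t / tau)
print(np.max(np.abs(y - y_exact)))   # small discretization error
```

The simulated trajectory matches the analytic first-order response to within the discretization error of the Euler step.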

Substituting the output response V(t) for Y(t), equation (2.6) can also be re-written as:

V(t) = 1 - e^(-t/τ)    (2.7)

Assuming that for a given input temperature T0 an output response V0 is produced, and for an increased temperature T1 an output voltage V1 is produced, the output voltage for the change in voltage relative to the change in temperature is given as:

V(t) = V0 + (V1 - V0)(1 - e^(-t/τ))   (thermocouple voltage response)    (2.8)

V(t) = (V1 - V0)(1 - e^(-t/τ)) + V0    (2.9)

Where (V1 - V0) = ΔV:

V(t) = ΔV(1 - e^(-t/τ)) + V0    (3.0)

T(t) = ΔT(1 - e^(-t/τ)) + T0    (3.1)

T(t) = a(1 - e^(-t/b)) + c

Equation (3.1) is similar to the general nonlinear model given as:

F(x) = a(1 - e^(-t/b)) + c    (3.2)

Where,

F(x) = V(t); a = ΔV; b = τ; and c = V0

The thermocouple's voltage output is nonlinear, giving a first-order response curve (Digilent Inc., 2010).

Figure 2.1: Thermocouple output voltage first-order response curve, V(t) against t

Explanation on the principles of non-linear regression analysis

Non-linear regression is a method that can be used to show how the response and the unknown parameters (predictors) relate to each other through a functional form, i.e. relating Y as a function of x and one or more parameters. In other words, the equation we are trying to fit depends non-linearly upon one or more of its parameters. The Gauss-Newton method is used to solve non-linear regression problems by applying a Taylor-series expansion to express the non-linear model in a linear form.

Unlike a linear regression, a non-linear regression cannot be manipulated or solved directly for its coefficients; an iterative approach is required. A non-linear regression model is given as:

Yi = f(xi, a0, a1) + ei

Where,

Yi = responses

f = the model function of (xi, a0, a1)

ei = errors

For this assignment the non-linear regression model is given as

f(x) = a0(1 - e^(-a1·x)) + e

Where,

f(x) = V(t); a0 = ΔV; a1 = 1/τ; and e = V0

T(t) = (T1 - T0)(1 - e^(-t/τ)) + T0    (3.3)

V(t) = (V1 - V0)(1 - e^(-t/τ)) + V0    (3.4)

Where (V1 - V0) = ΔV:

V(t) = ΔV(1 - e^(-t/τ)) + V0    (3.5)
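The Gauss-Newton iteration described above can be sketched in Python/NumPy for the model V(t) = ΔV(1 - e^(-t/τ)) + V0. This is illustrative only — the coursework itself implements the fit in Labview, and the data here are synthetic, with assumed parameter values rather than measured ones:

```python
import numpy as np

def model(t, a, b, c):
    """First-order rise: f(t) = a*(1 - exp(-t/b)) + c."""
    return a * (1.0 - np.exp(-t / b)) + c

def gauss_newton(t, y, p0, iters=50):
    """Fit (a, b, c) by Gauss-Newton: linearize about the current
    estimate, solve the linear least-squares step, and repeat."""
    a, b, c = p0
    for _ in range(iters):
        e = np.exp(-t / b)
        r = y - model(t, a, b, c)              # residuals
        # Jacobian of the model with respect to (a, b, c)
        J = np.column_stack([1.0 - e,          # df/da
                             -a * e * t / b**2,  # df/db
                             np.ones_like(t)])   # df/dc
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        a, b, c = a + step[0], b + step[1], c + step[2]
    return a, b, c

# Synthetic "thermocouple" data with assumed deltaV = 2.0, tau = 0.5, V0 = 1.0
t = np.linspace(0.05, 2.0, 40)
y = model(t, 2.0, 0.5, 1.0)

a, b, c = gauss_newton(t, y, p0=(1.5, 0.4, 0.8))
print(a, b, c)
```

On noise-free data with a reasonable starting guess, the iteration recovers the generating parameters; with real measurement noise, the converged values estimate ΔV, τ and V0 in the least-squares sense.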

Description of measurement task

The intent of the measurement experiment was to identify and analyse the components that make up the measurement task and to examine the transient response from the Type K thermocouple. It covers the method, instruments and series of actions used in obtaining measurement results. Equipment such as the Type K thermocouple, the NI-PXI-6070E 12-bit I/O card, the AD595 thermocouple amplifier, and a Labview software program (used for calculation of the model parameters) were used to carry out this task.

The measurement task is to introduce the Type K thermocouple sensing junction in hot water to analyse the temperature change and the corresponding voltage response is generated and observed on a Labview program. This activity is executed frequently to acquire the best fitted model response and parameters.

Choice of signal conditioning elements

The choice of a signal conditioning element used in measurement is important because the signal conditioning element used can either enhance the quality and efficiency of a measurement system or reduce its performance.

The AD595 thermocouple amplifier is the signal-conditioning element chosen for use with the Type K thermocouple in this experiment because it has a built-in ice-point compensator (cold-junction compensation) and amplifier, used as a reference junction to compare and amplify the output voltage of the Type K thermocouple, which generates only a small output voltage corresponding to the input temperature. The AD595AQ chip used is pre-calibrated by laser trimming to match the Type K thermocouple characteristic, with an accuracy of ±3 °C, operates between -55 °C and 125 °C, and is available in a low-cost 14-pin cerdip package. The AD595 thus provides amplification (gain) of the low output voltage, linearization of the thermocouple's nonlinear output response so that it can be converted to the equivalent input temperature, and cold-junction compensation, improving the performance and accuracy of thermocouple measurements.

Equipment provided for measurement

Three pieces of equipment were provided for the measurement exercise:

Type K thermocouple

The Type K thermocouple (chromel/alumel) is the most commonly used temperature transducer. It produces an electromotive force (e.m.f.) of approximately 41 microvolts per degree Celsius (µV/°C); this output is nonlinear, and the voltage produced between its two dissimilar alloys changes with temperature, i.e. the input temperature corresponds to the output voltage it generates. It is inexpensive, performs well in rugged environmental conditions, and is calibrated to operate over a wide temperature range of about −250 °C to 1370 °C. One of its constituent alloys contains nickel, which is magnetic; the magnetic properties can change when the thermocouple is subjected to a sufficiently high temperature, which can affect its accuracy.
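The sensitivity quoted above can be illustrated with a minimal sketch, written here in Python for illustration (the real Type K characteristic is nonlinear and is tabulated by NIST polynomials; a constant 41 µV/°C is only an approximation, and the function name is mine):

```python
# Illustrative linear approximation only -- NOT the NIST ITS-90 polynomial.
SEEBECK_UV_PER_C = 41.0  # approximate Type K sensitivity, uV/degC (from the text)

def typek_emf_uv(t_hot_c: float, t_ref_c: float = 0.0) -> float:
    """Approximate thermocouple e.m.f. in microvolts for a hot-junction
    temperature t_hot_c and a reference (cold) junction at t_ref_c."""
    return SEEBECK_UV_PER_C * (t_hot_c - t_ref_c)

# A 100 degC hot junction against a 0 degC reference gives about 4100 uV,
# i.e. roughly 4.1 mV -- too small for a volt-range DAQ input, which is
# why an amplifier such as the AD595 is needed.
print(typek_emf_uv(100.0))  # 4100.0
```

This also makes clear why the thermocouple output must be amplified before acquisition: a few millivolts would span only a handful of codes on a DAQ configured for a ±1 V input range.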

Figure 2.2: Circuit diagram of a thermocouple, signal connector and signal conditioner

NI-USB 6008

The NI USB-6008 is a National Instruments device that provides DAQ functionality for applications such as portable measurements, data logging and laboratory experiments. It is affordable enough for academic use yet capable of more sophisticated measurement tasks. It ships with ready-to-run data logger software that allows the user to perform quick measurements, and it can be configured using National Instruments LabVIEW. It provides 8 single-ended analog input (AI) channels, 2 analog output (AO) channels, 12 digital input/output (I/O) lines and a 32-bit counter over a full-speed USB interface, and it is compatible with LabVIEW 7.x, LabWindows™/CVI, and Measurement Studio DAQ modules for Visual Studio.

Figure 2.3: NI-USB 6008 pin out

AD595 thermocouple amplifier

The AD595 is a thermocouple amplifier and cold junction compensator on a single chip of semiconductor material (a monolithic IC). It produces a high-level output of 10 mV/°C from a thermocouple input signal by combining an ice-point reference with a pre-calibrated amplifier. It has an accuracy of ±3 °C for the A grade and ±1 °C for the C grade version, and it can be powered from a single +5 V supply, with a negative supply added if temperatures below 0 °C are to be measured. It is laser-trimmed to conform to the Type K thermocouple specification and is available in a 14-pin side-brazed ceramic DIP (Analog Devices, 1999).
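The 10 mV/°C scaling described above makes recovering temperature from the AD595 output a simple division. A minimal sketch (in Python for illustration; the function name is mine, and the datasheet gives the exact transfer function, which this nominal scaling only approximates):

```python
# Nominal AD595 output scaling from the text: 10 mV per degC.
AD595_V_PER_C = 0.010

def ad595_temp_c(v_out: float) -> float:
    """Convert an AD595 output voltage (volts) to temperature (degC),
    using the nominal 10 mV/degC scaling."""
    return v_out / AD595_V_PER_C

print(ad595_temp_c(0.25))  # ~25 degC for a 0.25 V reading
```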

Figure 2.4: Block diagram showing AD595 in a functional circuit

Configuration of the I/O channel(s)

The I/O channel(s) provide a way (path) for communication between the input device (thermocouple sensor) and the output device (DAQ). The thermocouple senses temperature as input and sends the data to the DAQ which receives the data and displays the information through a computer on the Labview front panel graph.

The following explain the configuration of the DAQ for the thermocouple measurement:

Channel Settings: used to select an I/O channel on the DAQ device; either AI0 or AI1 can be chosen and renamed to suit the user.

Signal Input Range: used to set the minimum and maximum voltage expected from the AD595 thermocouple amplifier, which also helps achieve the best resolution from the NI USB-6008 data acquisition device.

Scaled Units: used to select the unit of the analog signal generated; since the thermocouple output signal measured, corresponding to temperature, is a voltage, the scaled unit chosen is 'Volts'.

Terminal Configuration: used to choose the terminal on the DAQ to which the signal conditioning circuit is connected.

Custom Scaling: 'No Scale' was chosen, since no custom scale was adopted.

Acquisition Mode: used to select how samples are acquired; Continuous Samples was chosen because it allows the DAQ to collect data signals continuously from the circuit until the user decides to stop.

Samples to Read: allows the user to choose how many samples to read, depending on the frequency of the input signal. The sampling rate should be at least twice the highest signal frequency (the Nyquist criterion) so that all the desired signal content is captured. Two thousand (2k) samples were chosen.

Rate (Hz): allows the user to choose the sampling rate; a rate of 1 kHz was chosen.
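The sampling choices above can be sketched numerically (Python used for illustration; the helper name is mine): a 1 kHz rate satisfies the Nyquist criterion for signals below 500 Hz, and 2000 samples at 1 kHz give a 2-second record, which comfortably spans the thermocouple's ~40 ms time constant.

```python
def nyquist_ok(sample_rate_hz: float, signal_freq_hz: float) -> bool:
    """True if the sample rate exceeds twice the signal frequency."""
    return sample_rate_hz > 2.0 * signal_freq_hz

rate_hz = 1000.0     # Rate (Hz) chosen in the DAQ Assistant
n_samples = 2000     # Samples to Read chosen in the DAQ Assistant

record_seconds = n_samples / rate_hz  # length of each acquisition

print(nyquist_ok(rate_hz, 400.0))  # True: 1 kHz > 2 * 400 Hz
print(record_seconds)              # 2.0 seconds of data
```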

Connection of Circuit to DAQ and Configuration of I/O channel(s)

The connection of the circuit to the NI-USB 6008 data acquisition device was carried out by connecting two wires from the output voltage and ground of the signal conditioning unit i.e. the AD595 device.

The red wire from the signal conditioning unit was connected to the positive port of the analog input channel 0 (+AI0) of the DAQ device and the black wire from the ground was connected to the negative port of the analog input channel 0 (-AI0) of the DAQ. The diagrams below show the connections between the signal conditioning circuit and the connector block (DAQ).

Figure 2.5: Picture showing the connection of the signal conditioning circuit with the DAQ

Description of the Labview VI

LabVIEW is a National Instruments system design and programming environment well suited to measurement and control applications; it gives an engineer tools to test and solve practical problems in a short amount of time and to design control systems. It is less complex and easier to use than many other programming and simulation environments. A LabVIEW Virtual Instrument (VI) consists of a Front Panel and a Block Diagram. For this lab experiment, the VI is used to examine and determine the thermocouple's response and to analyse the noise present in the filtered and unfiltered thermocouple voltage, displaying the results on graph indicators. The Front Panel and Block Diagram are described as follows:

Figure 2.6: Block diagram of the Labview design

Block diagram:

It is where the user creates the code for the LabVIEW program. When the block diagram is active, the program is built using the Functions palette, which contains objects such as structures, numeric, Boolean, string, array, cluster, time and dialog, file I/O, and advanced functions that can be added to the block diagram.

Front Panel:

It is a graphic user interface that allows the user to interact with the Labview program. It can appear in silver, classic, modern or system style. The controls and indicators are located on the controls palette and used to build and add objects like numeric displays, graphs and charts, Boolean controls and indicators etc. to the front panel.

DAQ Assistant:

It allows a user to configure, generate and acquire analog input signals from any of the data acquisition (DAQ) input channels. For this experiment, the signal conditioning circuit was connected to analog input channel 0 of the NI USB-6008, and the DAQ Assistant was used to configure the DAQ to acquire signals from the AD595 thermocouple amplifier.

For Loop:

The For Loop, like the While Loop, allows code to be executed repeatedly, running its subdiagram a required number of times (N). The For Loop is found on the Structures palette and can be placed on the block diagram. It is made up of a count terminal and an iteration terminal.

Trigger and Gate VI:

It is used to extract a part (segment) of a signal; its mode of operation is based on either a start or stop condition, or it can be static.

Curve Fitting VI:

It is used to calculate the best-fit parameter values that best describe an input signal. It can be used for linear, non-linear, spline, quadratic and polynomial model types. It minimizes the weighted mean squared error between the observed signal and the best-fit response. For this experiment, initial guesses were supplied for the coefficients of the non-linear model used.
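What the Curve Fitting VI does here can be sketched outside LabVIEW; the following uses SciPy's least-squares fitter as a stand-in (an assumption for illustration, not the VI itself), fitting the same first-order model a·(1 − exp(−t/b)) + c to synthetic step-response data:

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, a, b, c):
    """First-order step-response model used in the experiment."""
    return a * (1.0 - np.exp(-t / b)) + c

# Synthetic "thermocouple" data: a ~10 degC step with a 40 ms time
# constant from 29.7 degC, plus a little measurement noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.3, 300)
y = first_order(t, 10.2, 0.040, 29.7) + rng.normal(0.0, 0.05, t.size)

# Initial guesses for a, b, c, as supplied in the VI configuration.
popt, _ = curve_fit(first_order, t, y, p0=[15.0, 0.04, 29.0])
a_fit, b_fit, c_fit = popt
print(a_fit, b_fit, c_fit)  # close to 10.2, 0.040, 29.7
```

The fitter iteratively adjusts a, b and c from the initial guesses until the mean squared error between model and data is minimized, which is the same idea the VI's "maximum iterations" and "initial guesses" settings control.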

Graphs:

It is a special type of indicator that accepts different data types and is used to display an array of input data or signals. In this case a waveform graph was used.

Numeric Function:

It is used to carry out mathematical and arithmetic operations on numbers and to convert numbers from one data type to another. The Multiply function was used to return the product of its inputs.

Figure 2.7: Configuration of Trigger and Gate

Figure 2.8: Curve fitting configuration

The diagram in figure 2.8 above is a window showing the configuration for curve fitting and the configuration steps are as follows:

Model Type: the non-linear model was chosen because the observed signal is a first-order (exponential) response curve.

Independent variable: t was the independent variable chosen

Maximum iterations: The default maximum iterations 500 is chosen.

Non-linear model: The equation for the non-linear model is a*(1 − exp(−t/b)) + c

Initial guesses: Values for a, b, and c were chosen to get the best fitting values for the curve. The values for a, b, and c are 15.000000, 0.040000, and 29.000000 respectively.

Figure 2.9: Transient response of the thermocouple for best fit 1st measurement

Figure 3.0: Transient response of the thermocouple for best fit 2nd measurement

Figure 3.1: Transient response of the thermocouple for best fit 3rd measurement

The diagrams in figures 2.9, 3.0 and 3.1 above show the transient response of the thermocouple after being inserted in warm water, used to obtain the best-fit curve, i.e. to replicate the actual thermocouple response. With the Trigger and Gate Express VI, the delay seen in the three graphs was removed, making the thermocouple's signal response more appropriate and providing a better best-fit result. Carrying out multiple measurements to obtain the best fit reduces the uncertainty in the fitted time constant and produces a better response curve than taking a single measurement. The table below shows the results of the three measurements with their residual and mean squared error values.

Model Parameters        First (1st)     Second (2nd)    Third (3rd)

a (°C)                  21.4671         10.2373         8.60708

b (sec)                 0.0232065       0.039578        0.0432934

c (°C)                  32.1068         29.661          29.4745

Residual                0.666461        0.0357227       0.124069

Mean Squared Error      0.431833        0.0181012       0.0227711

Table 2.1: Results from the three measurements, with best-fit parameters, mean squared error and residual for curve fitting.

From Table 2.1, the second measurement was observed to have the best-fitting curve, with the residual and mean squared error closest to zero compared with the 1st and 3rd measurements. The best-fit parameter results of the second measurement are inserted in the non-linear model equation, which is given as:

y = a*(1 − exp(−t/b)) + c                (3.6)

T(t) = ΔT(1 − e^(−t/τ)) + T0             (3.7)

Where,

a is the change in temperature of the thermocouple (ΔT)

b is the time constant (τ)

c is the initial temperature of the thermocouple (T0)

t is the time

Substituting the values of a, b and c from the second measurement into the equation:

T(t) = 10.2373(1 − e^(−t/0.039578)) + 29.661             (3.8)

Equation 3.8 above is the non-linear model equation with best-fit parameters for the thermocouple response signal at every value of time (t).
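As a quick numerical check of the fitted model (a Python sketch for illustration, not part of the LabVIEW VI), one can verify the defining property of a first-order response: at t = b (one time constant), the output has completed about 63.2% of its total rise above the initial temperature.

```python
import math

# Second-measurement best-fit values from Table 2.1.
a, b, c = 10.2373, 0.039578, 29.661

def T(t: float) -> float:
    """Fitted first-order model: T(t) = a*(1 - exp(-t/b)) + c."""
    return a * (1.0 - math.exp(-t / b)) + c

rise_fraction = (T(b) - c) / a   # fraction of the rise completed at t = b
print(round(rise_fraction, 4))   # 0.6321, i.e. 1 - e^-1
print(round(T(b), 2))            # ~36.13 degC after one time constant
```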

To obtain the output voltage response, the change in temperature and the initial temperature values in equation 3.7 need to be converted to volts. This is done by dividing each temperature value by 100, since 100 °C corresponds to 1 V at the AD595's 10 mV/°C output scaling. The resulting output voltage of the thermocouple is as follows:

V(t) = ΔV(1 − e^(−t/τ)) + V0             (3.9)

Where,

a = ΔV; b = τ; and c = V0

V(t) = 0.102373(1 − e^(−t/0.039578)) + 0.29661           (4.0)
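The conversion above can be sketched in a few lines (Python for illustration; the variable names are mine): each temperature parameter is divided by 100 °C/V, while the time constant b is unchanged because it is not a temperature.

```python
DEG_C_PER_VOLT = 100.0  # 10 mV/degC scaling => 100 degC per volt

# Second-measurement best-fit parameters (temperature form, eq. 3.8).
a_temp, b_tau, c_temp = 10.2373, 0.039578, 29.661

dV = a_temp / DEG_C_PER_VOLT  # change in voltage, Delta-V
V0 = c_temp / DEG_C_PER_VOLT  # initial voltage
print(dV, b_tau, V0)          # the parameters of equation 4.0
```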

Conclusion

The curve fitting experiment, using the Type K thermocouple, the AD595 signal conditioning device and the NI USB-6008, with LabVIEW software to acquire and display the signals, was carried out successfully. Curve fitting of the thermocouple's transient response was achieved by subjecting the thermocouple to a temperature step, obtaining its transient response, and using a non-linear model with best-fit values to replicate the response curve. This approach can be used to characterise the behaviour of an input response signal and to improve efficiency in control systems.

References

Cimbala, J. M., 2013. Dynamic System Response. [Online]

Available at: https://www.mne.psu.edu/me345/Lectures/Dynamic_systems.pdf

[Accessed 10 March 2014].

Analog Devices, 1999. AD594/AD595 Data Sheet. [Online]

Available at: http://www.analog.com/media/en/technical-documentation/data-sheets/AD594_595.pdf

[Accessed 2 March 2015].

Digilent Inc., 2010. Introduction to First Order Responses. [Online]

Available at: http://www.digilentinc.com/Classroom/RealAnalog/text/Chapter_2p4p1.pdf

[Accessed 6 March 2014].


Job evaluation

Job evaluation is defined as a method for determining the worth of a job in comparison to other jobs in the organization. To establish a justified pay structure for all the employees of the organization, job evaluation provides a means of comparing the quality of work in a particular job, in other words, the worth of the job.

It is different from job analysis; job evaluation is carried out after job analysis, which supplies information about the jobs concerned. Job analysis is defined as a systematic process of determining the skills, duties and responsibilities required for a particular job. Job evaluation therefore begins from job analysis but ends at the point where the worth of the job is determined, ensuring internal as well as external pay equity. In a competitive business environment, it is essential to maintain pay equity; otherwise the organization may lose crucial talent.

Equity:

Overpayment Inequity (Positive Equity): occurs when an employee perceives that his or her outcome/input ratio is greater than that of a comparison other.

Underpayment Inequity (Negative Equity): occurs when an employee perceives that his or her outcome/input ratio is less than that of a comparison other.

Where,

Input: any value that a person brings to a job.

Outcome: any benefit that an employee receives from his or her job.

Objectives of job evaluation

- To build a systematic, rational and deliberate structure of jobs based on their worth to the organization.

- To support the current pay rate structure or to develop one that provides internal equity.

- To assist in setting pay rates that are comparable to those of similar jobs in other organizations, so as to compete in the market place for the best talent.

- To provide a reasoned basis for negotiating pay rates when bargaining collectively with a recognized union.

- To ensure the fair and equitable remuneration of employees in relation to their duties.

- To ensure equity in pay for jobs of comparable skill, effort, responsibility and working conditions, by using a framework that consistently and accurately assesses differences in relative worth among jobs.

- To establish procedures for determining the grade levels and resulting pay ranges for new jobs, or for jobs which have evolved and changed.

- To identify a ladder of progression for future advancement for all employees interested in improving their remuneration.

- To comply with equal pay legislation and regulations by determining pay differences according to job content.

- To develop a base for merit or performance-related pay.

Characteristics of job evaluation

The essential goal of job evaluation is to determine the value of work; however, this value varies from time to time and from place to place under the influence of certain economic pressures. The principal features of job evaluation are:

- It supplies a basis for compensation decisions founded on facts rather than on vague, subjective impressions.

- It attempts to assess jobs, not individuals.

- Job evaluation builds on the output of job analysis.

- Job evaluation does not design the pay structure; it supports the framework by reducing the number of separate and divergent pay rates.

- Job evaluation is not carried out by a single individual; it is carried out by a group of specialists.

- Job evaluation determines the value of a job; the value of each of its aspects, such as skill and responsibility levels, is also assessed in relation to the job.

- Job evaluation helps management maintain high levels of employee productivity and employee satisfaction.

Process of job evaluation

Job analysis describes the skills, duties and responsibilities required for a job. Job evaluation develops a plan for comparing jobs in terms of those things the organization considers important determinants of job worth. The procedure involves a number of steps, stated briefly here and then discussed more fully.

1. Job Analysis: The first step is a study of the jobs in the organization. Through job analysis, information on job content is obtained, together with an appreciation of the worker requirements for successful performance of the job. This information is recorded in the precise, consistent language of a job description.

2. Compensable Factors: The next step is deciding what the organization "is paying for", that is, what factor or factors place one job at a higher level in the job hierarchy than another. These compensable factors are the yardsticks used to determine the relative position of jobs. In a sense, choosing compensable factors is the heart of job evaluation. Not only do these factors place jobs in the organization's job hierarchy, they also inform job incumbents which contributions are rewarded.

3. Developing the Method: The third step is to select a method for appraising the organization's jobs according to the factor(s) chosen. The method should allow consistent placement of jobs containing more of the chosen factors higher in the job hierarchy than jobs containing less of them.

4. Job Structure: The fourth step is comparing jobs to develop a job structure. This involves choosing and assigning decision makers, reaching and recording decisions, and setting up the job hierarchy.

5. Pay Structure: The final step is pricing the job structure to arrive at a pay structure.

Merits of job evaluation

Job evaluation is a process of determining the relative worth of a job. It is useful, in particular, to the personnel manager when framing compensation plans. As a method, it benefits an organization in multiple ways:

- Reduction of inequalities in the pay structure: people and their motivation depend upon how well they are paid, so the primary objective of job evaluation is to achieve external and internal consistency in the pay structure and thereby reduce inequalities in pay rates.

- Specialization: because of the division of labour and the resulting specialization, large enterprises have hundreds of positions and many employees to fill them. An attempt should therefore be made to define each job and fix pay rates for it, which is possible only through job evaluation.

- Help in the selection of employees: job evaluation data can be useful when selecting candidates, as the factors determined for job evaluation can be taken into account during selection.

- Harmonious relationships between employees and management: through job evaluation, agreeable and harmonious relations can be maintained between employees and management, so that pay-rate disputes of all kinds are minimized.

- Standardization: the process of determining pay differentials for different jobs becomes standardized through job evaluation, which helps bring consistency into the pay structure.

- Placement of new jobs: through job evaluation, one can understand the relative value of new jobs in an organization.

Demerits of job evaluation

- Although there are many ways of applying job evaluation flexibly, rapid changes in technology and in the supply of and demand for particular skills create adjustment problems that may need further study.

- When job evaluation results in substantial changes to the existing pay structure, the possibility of implementing these changes in a relatively short time may be restricted by the financial limits within which the firm has to operate.

- Where a large proportion of the workforce is on incentive pay, it may be difficult to maintain a reasonable and acceptable structure of relative earnings.

- The process of job rating is, to some degree, imprecise, because some of the factors and degrees cannot be measured with precision.

- Job evaluation takes a long time to complete, requires specialized technical staff, and is quite expensive.

Methods of job evaluation

Job Ranking:

Under this method, jobs are arranged from highest to lowest in order of their value or merit to the organization. Jobs can also be arranged according to the relative difficulty of performing them. Jobs are judged as a whole rather than on the basis of the important factors within them; the job at the top of the list has the highest value, and the job at the bottom of the list has the lowest. Jobs are usually ranked within each department, and the departmental rankings are then combined to develop an organizational ranking. The variation in pay depends on the variation in the nature of the job performed by the employees. The ranking method is simple to understand and apply and is best suited to a small organization. Its simplicity, however, works to its disadvantage in large organizations, because rankings are difficult to develop in a large, complex organization. Moreover, this kind of ranking is highly subjective in nature and may offend many employees. A more analytical and objective method of job evaluation is therefore called for.

Job Classification:

Under this method, a predetermined number of job groups or job classes is established and jobs are assigned to these classifications. This method places groups of jobs into job classes or job grades. Separate classes may cover office, clerical, managerial and personnel jobs, and so on.

Class I – Executives: further classification under this category may include Office Manager, Deputy Office Manager, Office Superintendent, Departmental Supervisor, and so on.

Class II – Skilled workers: this category may include the Purchasing Assistant, Cashier, Receipts Clerk, and so on.

Class III – Semiskilled workers: this category may include Stenotypists, Machine Operators, Switchboard Operators, and so on.

Class IV – Unskilled workers: this category may comprise peons, messengers, housekeeping staff, file clerks, office boys, and so on.

The job grading method is less subjective than the ranking method described earlier. The system is simple and acceptable to almost all employees without reservation. One strong point of the method is that it considers all the factors that a job comprises. This system can be used effectively for a variety of jobs. The weaknesses of the grading method are:

- Even when the requirements of different jobs differ, they may be combined into a single category, depending on the status a job carries.

- It is difficult to write all-inclusive descriptions of a grade.

- The method oversimplifies sharp differences between different jobs and different grades.

- When individual job descriptions and grade descriptions do not match well, evaluators tend to classify the job using their subjective judgment.

The problems that foreign workers face in a host country

According to the latest figures from the CSO (Central Statistical Office), the number of foreign workers in Mauritius has been rising constantly and now stands at approximately 39,032: 27,408 men and 11,624 women. These expatriates come mainly from Bangladesh (18,429), India (9,105), China (4,656) and Madagascar (3,596).

The manufacturing sector employs the largest number of foreign workers, that is, 29,846, while construction comes second with 6,070 workers. Last September, the Ministry of Labour decided to freeze the recruitment of foreign workers in the construction sector. In any case, the bar of 40,000 foreign workers in Mauritius will be reached by the end of the year, an increase of 20% compared to 2008. These workers are supposed to be treated the same way as local workers and to benefit from local welfare provisions. Although Mauritius has not signed the ICRMW (International Convention on the Protection of the Rights of All Migrant Workers), the country must apply its own laws; here it is the Employment Rights Act (2008) that stipulates the law on all work-related issues. Breaches of these protections have led migrant workers to rebel and voice their grievances through violent action. These people migrate to another country in order to obtain better living conditions; promises are made to them but rarely kept. From a management point of view, employers prefer migrants because they are seen as more hard-working, skilled and cheaper than local workers. It is not always easy for foreign workers to cope with the working conditions applied to them, and, moreover, their cultures and those found in Mauritius are not always the same.

RESEARCH OBJECTIVES

The research objectives of this study are:

- To explore the different difficulties that expatriates face in the host country

- To explore foreign workers' opinions about the way they are treated

- To propose recommendations to organizations that employ foreign workers, to improve their working and living conditions.

LITERATURE REVIEW

Wilson & Dalton (1998) describe expatriates as, ‘those who work in a country or culture other than their own.’

Connerly et al. (2008) stated that 'many scholars have proposed that personal characteristics predict whether individuals will succeed on their expatriate assignment.' Due to globalization, there is a need for expatriates. Companies employing foreign workers should therefore find ways to solve the problems these workers face and make them comfortable in their daily lives, so that they do not want to return to their native country (Selmer and Leung, 2002).

PROBLEMS FACED BY EXPATRIATES

1. Culture Shock

Hofstede (2001) defined it as ‘the state of distress following the transfer of a person to an unfamiliar environment which may also be accompanied by physical symptoms’.

According to Dr Kalervo Oberg, expatriates are bound to experience four distinct phases before adapting to another culture.

Those four phases are:

1. A Honeymoon Phase

2. The Negotiation Phase

3. An Adjustment Phase

4. A Reverse Culture Shock

During the Honeymoon Phase, the expatriates are excited to discover their new environment. They are ready to set aside minor problems in order to learn new things. Eventually, however, this stage comes to an end.

At the Negotiation Phase or the Crisis Period, the expatriates start feeling homesick and things start to become a burden for them. For example, they might feel discomfort regarding the local language, the public transport systems or the legal procedures of the host country.

Then the Adjustment Phase starts. Six to twelve months after arriving in the new country, most expatriates start to feel accustomed to their new home and know what to expect. Their activities become routines and the host country is now accepted as another place to live. The foreign worker starts to develop problem-solving skills to change their negative attitudes to a more positive one.

The final stage is called Reverse Culture Shock or Re-entry Shock. It occurs when expatriates return to their home country after a long period abroad and are surprised to find themselves encountering cultural difficulties.

There are physical and psychological symptoms of culture shock such as:

1. Physical factors

- Loss of appetite

- Digestion problems

2. Cognitive factors

- Feeling of isolation / homesickness

- Blaming the host culture for one's own distress

3. Behavioural factors

- Performance deficits

- Higher alcohol consumption

2. Communication/ Language Barrier

Communication is crucial to both management and employees. Sometimes, because of the language barrier, employees do not understand what is expected of them; they then make mistakes at the workplace and conflicts arise between the parties concerned. The language barrier is also the major obstacle when expatriates change environment. These people often feel homesick and lonely because they are unable to communicate with the local people they meet, and the language problem becomes a barrier to creating new relationships. Special attention must be paid to specific body-language signs, conversational tone and linguistic nuances and customs. Communicative ability permits cultural development through interaction with other individuals; language becomes the means that promotes the development of culture. Language affects and reflects culture just as culture affects and reflects what is encoded in language. Language learners may be subconsciously influenced by the culture of the language they learn, which helps them feel more comfortable and at ease in the host country.

3. Industrial Laws

Laws are vital for the proper conduct of an activity such as welcoming expatriates to Mauritius. The Ministry of Labour, Industrial Relations and Employment produces a 'Guidelines for Work Permit Application' (February 2014) manual for organisations engaged in this activity. The manual describes the procedures to be followed for Bangladeshi, Chinese or Indian workers. Any breach of the law automatically leads to severe sanctions.

For example, the Non-Citizens (Employment Restriction) Act 1973 provides, among other things, that 'a non-citizen shall not engage in any occupation in Mauritius for reward or profit or be employed in Mauritius unless there is in force in relation to him a valid work permit.' A request to the government must be made if an organization wishes to recruit foreign workers in bulk.

Expatriates are human beings, so they should enjoy certain fundamental rights in the host country. In Mauritius, the contract of employment for foreign workers stipulates all the necessary information concerning the expatriates' rights, conditions of work, accommodation and remuneration, among other things. This contract is based on the existing labour law, and its contents are to a large extent the same as for local workers, with some slight differences in conditions of work. Mauritius has adopted good practices in relation to labour migration and has spared no effort to develop policies and programmes that maximize its benefits and minimize its negative consequences. However, improvements to the living and working conditions of foreign workers are still needed.

4. Living Conditions

The NESC published a report in February 2007 advocating that foreigners working on the island should enjoy the same rights as local workers. In reality, many foreign workers suffer from bad working conditions. Some have intolerable living conditions, sleeping in dormitories on benches without mattresses or in tiny bedrooms shared by many people. Those who try to speak out, or who are considered 'ring leaders', are deported.

In 2006, some workers from China and India who tried to form a trade union or to protest were deported. Peaceful demonstrations often turn into riots, which the police brutally suppress.

In August 2007, some 500 Sri Lankans were demanding better wages at the company Tropic Knits, and in response the Mauritian authorities deported 35 of them. At the Compagnie Mauricienne de Textile, one of the biggest companies on the island, employing more than 5,000 people, 177 foreign workers were deported after taking part in an ‘illegal demonstration’ about the lack of running water, the insufficient number of toilets and poor accommodation.

In 2011, a visit to two of Trend Clothing Ltd’s sites by Jeppe Blumensaat Rasmussen showed several Bangladeshi workers living in inhumane conditions. Furthermore, workers were paid an hourly rate of Rs 15.50, less than the previous rate of Rs 16.57, bringing the monthly salary to Rs 3,500–5,000 depending on overtime. One woman even said that she had worked 43 hours and was paid for only 32. The Bangladeshi workers were also living in dormitories with several holes in the ceilings and signs of water damage next to electrical sockets. There is also the case of the migrant Nepali workers who decided to leave Mauritius because of their bad working and living conditions. They were housed in an old production space turned into dormitories, hosting 34 migrant workers. There was no water connection in the kitchen and no running water to flush the toilets. They also received no allowance, and their salary was reduced from Rs 5,600 to Rs 5,036 per month. On 9 June 2011, eight workers wrote a letter giving their employer a month’s notice.

In September 2013, more than 450 Bangladeshi workers at the textile company Real Garments in Pointe-aux-Sables went on strike demanding better working conditions. They also protested in the streets of Port-Louis and had gone to the Ministry of Labour the day before to submit their claims (L’Express, Mauritius). Fourteen Bangladeshi workers were considered the main leaders and were deported by the authorities.

5. Foreign workers and Income

Foreign workers decide to leave their native country to work abroad with the aim of earning more money to send to their families. However, they do not anticipate being paid less than was promised to them before their departure to the host country. Some organisations pay them only half of what they were promised. Having already signed their contract, they are forced to work hard for a low salary. Many Bangladeshis, Indians and Chinese choose to leave the host country when their contract ends, while the more determined ones stay and renew their contract for additional years. Despite their low-paid jobs, conditions are still better than in their native country, where they are even more exploited or where life is far more difficult for them.

The Employment Rights Act (2008) stipulates that if a local worker ‘works on a public holiday, he shall be remunerated at twice the national rate per hour for every hour of work performed.’

However, some expatriates who are forced to work on a public holiday are usually paid the same amount as on an ordinary day. As human beings, they should be treated like any other worker, local or foreign, with the same rights and opportunities.
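The double-rate rule quoted above is simple arithmetic; a minimal sketch follows, with the 8-hour shift chosen purely for illustration (the Rs 15.50 hourly rate is the one reported earlier for the Trend Clothing workers).

```python
# Sketch of the public-holiday pay rule quoted above: twice the normal
# hourly rate for every hour worked. The shift length is hypothetical.

def holiday_pay(hours: float, hourly_rate: float) -> float:
    """Remuneration for work performed on a public holiday, at double rate."""
    return hours * hourly_rate * 2

# An expatriate on Rs 15.50/hour who works 8 hours on a public holiday:
print(holiday_pay(8, 15.50))   # 248.0 — versus Rs 124.0 on an ordinary day
```

The gap between Rs 248 and Rs 124 per shift is exactly what workers lose when a holiday is paid as an ordinary day.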

6. Unions

Unionism is about workers standing together to improve their situation and to help others. Some unions are reactive, waiting for the employer to act and then choosing how to respond, while others are proactive, developing their own agenda and then advancing it wherever possible. When unions and management fail to reach agreement, or where relations break down, the union has the option of pursuing industrial action through a strike, a go-slow, a work-to-rule, a slow-down, an overtime ban or an occupation.

However, expatriates are often not aware of the laws protecting their rights (e.g. many migrant workers are not informed of the laws that provide them with the same level of protection as Mauritian nationals), and employers refuse to recognize union representatives. It is also often difficult for unions to gain access to and organize foreign workers.

An ICFTU-AFRO (the African regional organization of the former International Confederation of Free Trade Unions) mission to Mauritius in February 2004 was told that the few men they saw were mainly supervisors who were said to be hostile to unions.

During 2006 there were a series of reports that workers from China and India who had tried to form a trade union or protest against their employers had been summarily deported. On 23 May 2006, policemen armed with shields and truncheons beat female workers from Novel Garments holding a sit-in in the courtyard of the factory in Coromandel protesting against plans to transfer them to other production units.

According to the MAURITIUS 2012 HUMAN RIGHTS REPORT (section 7, Worker Rights; a. Freedom of Association and the Right to Collective Bargaining), the constitution and law provide for the rights of workers, including foreign workers, to form and join independent unions, conduct legal strikes, and bargain collectively. With the exception of the police, the Special Mobile Force, and persons in government services who are not executive officials, workers are free to form and join unions and to organize in all sectors, including the Export Oriented Enterprises (EOE), formerly known as the Export Processing Zone.

Alzheimer's disease (AD)

Alzheimer’s disease (AD) is the most common cause of dementia and the most common chronic neurodegenerative disorder among the aging population. Dementia is a syndrome characterized by progressive illness affecting memory, thinking, behaviour and the everyday performance of an individual. Dementia mainly affects older people, but in about 2% of cases it begins to develop before the age of 65 years (Organization 2006). According to the World Alzheimer Report 2014, 44 million people are living with dementia across the globe, and this number is set to double by 2030 and triple by 2050 (Prince, Albanese et al. 2014). It is estimated that 5.2 million Americans had AD in 2014 (Weuve, Hebert et al. 2014). This includes 200,000 individuals under age 65 with early-onset AD and 5 million people aged 65 and above (Weuve, Hebert et al. 2014). Women are affected more than men by AD and other dementias (Weuve, Hebert et al. 2014). Among the 5 million people above 65 years of age, 3.2 million are women and 1.8 million are men (Weuve, Hebert et al. 2014). The multiple factors that lead to AD include age, genetics, environmental factors, head trauma, depression, diabetes mellitus, hyperlipidemia, and vascular factors. There are no treatments for AD that slow or stop the death and malfunctioning of neurons in the brain; instead, many therapies and drugs aim at slowing or stopping neuronal malfunction (Association 2014). Currently five drugs have been approved by the U.S. Food and Drug Administration to improve symptoms of AD by increasing the amount of neurotransmitters in the brain (Association 2014). It has been estimated that Medicare and Medicaid covered $150 billion of total health care costs for long-term care of individuals suffering from AD and other dementias (Association 2014).

Diagnostic criteria

The National Institute of Neurological and Communicative Disorders and Stroke–Alzheimer’s Disease and Related Disorders Association (NINCDS–ADRDA) in 1984 proposed criteria as follows: (1) a clinical diagnosis of AD could only be designated ‘probable’ while the patient was alive and could not be made definitively until Alzheimer’s pathology had been confirmed post mortem (McKhann, Drachman et al. 1984); and (2) the clinical diagnosis of AD could be assigned only when the disease had advanced to the point of causing significant functional disability and met the threshold criterion of dementia (McKhann, Drachman et al. 1984).

In 2007, the IWG proposed criteria under which AD could be recognized in vivo, independently of dementia, in the presence of two features (Dubois, Feldman et al. 2007). The first was a core clinical criterion requiring evidence of a specific episodic memory profile characterized by low free recall that is not normalized by cueing (Dubois and Albert 2004). The second was the presence of biomarker evidence of AD, which includes (1) structural MRI, (2) neuroimaging using PET (18F-2-fluoro-2-deoxy-D-glucose PET [FDG PET] or 11C-labelled Pittsburgh compound B PET [PiB PET]), and (3) CSF analysis of amyloid β (Aβ) or tau protein (total tau [T-tau] and phosphorylated tau [P-tau]) concentrations (Dubois, Feldman et al. 2007).

In 2011, the NIA and the Alzheimer’s Association proposed guidelines to help pathologists categorize the brain changes associated with AD and other dementias (Hyman, Phelps et al. 2012). Based on the changes observed, they defined three stages: (a) preclinical Alzheimer’s disease, (b) mild cognitive impairment (MCI) due to Alzheimer’s disease, and (c) dementia due to Alzheimer’s disease (Hyman, Phelps et al. 2012). In preclinical AD, individuals have changes in the cerebrospinal fluid but do not develop memory loss. This reflects the fact that Alzheimer’s-related brain changes begin some 20 years before symptoms occur (Petersen, Smith et al. 1999, Hänninen, Hallikainen et al. 2002, Reiman, Quiroz et al. 2012). In MCI due to AD, individuals have notable changes in thinking that can be observed by family members and friends but do not meet the criteria for dementia (Petersen, Smith et al. 1999, Hänninen, Hallikainen et al. 2002, Reiman, Quiroz et al. 2012). Various studies show that 10 to 20% of individuals aged 65 or above have MCI (Petersen, Smith et al. 1999, Hänninen, Hallikainen et al. 2002, Reiman, Quiroz et al. 2012). It is estimated that 15% of individuals with MCI progress to dementia and 10% to AD every year (Manly, Tang et al. 2008). In dementia due to AD, the individual has problems with memory, thinking and behaviour that affect routine life (Association 2014).

In 2014, the IWG proposed criteria maintaining the principle of high specificity, classified within the following framework: (1) Typical AD can be diagnosed in the presence of an amnestic syndrome of the hippocampal type, which may be associated with different cognitive or behavioural changes, together with one of the following changes indicating in vivo AD pathology: decreased Aβ42 together with increased T-tau or P-tau concentration in the CSF, or increased retention on amyloid tracer PET (Dubois, Feldman et al. 2014). (2) Atypical AD can be diagnosed in the presence of a clinical phenotype consistent with one of the known atypical presentations and at least one of the same changes indicating in vivo AD pathology (Dubois, Feldman et al. 2014). (3) Mixed AD can be diagnosed in patients with typical or atypical phenotypic features of AD and the presence of at least one biomarker of AD pathology (Dubois, Feldman et al. 2014). (4) Preclinical states of AD require the absence of clinical symptoms of AD (typical or atypical phenotypes) and the presence of at least one biomarker of AD pathology, identifying an asymptomatic at-risk state, or the presence of a proven AD autosomal dominant mutation on chromosome 1, 14 or 21 for the diagnosis of a presymptomatic state (Dubois, Feldman et al. 2014). (5) Biomarkers of AD diagnosis are to be differentiated from those of AD progression (Dubois, Feldman et al. 2014).

Neuropathology

Dr. Alois Alzheimer, a German physician, in 1906 observed pathological abnormalities in the autopsied brain of a woman who had suffered from memory problems, confusion and language trouble (Prince, Albanese et al. 2014). He found plaque deposits outside the neurons and tangles inside the brain cells (Prince, Albanese et al. 2014). Thus, senile plaques and neurofibrillary tangles became the two pathological hallmarks of AD (Prince, Albanese et al. 2014).

The histological hallmarks of AD in the brain are the intracellular deposition of the microtubule-associated tau protein in neurofibrillary tangles (NFTs) and the extracellular accumulation of amyloid β peptide (Aβ) in senile plaques (Bloom 2014). Aβ, derived from a larger glycoprotein called amyloid precursor protein (APP), can be processed through two pathways, amyloidogenic and non-amyloidogenic (Gandy 2005). In the amyloidogenic pathway, β-secretase and γ-secretase proteolyse APP to produce soluble amyloid precursor protein β (sAPPβ) and a carboxyl-terminal fragment CTFβ (C99), yielding Aβ peptides (Gandy 2005). Alternatively, APP is proteolysed by the action of α- and γ-secretases, generating soluble amino-terminal fragments (sAPPα) and a carboxyl-terminal fragment CTFα (C83), to produce non-amyloidogenic peptides (Esch, Keim et al. 1990, Buxbaum, Thinakaran et al. 1998).

Figure 1. Amyloidogenic and non-amyloidogenic pathways of APP

APP is cleaved by β- and γ-secretases (amyloidogenic), releasing amyloid Aβ peptide(s), or by α- and γ-secretases (non-amyloidogenic); adapted from (Read and Suphioglu 2013)

The amino acid sequences of Aβ include Aβ42 and Aβ40. Under normal conditions, Aβ40 is present at a 10-fold higher concentration than Aβ42 in the central nervous system (CNS) (Haass, Schlossmacher et al. 1992). However, inflammation, stress and injury in the brain cause a dynamic change in Aβ40 and Aβ42 and lead to an upregulation of Aβ42. In AD, Aβ42 accumulates as misfolded protein in the extracellular space (Gurol, Irizarry et al. 2006).

Tau is a microtubule-associated protein (MAP), most abundant in the central and peripheral nervous systems, that helps in the assembly and stabilization of microtubules, which are crucial for cellular morphology and trafficking (Tolnay and Probst 1999, Iqbal, Liu et al. 2010, Cohen, Guo et al. 2011). NFTs are a major hallmark of AD in the brain. In AD, phosphorylation of tau leads to loss of neuronal function and death. Synapse degeneration correlates strongly with cognitive decline in AD, and soluble oligomeric tau contributes to synapse degeneration (Morris, Maeda et al. 2011). Although the mechanisms by which the protein aggregates into NFTs are unclear, the number of NFTs shows a significant positive correlation with the progression of neurodegeneration and dementia in AD (Cohen, Guo et al. 2011, Arnaud, Robakis et al. 2006).

Figure 2. AD pathology

Deposition of Aβ and tau in neurons. The boxes show the different biomarkers used for examination; adapted from (Nordberg 2015)

Biomarkers

A biomarker is a characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes or pharmacologic responses to a therapeutic intervention (Atkinson, Colburn et al. 2001). Imaging and fluid biomarkers provide evidence of neurodegenerative pathology in individuals (Dickerson, Wolk et al. 2013).

CSF Biomarkers

CSF biomarkers play a major role in diagnosing probable AD. Moreover, abnormalities in the CSF are found long before symptoms occur.

Amyloid beta (Aβ) is synthesized in the brain and diffuses into the CSF. In cognitively normal individuals Aβ appears at moderate levels; however, individuals suffering from AD have reduced Aβ42 in the CSF, which serves as a useful biomarker in diagnosis (Sunderland, Linker et al. 2003). Low levels of Aβ42 appear at least 20 years prior to clinical dementia in individuals with familial AD mutations (Ringman, Coppola et al. 2012). In addition, reduced levels of Aβ42 appear early in cognitively normal individuals, preceding MCI by years (Fagan, Head et al. 2009). Therefore Aβ42 cannot be used on its own as a specific biomarker to discriminate AD from other dementias; it should be combined with other biomarkers to identify a specific dementia.

Tau in the CSF correlates with the progression of tau-related pathology in the cerebral cortex. An increased tau level in the CSF of AD patients reflects neuronal loss in the brain (de Souza, Chupin et al. 2012). As with Aβ42, elevation of tau seems to occur in cognitively normal individuals (Fagan, Head et al. 2009). Hence it is important to consider other biomarkers for the differential diagnosis of AD. Moreover, phosphorylated (p)-tau has 85% sensitivity and 97% specificity in discriminating AD from other neurological disorders (Tan, Yu et al. 2014). P-tau is therefore superior to t-tau in differential diagnosis, helping to overcome the shortcomings of Aβ42 and t-tau (Buerger, Zinkowski et al. 2002). Changes in CSF t-tau and p-tau occur after Aβ42 initially aggregates and increase as amyloid accumulates (Buchhave, Minthon et al. 2012).
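What sensitivity and specificity figures like those above mean in practice can be sketched with Bayes' rule; the 10% prevalence used here is a hypothetical assumption for illustration, not a figure from the cited studies.

```python
# Sketch: turning 85% sensitivity / 97% specificity (the p-tau figures
# reported above) into a positive predictive value. The 10% prevalence
# is a hypothetical assumption.

def positive_predictive_value(sens: float, spec: float, prev: float) -> float:
    """P(disease | positive test) via Bayes' rule."""
    true_pos = sens * prev                 # diseased and test-positive
    false_pos = (1 - spec) * (1 - prev)    # healthy but test-positive
    return true_pos / (true_pos + false_pos)

ppv = positive_predictive_value(sens=0.85, spec=0.97, prev=0.10)
print(f"{ppv:.2f}")  # ≈ 0.76: a positive test raises a 10% prior to ~76%
```

This also illustrates why the text recommends combining biomarkers: even high specificity leaves substantial false positives when the prior probability is low.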

Imaging Biomarkers

Structural MRI

Structural MRI studies show that subjects diagnosed with AD and MCI consistently exhibit atrophy in the entorhinal cortex and hippocampus of the medial temporal lobe (MTL); cortical thinning in AD-signature regions is an MRI sign of emerging AD (Du, Schuff et al. 2001). MRI studies of normal subjects with a maternal history of AD show reduced volume of the MTL and precuneus (Berti, Mosconi et al. 2011). Voxel-based analysis of the whole brain shows that structural MRI can identify brain atrophy in cortical regions up to 10 years before the clinical symptoms of AD, with the greatest effect in the MTL (Du, Schuff et al. 2001).

Positron Emission Tomography (PET)

PET is based on the principle of spontaneous emission of a positron by the nucleus of an unstable radionuclide whose number of protons exceeds that of neutrons (Granov, Tiutin et al. 2013). PET images the in vivo distribution of radiopharmaceutical substances with high resolution and sensitivity (Fahey 2003). The positron, a β-particle with positive charge, annihilates with a negatively charged electron, releasing two gamma photons of equal energy (511 keV) travelling at 180 degrees to each other to conserve momentum (Kukekov and Fadeev 1986, Fahey 2003).
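The 511 keV figure above follows directly from the electron rest energy, E = mₑc²; a minimal check, using standard CODATA constants (not taken from the essay):

```python
# Sketch: verify the 511 keV annihilation photon energy from E = m_e * c^2.
# Constants are CODATA values (an outside assumption, not from the text).
M_E = 9.1093837015e-31   # electron rest mass, kg
C = 2.99792458e8         # speed of light, m/s
EV = 1.602176634e-19     # joules per electronvolt

energy_j = M_E * C**2              # rest energy of one electron, joules
energy_kev = energy_j / EV / 1e3   # convert to keV

print(f"{energy_kev:.1f} keV")     # ≈ 511.0 keV per annihilation photon
```

Each annihilation converts the rest mass of one electron and one positron into two such photons, which is why both photons carry 511 keV.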

The components of a PET scanner are a movable bed, detectors, a gantry and a computer. Each detector consists of multiple crystals attached to photomultipliers (Granov, Tiutin et al. 2013). The interaction between a gamma photon and a crystal produces a scintillation, which induces an electric impulse in the photomultiplier that can be detected and processed by the computer (Khmelev, Shiryaev et al. 2004). If two detectors fire in coincidence, the positron was emitted along the line connecting the detectors, termed the line of response (LOR) (Fahey 2003). In most scanners, two detections are treated as coincident if they occur within about 10 nanoseconds of each other (Fahey 2003). The sensitivity of PET can be increased by arranging more detectors into a ring. The data acquired from the individual are stored in the computer in the form of a sinogram. Different reconstruction techniques, such as filtered back projection (FBP), iterative methods and OSEM, are used to reconstruct an image. Modern PET scanners use small LSO crystals, which permit high resolving capacity, high resolution, effective image-reconstruction algorithms and a field of view sufficient for single-stage scanning of the brain or heart (Granov, Tiutin et al. 2013).
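The coincidence test that pairs two detections into a line of response can be sketched as a simple timing comparison; the window width used here is an assumed typical value, not one given in the cited sources.

```python
# Sketch of the coincidence logic described above: two gamma detections
# are paired into a line of response (LOR) only if their arrival times
# fall within the coincidence window. Window width is an assumption.

COINCIDENCE_WINDOW_NS = 10.0  # assumed window, nanoseconds

def in_coincidence(t1_ns: float, t2_ns: float) -> bool:
    """True if two detection timestamps fall within the window."""
    return abs(t1_ns - t2_ns) <= COINCIDENCE_WINDOW_NS

print(in_coincidence(100.0, 104.0))   # True  -> record a LOR
print(in_coincidence(100.0, 160.0))   # False -> reject as a random event
```

Real scanners apply this test in hardware across every detector pair in the ring; the accepted pairs are what accumulate into the sinogram mentioned above.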

A cyclotron, a particle accelerator, provides the radionuclides for clinical use. Heavy charged particles are accelerated to energies of 5-100 MeV in the cyclotron (Granov, Tiutin et al. 2013). The beam of particles is focused on the target substance using magnetic lenses. The target material is bombarded with the heavy particles to generate the required radionuclide (Granov, Tiutin et al. 2013).

The requirements of a good tracer include high affinity for the target receptor, selectivity versus other receptors (a Bmax/Kd ratio of at least 10-fold, where Bmax is the density of the receptor and Kd is the equilibrium dissociation constant of the radiotracer) and good permeability (McCarthy, Halldin et al. 2009). A tracer has to be a poor substrate of P-glycoprotein if it is developed for imaging targets in the brain (Terasaki and Hosoya 1999). It has been found that low hydrogen bonding plays an important role in predicting good PET tracers (McCarthy, Halldin et al. 2009). For a good tracer, the time to binding equilibrium should be long relative to washout of non-specifically bound tracer, but short relative to isotope decay (McCarthy, Halldin et al. 2009).
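The Bmax/Kd rule of thumb above can be sketched as a small calculation; all receptor densities and dissociation constants below are hypothetical numbers chosen for illustration.

```python
# Sketch of the Bmax/Kd selectivity check described above.
# All numbers are hypothetical, for illustration only.

def binding_potential(bmax_nm: float, kd_nm: float) -> float:
    """Binding potential = receptor density / dissociation constant."""
    return bmax_nm / kd_nm

target = binding_potential(bmax_nm=50.0, kd_nm=1.0)       # target receptor
off_target = binding_potential(bmax_nm=50.0, kd_nm=25.0)  # competing receptor

# Rule of thumb from the text: at least a 10-fold advantage at the target.
selective = target / off_target >= 10
print(target, off_target, selective)  # 50.0 2.0 True
```

A tracer with these (hypothetical) constants would pass the selectivity screen: its binding potential at the target is 25 times that at the competing site.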

Amyloid PET

PET imaging with the amyloid-binding agent Pittsburgh compound B (PET-PiB) helps to determine β-amyloid (Aβ) and its distribution over the brain, which was previously possible only in postmortem studies. Longitudinal study has provided evidence of a direct relationship between PET-PiB and the likelihood of conversion from a clinical diagnosis of MCI to AD over three years (Klunk 2011). Since there is significant overlap between amyloid imaging and CSF Aβ42, researchers are attempting to identify areas where these two biomarkers may be equivalent and areas where one measurement could hold unique advantages (Vlassenko, Mintun et al. 2011). In addition, a current hypothesis states that a higher amyloid burden, assessed by florbetapir 18F (18F-AV-45) amyloid PET, is associated with lower memory performance among clinically normal older subjects (Sperling, Johnson et al. 2013).

FDG-PET

FDG-PET (2-deoxy-2[18F]fluoro-D-glucose PET) is one of the neurodegeneration biomarkers included in the new research criteria proposed for the diagnosis of AD by the International Working Group (IWG) in 2007 and 2010, as well as in the new diagnostic criteria for AD of the National Institute on Aging-Alzheimer’s Association (NIA-AA) (McKhann, Drachman et al. 1984, Dubois, Feldman et al. 2007, Dubois, Feldman et al. 2014). FDG-PET measures local glucose metabolism, a proxy for neuronal activity at resting state, to assess cerebral function. It is evident that AD individuals have reduced FDG uptake, predominantly in temporoparietal association areas, the precuneus and the posterior cingulate region (Minoshima, Giordani et al. 1997). These changes can be observed in subjects 1-2 years before the onset of dementia and are closely related to cognitive impairment (Herholz 2010). Although MRI is more sensitive in detecting and monitoring hippocampal atrophy (Fox and Kennedy 2009), FDG is more sensitive in detecting neuronal dysfunction in neocortical association areas. Hence FDG is well suited to monitoring the progression of the disease syndrome (Alexander, Chen et al. 2002).

Regional functional impairment of glucose metabolism in AD correlates with the severity and progression of different cognitive deficits (Langbaum, Chen et al. 2009).

INDIAN NATIONALISM (1757-1947)

Great Britain colonized the nation of India in the 1700s, when the East India Company gained control of India in 1757; the Company then ruled India without interference from the British Government until the 1800s. With the supply of raw materials and the growing market for British goods, the British government began to increase its control. In 1858, the British government took complete control of India after the Sepoy Mutiny, and the British subjugated and displayed bigotry against native Indians. Indian nationalist movements, such as those led by the Indian National Congress, had made attempts at self-rule but had never been entirely successful. The great champion of a free India, Gandhi, was influential in the Indian pro-independence movement. Known as the Mahatma, or the Great Soul, Gandhi pressed for change and an end to British colonization through a strict policy of non-violence, or passive resistance. The movement gained momentum after World War I, following the Jallianwala Bagh Massacre, in which a crowd that had gathered at Jallianwala Bagh in Amritsar to attend the annual Baisakhi fair was surrounded by the army on the orders of General Dyer, who opened fire on the crowd, killing hundreds. The aftermath of this massacre produced general unrest as crowds took to the streets in many north Indian towns. The British used brutal repression, trying to humiliate and intimidate people. People were flogged and villages were bombed, and this violence compelled Gandhi to halt the movement.

A feeling of unity and patriotism was inspired by history and fiction, folktales and songs, popular prints and images. Abanindranath Tagore’s image of Bharat Mata and Bankim Chandra’s song Vande Mataram united many individuals and communities. During the Swadeshi Movement, a tricolour (red, green and yellow) flag was designed. It had eight lotuses representing the eight provinces of British India and a crescent moon representing Hindus and Muslims. In 1921, Gandhi designed the tricolour Swaraj flag (red, green and yellow) with the spinning wheel at the centre. This flag represented the Gandhian ideal of self-reliance and became a symbol of resistance. It instilled pride and united the Indians.

However, despite the influence of Gandhi, India fell into turmoil. Hindus wanted an all-Hindu state, and Muslims, led by the Muslim League, wanted a separate state. Gandhi was assassinated in the midst of this conflict. In the end, Pakistan was formed as a separate Muslim state. Thus, the strength and will of the common people both achieved Indian independence and tore India apart. The story of Mahatma Gandhi and Indian nationalism is one of history’s great ironies.

PAN-AFRICAN NATIONALISM

Soon after the end of World War II, most European countries were at some stage of ending imperial control of Africa, and Pan-Africanism became dominant on the African continent. Pan-Africanism is a nationalist movement that calls for the unity of all African nations. While it has had huge influence, for example through the African National Congress, it has never succeeded in uniting all of Africa. Division and many of the problems confronting Africa from the end of WWII to the present day can be blamed on European colonialism. Political corruption is rampant because European colonialists left without creating stable governments. Ethnic tension exists because European borders were drawn with no thought given to the tribal system. Tribalism is one of the greatest obstacles facing Africa because traditional rivals were contained within one European-made border. A telling example of ethnic tension is the conflict between the Hutus and Tutsis, in which thousands on both sides were slaughtered and many more fled to Zaire to seek shelter. Both Rwanda and Burundi had significant populations of Hutus and Tutsis, both traditional tribes. Notwithstanding these overwhelming problems, there have been some significant achievements where nationalism has brought about positive change.

The first Arab-Israeli conflict set two nationalist movements against one another. The War of Independence (1948-49) was the failure of the Arab world to prevent Israel from being formed as a sovereign Jewish state. This war resulted in Jerusalem falling under the control of the Israelis and the end of a proposed plan for a free Palestinian state. The Suez War of 1956 resulted in Nasser’s Egypt losing control of the Sinai Peninsula, threatening the stability of the vitally important Suez Canal. In the Six-Day War of 1967, many of the surrounding Arab nations attacked Israel and then proceeded to lose territory (the contested areas listed above) to Israel in under a week. The Yom Kippur War of 1973 was an Egyptian assault across the Sinai and became a Cold War event as the Americans and Soviets got involved. Nasser’s successor, Anwar al-Sadat, was the first Arab leader to recognize Israel as a nation. For this alone, he was assassinated, effectively ending any attempts at lasting peace. The conflict continues today.

Ghana:

During the days of empire-building, the nation now called Ghana was called the Gold Coast, an English colony. The nationalist leader Kwame Nkrumah called on the spirit of the African people by renaming the obviously imperial European “Gold Coast” to a name that harked back to the golden age of western Africa, the Empire of Ghana. A believer in the principles of Gandhi, he established autonomy for Ghana through civil disobedience and passive resistance. Through the resolve and bravery of Nkrumah and the Ghanaian people, Great Britain left. To quote the words of Nkrumah, “No people without a government of their own can expect to be treated on the same level as people of independent sovereign states. It is far better to be free to govern or misgovern yourself than to be governed by anybody else . . .”

Kenya:

The situation in the British colony of Kenya was similar to Ghana’s. The exploitation of Kenyan resources and oppression of its people were the typical traits of British domination. The path to independence, however, was radically different. Kenya’s nationalist leader, Jomo Kenyatta, began his movement by means of passive resistance. However, Great Britain refused to end its imperial rule of Kenya and imprisoned Kenyatta for guerrilla warfare he may or may not have called for. Regardless, the Mau Mau, Kenyan guerrilla fighters, resisted British troops until Great Britain released Kenyatta and left in 1963, with Kenyatta as the prime minister of a free Kenya.

South Africa:

The situation in South Africa was different. It had experienced colonialism, but the country had gained self-rule at the turn of the century. White settlers called Afrikaners controlled the South African government and had imposed a social structure known as apartheid. Apartheid comprised two social classes, white upper and black lower. The races were kept separate and unequal, with the black population suffering terrible abuses. Examples of this abuse include pass cards for blacks only, voting rights for whites only, and segregated reservations called Homelands.

However, the most famous of all African nationalist leaders, Nelson Mandela, spoke out against this discrimination and began his anti-apartheid movement. But Mandela, for taking a stand against apartheid, was imprisoned for many years and not released until the early 1990s. South African president F.W. de Klerk freed Mandela and ended the racist policy. In 1994, South Africa held its first free election and Mandela was elected president. Mandela and de Klerk earned the Nobel Peace Prize together for their efforts.

Canada's Current Immigration Policies

A policy is a plan or course of action that an organized body undertakes to guide decision making and other matters. Immigration policies are meant to guide the immigration of people into a country, for whatever reason. Canada is a country in the northern part of the North American continent. It has ten provinces and three territories. Canada is a constitutional monarchy and a federal parliamentary democracy headed by Queen Elizabeth II. It is a bilingual state with a diverse cultural base, owing to the large influx of immigrants to the country. The country’s economy is among the world’s largest, since it depends on its natural resources and developed trade networks.

Canada has been greatly shaped by immigration in its society and culture. With its small population and large tracts of unoccupied land, Canada’s immigration policy was fuelled by the need for expansion, with immigrants encouraged to settle in rural areas. In the early 20th century the country began to control the flow of immigrants using policies that excluded non-European applicants. In 1976 new laws removed the ethnic criteria, and Canada became a destination for immigrants from a variety of countries.

There are three categories of immigrants: the family class, which consists of those closely related to Canadian residents; independent immigrants, who are admitted on the basis of skill, capital, and labor market requirements; and refugees. When processing applications for settlement, immigration officers are instructed to give priority to family reunifications and refugees before independent job seekers with skill or capital but without families. Arrivals in the family category are usually unskilled, or the skills they possess do not match the communities they have settled in, thus disrupting the labor market. This results in economic insecurity, which can create disappointment and hostility among the immigrants, or among Canadians who feel threatened by the newcomers.

Canada's immigration policy encourages the dispersal of immigrants across the country. Current policy has attempted to encourage immigrants to settle in smaller communities in the less-populated provinces of Canada. The organizations within society that are involved in the formulation of immigration policies and regulations include churches, employers, organized labor groups, and community-based and ethnic organizations. Many of these organizations aim to promote family reunification and to establish financial adjustment schemes.

Canada's policy is non-discriminatory with respect to ethnicity; however, individuals suffering from diseases that pose a danger to the public, those with no clear means of financial support, and criminals and terrorists are excluded. An undetermined number of persons in this undesired category have nevertheless gained entry through back doors, while others who were admitted rightfully on short-term visas choose to remain beyond the time permitted by Canadian law. The number of those entering the country illegally has grown in recent years and has become a major challenge to the government, especially at border crossings and airports. This group usually operates quietly and goes unnoticed until its members try to access some public service, which brings them to the attention of government authorities. The government is working towards sealing any loopholes that have facilitated the admission of persons not authorized under the current regulations and legislation. Falsified claims of refugee status, made to avoid normal overseas screening and processing, constitute one of the more serious problems confronting immigration officials.

To accommodate immigrants, Canada provides them with language training and access to Canada's national health care and social welfare programs. However, the immigrants of the 1980s did not match the economic success of those of the 1990s, and many have difficulty finding jobs that match their qualifications. Some immigrants are not fluent enough in either English or French to be able to make use of their degrees, while other qualifications are not recognized by the country. In employment, the incomes of Canadian-born workers rise at the same rate as those of individuals of European origin, unlike those of non-white Canadians, who receive lower incomes.

The admission of highly skilled professionals to Canada from less developed countries has continued to provoke controversy, since the governments of the countries where these immigrants originate complain of the poaching of people they cannot afford to lose. Canada has maintained the need for freedom of movement of people in the midst of the controversy that it should not encourage the outflow of trained individuals from the regions that require their services.

For immigrants seeking asylum, Canada is known for having a fairly liberal policy. Any person who arrives in Canada can apply for refugee status at any border, airport, or immigration office inside the country. Canada will examine the claim of anyone who arrives and claims to be a refugee, even if that person would not be considered a refugee in other countries. The process is divided into two stages: a claim is submitted to Citizenship and Immigration Canada (CIC), and CIC determines within three days whether the claim is eligible to be referred to the Immigration and Refugee Board, the body that makes the final determination as to whether the applicant will receive protected status. After a person has received refugee status, he or she can apply for permanent residency. This system has been criticized for encouraging backdoor applications and for posing a security threat, since applicants are free to move around as they await their determination.

Canadian policy is divided into two parts: temporary entry to the country and permanent entry. Under temporary entry, one can apply while inside or outside the country. While outside, one applies for a visitor visa when one wishes to enter the country as a tourist or visitor. The purpose of such a visit should be to visit relatives, to attend a business meeting, to attend a conference or convention, to take a pleasure trip, or to participate in a cultural show. The second class is the student authorization, or student visa, which is granted to a person who wishes to come to the country to study as an international student. The third class is the employment authorization, or work permit, which is granted to one who wishes to come to Canada and work for a Canadian company; it is referred to as a work permit visa in many countries. Under any of these classes, one can apply for an extension of one's visa while within the country. While in the country, one may apply for an immigrant visa as a conventional refugee (also referred to as political asylum), a work permit visa as a live-in caregiver (known as a domestic help), an immigrant visa of Canada as a spouse (granted on an application made if one gets married in Canada while on a temporary visa), or an immigrant visa of Canada under humanitarian and compassionate reasons. A change of visa status may lead to a permanent immigration visa of Canada.

One can apply for permanent immigration to Canada under three categories while outside Canada. In the independent class, assessment is done based on a point system. It is a very popular class, also called the professional class or skilled worker class. This category is based on an individual's desire to come to Canada and is assessed on qualifications, work experience, and knowledge of English or French. The second class is the entrepreneur class, investor class, or self-employed class, also known as the business migration class. The entrepreneur and self-employed classes are for individuals who wish to start a business in Canada, while the investor class is for those who do not wish to start a business in Canada. Applying for an immigrant visa to Canada under the family class is for those who have close relatives in Canada under family sponsorship.

Canadian citizens and permanent residents may apply to sponsor their relatives under the family class and through private sponsorship of refugees. Another application is made by a permanent resident who wishes to stay outside Canada for more than six months and wants to return; it is called a returning resident permit. A person can be granted Canadian citizenship provided he or she has been a permanent resident of Canada for more than three years. When applying for proof of citizenship, also called a citizenship certificate, the applicant may do so while within or outside of Canada.

Canada is currently a country of choice for many people from all over the world. That may not be the case in the future, especially for highly skilled people. The current policies have both positive and negative effects on Canadian society. Some of the positive impacts of the current policies include the refocusing of the federal skilled worker program, an initiative to bring skilled trades into the country along with the jobs and investments they bring.

Another positive is increased protection for caregivers who come into the country for nanny or housekeeping jobs. Those who go into foreign countries are often abused by their employers and end up working in deplorable conditions, such as working long hours without time to rest, being deprived of days off, and having vital documents such as passports confiscated. Some also face sexual harassment, which is against the law. Such immigrants face difficult conditions, yet they cannot report them, or if they do, they cannot get help. The current policies have therefore come in handy to protect these individuals from such mistreatment. Permanent resident status is also to be granted to eligible students: those who apply for student visas and perform exceptionally well will be granted permanent residency in Canada after completing their studies. This enables students to acquire citizenship and settle in the country, and it ensures the retention of skilled people to work towards the growth of the economy.

The current policies have helped address the country's short-term labor market needs, because the small population of Canada cannot meet its labor requirements. The immigrants resolve a labor shortage that the country otherwise could not have addressed.

These policies also have their negative sides. In the long term, Canada may no longer be viewed as the welcoming country it once was. One example is the decision to wipe out immigration application backlogs legislatively: the applications of immigrants seeking visas have been denied by immigration officers, preventing serious developments in either the job market or the education sector. Another is the suspension or delay of family sponsorships, which prevents families from coming to reunite with the rest of their relatives. This will deter those who seek to migrate to Canada, for fear of being isolated from their families.

Reliance on temporary foreign workers to meet labor market needs has affected the attitudes of skilled workers who arrive in the country and have not been able to get jobs. Canadian citizens at times feel insecure about the immigration of people into the country, since they view them as a threat to their jobs and opportunities. Hostility has been reported against immigrants, to the extent of some losing their lives, and organized crime has been directed against immigrants to scare them and instill fear in them.

Tightened citizenship requirements have locked out many people who have genuine reasons to apply for citizenship. Some of the requirements have locked out skilled workers and potential job creators. Their jobs would have boosted the economic state of the country, but because they are locked out, vast opportunities are also shut out. There is also a list of origins tagged as safe, whose refugees' claims are checked vigorously to determine whether they are true. This has affected those who genuinely seek to immigrate as refugees.

Mandatory detention of asylum seekers upon arrival, for fear of terrorist or criminal activities, was introduced especially after the 9/11 attack on the US. Asylum seekers are not allowed to move freely before the determination of their pending applications, which usually creates unnecessary anxiety for them.

These policies are made at a rapid pace, and their breadth is unlikely to be understood by the masses. The way the policies interact with each other is also an issue that may impact negatively on society.

Conclusion

The current policies on immigration have impacted Canadian society in both negative and positive ways. Some have been very fruitful for the growth of the economy and the cultural state of the country. The cultural state of the country has been made diverse by the different origins of the immigrants, and economic growth has been made possible by the influx of highly qualified individuals into the job market and the arrival of investors and job creators.

Canada has, however, been accused of poaching the best brains from less developed and developing countries worldwide. In its defense, it has said that there is freedom of movement for all people.

In general, the current immigration policies have helped in several ways for the betterment of society, but they have also introduced some problems for the people living in Canada.

Sex Offenders in the Community

The United States government has rules in place to register the names of sex offenders, but unfortunately seems to overlook the issue of sex offenders living near children. In that respect, there is an injustice in the fact that sex offenders live on the same streets as children without parole officers making this information explicit to the parents. There are many child molesters who, even if they do have a professional job, work near minors. The government has laws that state that a sex offender must be registered, but there are no laws saying that a sex offender cannot live around children. I do not agree with the idea that sex offenders should be allowed to live in communities near children. In order to keep our children safe, child molesters should be banned from living and working near a school.

Realistically, allowing sex offenders to continue living near school systems enables them to target individuals, the majority of which are adolescents. Unknowingly, I worked with a sex offender when I was sixteen. Between the ages of sixteen and eighteen, a different sex offender targeted me. Any child could come into contact with a situation in which she is vulnerable and unaware of the danger. As a young person, one should not have to worry about whether or not he or she will be a victim of rape or sexual assault. I was fortunate enough not to be a victim, but I could have been. There was another situation where I had to stay with my grandparents for a period of time because my parents were fearful of the child molester who lived nearby. These are perfect examples of why we need laws that regulate an offender’s proximity to young children. Individuals should not have to be frightened in their daily life.

According to Understanding Child Molesters, there are a number of ways in which a sex offender may be disciplined, including probation, parole, and incarceration. When an individual decides to assault another person, there are consequences, such as having a parole officer, facing felony or misdemeanor charges, and registering as a sex offender, among many other methods of discipline. Even though a sex offender has to register every year, he is able to continue living in the community. This registration is compiled into an online database, but some individuals may have difficulty accessing this information due to a lack of technology. Sometimes sex offenders even have jobs where they work with minors, and this should be prevented to minimize the risk of a recurring crime.

The Washington Department of Corrections goes into further detail regarding sex offenders who live in our communities. Sex offenders must let their parole officers know where they live, and the parole officers must visit the offenders regularly. Parole officers must be notified if a sex offender moves, and the parole officer must also approve of where the offender lives. From this point, sex offenders must become registered and allow the neighborhood to know that they are living within the community ('Rules'). Registration alone is not sufficient, because having their names on a list will not prevent sex offenders from committing future sexual assaults.

After a person becomes known as a sex offender, he is subject to strict supervision. A parole officer will then monitor the offender for a period of time determined by the court system. Then the offender will register as a child molester, and continue to do so indefinitely. By order of the court, he cannot leave the state. A parole officer will determine whether the sex offender is allowed to live in a particular location. If the offender decides to move, he must also get the approval of the parole officer ('Rules').

The offender's parole officer will ensure that the offender does not have possession of a computer or any other forms of media. Possession of magazines, computers, televisions, phones, or any similar items could give the offender access to pornography. The offender must also not attend any events at adult clubs. Essentially, the offender must stay away from any type of pornography or sexual setting. If an offender decides to date or marry, the potential partner must be notified of the offender's criminal history ('Rules').

In addition to notifying a potential dating or marriage partner, a sex offender must also alert family and friends of the offense. Once a person becomes labeled as a sex offender, the neighborhood must be made aware that a sex offender is living in the community ('Rules'). The public is only notified via a website they can visit if they choose, but this information should be presented to them more explicitly. There are many individuals who do not know how to use a computer. A parole officer should visit the neighbors to discuss safety protocol and other warnings. The offender's address should be shared with all of the local residents, as well as with individuals who find the offense report on the internet. Having the offender's information online is not sufficient; in order to protect children, we must make better efforts to notify the community. Making sure that the public is aware of sex offenders in the community is crucial, and may save the lives of many children.

Sex offenders may be required to attend counseling sessions for a duration of time determined by the court system. The offender must keep the parole officer updated to ensure proper attendance of the sessions. A polygraph may be used on the offender, if necessary. He is required to submit to the polygraph, as well as to any drug tests that may be administered. The offender must also refrain from consuming alcohol or using drugs. Passing a polygraph and remaining drug-free are required to show that the offender is making changes in his life. Ideally, these requirements ensure that the offender will not sexually assault another child.

The offender cannot, by any means, contact the victim of the crime. Possible contact with the victim is one of the reasons why the offender cannot have a phone or a computer. Offenders cannot have any means of communication with their victims, but they still live in communities, near children. Since the offenders cannot contact their victims, it is essential that they not be able to contact other innocent children. Seeking Justice in Child Sexual Abuse explains that 'child abuse is one of the most difficult crimes to detect and prosecute, in large part because there often are no witnesses except the victim' (Staler 3). Unfortunately, many times when a minor is sexually assaulted, there are no witnesses. Having a sex offender near school districts puts more children at risk of being harmed, and ultimately there may not be any witnesses.

In Civil Disobedience, Thoreau argues that breaking laws is sometimes necessary. Thoreau justifies his argument by saying that breaking the law can often be the only thing that changes the mindsets of individuals. In a parallel to Thoreau's theory, we must break the misconception that having sex offenders living near children is perfectly acceptable. Change will not happen unless we, as a community, do something drastic to make it happen (Thoreau).

Unfortunately, children are still placed in danger when sex offenders live near school systems. In Martin Luther King Jr.'s Letter from Birmingham Jail, he comments that his children are afraid of their surroundings. In today's society, children are still afraid of their environment. Martin Luther King Jr. held that one should break a law if he or she deems it 'unjust' (King). I completely agree with King, and in this situation, I feel it is completely unjust to have sex offenders live near children. Ultimately, we cannot simply remove sex offenders from our communities, because they must live somewhere. But, as Martin Luther King Jr. was calm and rational in his approach, I believe that is the best method for the nation to make a difference.

Martin Luther King Jr. and Henry David Thoreau are very similar in the sense that they both want to take a stand for the people, and essentially, do what is morally right. They both agree that if a law is unjust, it needs to be broken. And both men stay determined to break the laws that they deem ‘unjust.’ Neither man is willing to give up on what he believes, yet both men face imprisonment for doing the right thing. If these men can be incarcerated for doing the right thing, perhaps sex offenders can have more severe punishments for doing horrendous acts to children (Thoreau, King).

Both of these men are true inspirations as to how we can handle our disagreements in a rational manner. I do not feel comfortable having sex offenders live near children. We cannot completely remove child molesters from our streets, but there are many other ways to reduce the number of rapes and incidents of sexual abuse. The first possibility is that sex offenders stay imprisoned indefinitely. Yes, that is an unfortunate outcome, but the children who are raped are emotionally scarred for the rest of their lives. So perhaps it would be rational for sex offenders to stay in prison indefinitely.

Another alternative may be to ban sex offenders from living within a certain radius of schools. Either way, a list of sex offenders will still be posted to notify the community. But, in my proposal, there would be more ways of warning everyone. These registries would be made abundantly clear, even to those who may not have access to the existing lists. Not everyone has access to the internet, or knows how to operate a computer. Perhaps, in addition to being posted online as they are now, the lists could also be delivered to each homeowner in a more noticeable manner. Advising the community is the first step in making this situation better. Maybe we cannot eliminate sex offenders from our streets, but we can take better precautions.

I believe that, in order to protect innocent adolescents, it is necessary to take a stand. We, as a community, should make every effort to ensure that children are not put into situations where they are harmed. No child should be raped, sexually assaulted, or murdered. There are simple changes that this country can make at this very moment to ensure the safety of children. Law enforcement can improve its methods of notifying the public that a sex offender is present. Sex offenders can be banned from living within a certain distance of a school system, or they can be incarcerated indefinitely. Child sexual abuse is a very serious issue that we could possibly eliminate, or at least one whose number of victims we could greatly reduce.

Facebook as a learning platform

Abstract

The past decade has seen a growing popularity of social networking sites, and of all those available, Facebook is the one that stands out for being unique and offering a range of user-friendly features. This site has frequently topped the rankings with a record number of memberships and daily users. Facebook is often considered a personal and informal space for sharing pictures, information, and webpages, forming 'Groups', participating in discussions and debates, and commenting on wall posts. The aim of this paper is to explore the use of Facebook as a learning and teaching tool. It will highlight some of the theoretical debates and existing research to understand the effectiveness of this site as an informal and learner-driven space, and the ways in which it empowers students and stimulates their intellectual growth. The conclusion highlights the ongoing contested nature of technological advances and their influence on traditional ideas of teaching and learning.

Keywords: Facebook; Situated Learning Theory; Community of Practice; Connectivist Approach; Personal Learning Environment; Informal Learning; Critical Thinking; Creativity; Communicative Confidence; Collaborative Learning.

Introduction

Over two decades ago, theorists Jean Lave and Etienne Wenger (1991) introduced a theory of learning called 'situated learning' and the concept of the community of practice (CoP from here on) to describe learning through practice and participation. A CoP can be characterized as a group of individuals who share a common interest and a desire to learn from and contribute to the community. Wenger (2010) elaborated the idea by stating that:

Communities of practice are formed by people who engage in a process of collective learning in a shared domain of human endeavor: a tribe learning to survive, a band of artists seeking new forms of expression, a group of engineers working on similar problems, a clique of pupils defining their identity in the school, a network of surgeons exploring novel techniques, a gathering of first-time managers helping each other cope. In a nutshell: Communities of practice are groups of people who share a concern or a passion for something they do and learn how to do it better as they interact regularly.

According to Wenger, a CoP needs to meet three essential characteristics: domain, community, and practice. A CoP has an identity defined by a shared domain of interest. Membership therefore implies a commitment to that particular domain, and a shared competence that distinguishes members from non-members. The community then becomes a way through which members can pursue interest in their domain, engage in collaborative activities and discussions, provide assistance to each other, and share or disseminate information. They build a co-operative relationship that enables them to learn from each other. Wenger terms the members of a CoP practitioners, as they develop a shared repertoire of resources, experiences, stories, tools, and ways of addressing recurring problems. This, in short, can be called a shared practice, which takes time and sustained interaction to develop. It is the combination of these three components that constitutes a CoP, and it is by developing them in parallel that one cultivates such a community (ibid).

Social networking sites are often seen as promoting CoPs. In simple terms, these sites can be defined as 'web-based services that allow individuals to (1) construct a public or semi-public profile within a bounded system, (2) articulate a list of other users with whom they share a connection, and (3) view and traverse their list of connections and those made by others within the system' (Boyd and Ellison, 2008: 211). What makes social networking sites unique is not that they allow individuals to meet new people, but rather that they enable users to articulate and make their social networks visible (ibid). Therefore, social networking can be seen as 'the practice of expanding knowledge by making connections with individuals of similar interests' (Gunawardena et al. 2009: 4). Researchers have frequently concluded that social networking sites are at the core of what is described as an online CoP (Watkins and Groundwater-Smith, 2009).

According to Wong et al. (2011), growth in technology and social networking sites has contributed to increased opportunities to operate in an improved learning environment through enhanced communication and the incorporation of collaborative teaching and learning approaches. Amongst all the social networking sites, Facebook (FB from hereon) is the one that stands out the most. There are a number of reasons why FB can be used for building an online CoP, and ways in which its features are considered unique and suitable for Higher Education purposes:

1) Ability to create a 'Group': FB is equipped with dynamic features, such as messaging and the ability to post videos, weblinks, and pictures. However, the Group is one of the most powerful features on the site, and it can encourage and enhance collaborative learning. Learners can create a Group or join an existing Group related to their interest, and they can use the site's features for sharing information and performing a variety of tasks. FB Group features can build an online CoP, as they meet the three fundamental components of communities (i.e. domain, community and practice) (ibid: 319).

2) Share information: FB features such as Groups, Chats, and Docs enable the sharing of information. Learners can form groups for a specific purpose, post messages, hold discussions and debates, and share resources on a specific domain within the group. The members of a CoP are practitioners, and they can develop a shared repertoire of resources (ibid: 319).

3) Encourage collaborative tasks: The 'Docs' feature on the FB site can help with collaborative tasks, as it allows Group members to work collectively (if required). Any or all group members can view, edit, add, or remove sections of the 'Doc' (ibid: 319).

While the above shows the ways in which FB can be useful in building an online CoP, a more careful analysis is required in order to establish its usefulness as a learning and teaching tool in Higher Education. Therefore, the rest of this paper will draw upon theoretical debates and evidence from the literature to explain the ways in which FB could be a powerful tool, one that could enhance learning and criticality amongst learners, and also boost their communicative confidence.

Why Facebook?

Created in 2004, by the end of 2013 FB was reported to have more than 1.23 billion monthly active users worldwide, and 24 million Britons logged on to the site each day (The Guardian, 2014). Due to its ease of use and availability in the form of mobile applications, FB has now become an integral part of its users' social lifestyle: conventional estimates suggest that a typical user spends around 20 minutes a day on the site, and two-thirds of users log in at least once a day (Ellison et al. 2007). Since its creation, FB has been subjected to immense academic and scholarly scrutiny, especially regarding its uses within educational settings. The initial literature largely focused on the negative aspects associated with its use, such as identity presentation and lack of privacy (see Gross & Acquisti, 2005). It was argued that the amount of information FB users provide about themselves, the (somewhat) open nature of that information, and the lack of privacy controls could put users at risk online and offline, e.g. through bullying, stalking, and identity theft (Gross and Acquisti, 2005). However, constant changes made to the privacy settings have subsequently eased these concerns, as users can now control the release of information by changing their privacy settings. Issues surrounding student perceptions of lecturer presence and self-disclosure (Mazer, Murphy, & Simonds, 2007), and inconsistent patterns of use were also highlighted as potential causes of concern (Golder, Wilkinson, & Huberman, 2007). However, the positive effects of social networking tools in teaching and learning soon took precedence, as these computer-mediated communication modes are often seen as lowering barriers to interaction and encouraging communicative confidence amongst students.
For instance, during a qualitative study at Yale University, members of staff praised FB for breaking down the barriers between themselves and students, and it also encouraged students to feel part of the same academic community (mentioned in Bosch, 2009). Similarly, a study conducted by Ellison et al. (2007) explored maintained social capital, which assesses one's ability to stay connected with members of a community. They concluded that FB usage amongst students is linked to psychological well-being, and that it could especially benefit students with lower self-esteem and low life satisfaction. It could also trigger a process whereby goal attainment amongst students is significantly increased.

The above uses of FB in Higher Education, and its value as a tool for maintaining social capital, can be contrasted with its value as a learning environment. Selwyn (2009) has strongly cautioned against the use of FB for teaching and learning, as students might be reluctant to use it for learning purposes, shifting its focus away from being an academic tool towards being considered purely a site for socialisation and the sharing of mundane information. Selwyn presented an in-depth qualitative analysis of the FB 'wall' activity of nearly 1000 students in a British educational establishment, and his study offered a very pessimistic conclusion. He noted that students did not use the site for educational purposes: their interactions were limited to offering negative comments on learning/lecture/seminar experiences, casual comments about events, sharing factual information about teaching and assessment requirements, seeking moral support for assessment or learning, and even presenting oneself as academically incompetent and/or disengaged (2009:157). The evidence from this study suggests that FB in Higher Education must be approached with great caution, and that lecturers need to use it in a considered, strategic, logical and objective manner (ibid).

It is likely that FB could clash with traditional pedagogical models. Nevertheless, it can provide channels for informal and unstructured learning. For instance, Bugeja (2006:1) suggested that social networking offers the opportunity to 're-engage' individuals with learning and promote 'critical thinking', which is one of the traditional objectives of education (explained further in subsequent paragraphs). Siemens' (2005) connectivist approach also recognises these impacts of technology on learning and ways of knowing. According to him, learning in the digital age no longer depends on the individual obtaining, storing and retrieving knowledge, but instead relies on the connected learning that occurs through interaction with various sources of knowledge and participation in communities of common interest, including social networks and group tasks (Brindley et al., 2009). The shift of focus to the group and the network as the epicentre of learning relies on a concept of learning based on 'exploration, connection, creation and evaluation within networks that connect people, digital artefacts and content' (Manca and Ranieri, 2013:488). This type of learning through socialisation can foster student interest in the subject material. Duffy (2011) proposed that FB could be used for teaching and learning, as it enables students to share knowledge and information with fellow 'Group' members and builds associations between them. Duffy (2011) further argued that FB provides a range of educational benefits by: 'Allowing students to demonstrate critical thinking, take creative risks, and make sophisticated use of language and digital literacy skills, and in doing so, the students acquire creative, critical, communicative, and collaborative skills that are useful in both educational and professional contexts' (p. 288).
This in turn will also help to achieve the Abertay Graduate Attributes: it encourages the development of students' intellectual and social capacity, gives them tools to find creative solutions to real-world problems, and prepares them to work within complex and interdisciplinary contexts. It could trigger intellectual, communicative and collaborative confidence amongst students, train them to take creative risks and help them broaden their knowledge base.

What is particularly fascinating about FB is the fact that it encourages the creation of a Personal Learning Environment (PLE), an emerging pedagogical approach for integrating formal and informal learning, supporting self-regulated learning, and empowering students intellectually (these values are also outlined in the Abertay Strategic Plan). According to Attwell (2010):

PLEs are made up of a collection of loosely coupled tools, including Web 2.0 technologies, used for working, learning, reflection and collaboration with others. PLEs can be seen as the spaces in which people interact and communicate and whose ultimate result is learning and the development of collective know-how. A PLE can use social software for informal learning which is learner driven, problem-based and motivated by interest, not as a process triggered by a single learning provider, but as a continuing activity.

PLEs are spaces for the modern learner to create, explore and communicate, and they are characterised as an approach to learning rather than a set of computer-assisted applications (Dalsgaard, 2006:2). The use of PLEs can help to reinforce classroom learning by extending communication beyond classroom hours (without simply recreating the classroom outside of the classroom), and by encouraging thinking about topics beyond the weekly seminar sessions, both individually and in collaboration with classmates, through posting materials (such as files, website links and notes) and leaving comments. This type of engagement can result in the development of (informal) communities of learning. Collaborative learning, in turn, can lead to deeper-level learning, critical thinking and shared understanding (Kreijns, Kirschner and Jochems, 2003). A study conducted by Churchill (2009) highlighted that online blogs can foster a learning community and make learners feel like an important part of the classroom. The best is achieved from such blogs when they are designed to facilitate student access to course material, the posting of reflections on learning tasks, and commenting on peer contributions. Given that FB is one of the most popular networks and methods of community building through which students today communicate, it can prove a useful tool in collaborative student-led learning (and may prove equally or more beneficial than blogs). Downes (2007) argues that FB is distinctive when compared to other forms of computer-mediated communication because it has stronger roots in the academic community. A report by the UK government body for technology in learning lists several potential uses of FB in education, including developing communities of practice, communication skills, e-portfolios, and literacy, all of which are essential aspects of the Abertay Graduate Attributes.

FB can be used not only to gain knowledge and information, but also to share information as and when needed. McLoughlin and Lee (2007; 2010) have pointed out that 'learning on demand' is becoming a lifestyle in modern society, with learners constantly seeking information to solve a problem or to satisfy their curiosity. Learners should therefore not be considered passive information consumers, but active co-producers of content. This also makes learning highly independent, self-driven, informal and an integral part of University life (ibid). Formal learning is described as highly structured (the kind that happens in classrooms), whereas informal learning happens through observation, listening to stories, communicating with others, asking questions, reflecting and seeking assistance. Informal learning rests primarily in the hands of the learner, and the use of FB could allow learners to create and maintain a learning space that facilitates self-learning activities and connections with classmates and other academic/educational networks (ibid). However, informal learning outside of the classroom must be considered as part of a continuum, rather than an either/or dichotomy (Attwell, 2007). Informal learning can be used to supplement formal learning (not substitute for it), and the PLE as a pedagogical tool should be viewed as an intentional merger of formal and informal learning spaces.

PLEs are increasingly effective in addressing issues of learner control and personalization that are often absent from University learning management systems such as the Virtual Learning Environment (VLE) or Blackboard (Dabbagh and Kitsantas, 2011). VLEs do not accommodate social connectivity tools and personal profile spaces, and they tend to replicate traditional models of learning and teaching in online environments. They create a classroom outside of the classroom, which may explain why educators 'can't stop lecturing online' (Sheely, 2006). VLEs are also largely considered tutor dissemination tools (for lecture notes, readings and assessment-related information) rather than student learning tools. University faculty and administrators control VLEs, and learners cannot maintain a learning space that facilitates their own learning activities and connections with fellow classmates (Dabbagh and Kitsantas, 2011:2). When FB is employed as a learning tool, it moves away from this very hierarchical form of learning and empowers students through designs that focus on collaboration, connections and social interactions. It is much more dynamic and evolved in this sense.

It has long been argued that VLEs have had only a relatively slight impact on pedagogy in higher education, despite their commercial success (Brown, 2010). FB, however, has the potential not only to fundamentally change the nature of learning and teaching but, through the creation of learner-controlled devices, to challenge the role of traditional institutions in a way that previous technologies could not. Brown poses a crucial question regarding VLEs (such as Blackboard), noting that it is 'reasonable to wonder how much longer the return on investment will stand up to scrutiny' (Brown, 2010:8).

Conclusion

FB is increasingly becoming a popular learning platform with genuine potential in HE. A FB 'Group' can facilitate learning by increasing interaction between students and staff. The research so far (though still preliminary in nature) indicates that FB can be used to enhance literacy, critical thinking, and collaborative and communicative skills amongst students. Some researchers have argued that social networking sites such as FB could offer 'the capacity to radically change the educational system... to better motivate students as engaged learners rather than learners who are primarily passive observers of the educational process' (Ziegler, 2007:69). However, this overly optimistic view is strongly contested by others, who have raised grave concerns about heightened disengagement, alienation and disconnection of students from education, and about the detrimental effect that FB may have on 'traditional' skills and literacies (Brabazon, 2007). Academics have feared that FB could lead to the intellectual and scholarly 'de-powering' of students, leaving them incapable of independent critical thought. According to Ziegler (2007:69), sites such as FB could lead to 'the mis-education of Generation M' (cited in Selwyn, 2009), and despite its popularity as an innovative educational tool, studies have indicated that it may distract learners from their studies and become purely a tool for socialisation (ibid). The use of FB remains controversial, and further research is needed in this area to establish its effectiveness in HE teaching and learning.

Causes of drug failure

One of the most common causes of drug failure is drug-induced liver injury (DILI). The majority of these failures are idiosyncratic reactions, which occur in small patient populations (between 1 in 1,000 and 1 in 10,000) in an unpredictable manner [1]. The underlying mechanism of this type of DILI is very complex and still not completely understood [2]. However, recent data suggest that the crosstalk between cytokine-mediated pro-apoptotic signalling and intracellular stress responses mediated by reactive drug metabolites is essential to the comprehension of DILI [3].

Various xenobiotics (e.g. diclofenac) can induce liver damage via the tumor necrosis factor alpha (TNF-α) pathway. Secretion of this major cytokine is initiated by liver macrophages (Kupffer cells) after exposure to bacterial endotoxins (e.g. lipopolysaccharide) [4]. After binding of TNF-α to its receptor (TNFR1), the transcription factor nuclear factor kappa-B (NF-κB) is activated [5]. In resting cells, NF-κB is retained in the cytoplasm by binding to an inhibitor of κB (IκB) complex. TNFR1 engagement activates IκB kinase (IKK), which leads to the phosphorylation and ubiquitination of the IκB complex [6]. Subsequently, this complex is targeted for proteasomal degradation. NF-κB then translocates to the nucleus in an oscillatory way and activates the transcription of several genes which primarily encode survival proteins, such as cellular FLICE-like inhibitory protein (c-FLIP), inhibitor of apoptosis proteins (IAPs) and negative regulator proteins (e.g. A20, IκBα) [7]. After protein synthesis, A20 and IκBα inhibit the function of NF-κB in a negative feedback manner (Figure 1). Modification of TNF-α-induced NF-κB translocation by various compounds is believed to shift the balance between cell survival and cell death.

Furthermore, reactive compound metabolites are capable of modifying cellular molecules, which can lead to intracellular disturbances and eventually to the induction of various stress response or toxicity pathways [8]. These pathways, combined with a decreased capacity for cell damage recovery and protection, enhance the susceptibility of various cells to cell death. Up to now, insufficient studies have been performed to investigate the contribution of the various pathways to DILI, and it remains uncertain which drug-induced toxicity pathways modulate the pro-apoptotic activity of TNF-α signaling in DILI reactions. However, several stress responses are likely involved in the development of DILI. The Kelch-like ECH-associated protein 1 (Keap1)/nuclear factor-erythroid 2 (NF-E2)-related factor 2 (Nrf2) antioxidant response pathway and the endoplasmic reticulum (ER) stress-mediated unfolded protein response (UPR) have been studied in drug-induced toxicity of hepatocytes [2]. The Keap1/Nrf2 pathway is essential in recognizing ROS and/or cellular oxidative stress [6]. Under normal circumstances, Keap1 retains Nrf2 in the cytoplasm and guides it toward proteasomal degradation. Nrf2 signaling is important in the cytoprotective response against ROS, but its role in the TNF-α/drug interaction in idiosyncratic DILI remains unclear.

Furthermore, the ER stress-mediated UPR is a stress response to enhanced translation and/or disturbed protein folding. Should this adaptive response fail, a pro-apoptotic program is initiated to eliminate the injured cell. The exact mechanism and role of the ER stress signalling response in DILI, in relation to TNF-α-induced apoptosis, still remain unclear.

In this research, we hypothesize that stress response mechanisms (e.g. ER stress responses, oxidative stress responses) are involved in the delay of NF-κB nuclear translocation upon exposure to various compounds that modulate NF-κB nuclear translocation.

In this project, a human HepG2 cell line will be used to study the interaction between five different compounds (amiodarone, carbamazepine, diclofenac, nefazodone, ximelagatran) and the cytokine TNF-α. To determine the overall percentage of cell death, a lactate dehydrogenase (LDH) assay will be performed. Furthermore, in order to quantify the number of apoptotic cells, an Annexin V affinity assay will be executed. It is expected that the concentration-dependent toxicity of the compounds is enhanced in the presence of TNF-α. Live cell imaging with HepG2 GFP-p65 cells will be used to follow NF-κB translocation after exposure to the five compounds. Subsequently, automated image quantification of the nucleus/cytoplasm ratio of p65 signal intensity will be performed to determine the exact onset of the second nuclear entry of NF-κB. It is expected that the NF-κB translocation data will show a compound-induced delay in this onset.
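The automated quantification step above reduces, for each cell in each frame, to the ratio of mean nuclear to mean cytoplasmic GFP-p65 intensity. A minimal sketch of that computation is given below; the function and variable names are illustrative assumptions, not taken from the study's actual imaging pipeline:

```python
def p65_nuc_cyt_ratio(intensities, nucleus_mask):
    """Mean nuclear / mean cytoplasmic GFP-p65 intensity for one segmented cell.

    intensities  -- flat list of pixel intensities belonging to the cell
    nucleus_mask -- flat list of booleans, True where a pixel lies in the nucleus
    """
    nuc = [v for v, m in zip(intensities, nucleus_mask) if m]
    cyt = [v for v, m in zip(intensities, nucleus_mask) if not m]
    if not nuc or not cyt:
        raise ValueError("cell needs both nuclear and cytoplasmic pixels")
    return (sum(nuc) / len(nuc)) / (sum(cyt) / len(cyt))
```

Tracking this ratio frame by frame yields a per-cell translocation trace; the time point at which the ratio rises again after the first oscillation would mark the onset of the second nuclear entry.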

The activation of the NF-κB target genes cIAP and c-FLIP will be measured using Western blot analysis. Moreover, the negative regulators of NF-κB, A20 and IκBα, will be studied to investigate the negative feedback loop of NF-κB. We anticipate that the Western blot data will show a decrease in the production of the investigated target genes, because of the reduced TNF-α-induced NF-κB transcriptional activity.

Finally, the results will be analysed statistically using a t-test or, in the case of multiple comparisons, a two-way analysis of variance (ANOVA).
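As a sketch of the planned two-sample comparison, Welch's t statistic (which does not assume equal variances) can be computed with the standard library alone; in practice a statistics package would be used for the t-test and the two-way ANOVA. The data values below are invented for illustration only:

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples with unequal variances."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)   # sample variances (n-1 denominator)
    se = math.sqrt(va / na + vb / nb)                 # standard error of the mean difference
    return (mean(sample_a) - mean(sample_b)) / se

# Invented example: LDH release (% of control) without and with TNF-alpha co-treatment
compound_only = [12.1, 10.8, 11.5, 12.7]
compound_tnf  = [21.9, 24.3, 22.8, 23.5]
t = welch_t(compound_tnf, compound_only)  # a large positive t suggests enhanced toxicity
```

The statistic would then be compared against the t distribution with Welch-Satterthwaite degrees of freedom to obtain a p-value.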

Karma by Khushwant Singh

The short story ‘Karma’ was written in 1950 by Khushwant Singh, an Indian novelist who wrote in English.

The short story is 65 years old today, but many of the issues it raises are still relevant.

The story deals with problems in Indian culture. It shows the impact the British Empire has had on India and how British norms have influenced Indian society.

It shows that there is a deep divide between women and men in India, seen in the way men look at women, as well as a wide gulf between rich and poor; in the story, men and women do not even sit in the same part of the train.

The story takes place on a train.

The main character is Sir Mohan Lal, an Indian man who thinks himself as handsome and refined as the English; in fact, he thinks of himself as an Englishman.

He believes he is better than the Indians.

He desperately tries to fit in with the Englishmen.

Sir Mohan is very well educated: he works as a vizier and barrister, and he studied in England, which may be why he thinks of himself as an Englishman. He considers himself a good-looking man; at one point in the story he looks in the mirror: ‘Distinguished, efficient – even handsome. That neatly trimmed moustache – the suit from Savile Row, the carnation in the buttonhole.’ This shows that he is proud of himself and knows which image he wants to present to other people, but also that he only speaks this way to himself.

Sir Mohan Lal is obsessed with how other people think of him. He will do anything to get to know an Englishman. On the train he meets many Englishmen, and he always carries an old copy of ‘The Times’, which shows how desperately he wants to get in touch with them, and also that he wants to display his good education and present himself as a man of manners and English culture. He feels like an Englishman and not an Indian; he thinks that Indian people are poor and not like him. He will not be seen with any of them, and he would not be seen with his wife either.

In the short story we also meet his wife, an Indian woman. He does not love her and thinks she is ugly; the only reason he is married to her is that he wants to have children.

This illustrates the problem we have read about in class, where many marriages are arranged and the married couple do not love each other. Sir Mohan Lal makes her travel in the zenana (a section of the train reserved for women).

On the train Sir Mohan Lal meets two English soldiers with whom he wants to travel and talk, so he tells the guard that they can sit in his coupé. He should never have done that: the men were not looking for an Indian man to talk to, and they see themselves as better than Sir Mohan Lal, just as he had looked down on the Indian people before. Now he learns how it feels not to be an accepted person.

Karma is when something you have done comes back to you, and in this story it certainly does.

Human Resource Management and Employee Commitment

The concept of employee commitment lies at the heart of any analysis of Human Resource Management. Indeed, the rationale for introducing Human Resource Management policies is to increase levels of commitment so that positive outcomes can result. Such is the importance of this construct. Yet, despite many studies on commitment, very little is understood about what managers mean by the term ‘commitment’ when they evaluate someone’s performance and motivation. Two major theoretical approaches to the development of organizational commitment emerge from previous research. Firstly, commitment is viewed as an attitude of attachment to the organization, which leads to particular job-related behaviors. The committed employee, for example, is less often absent, and is less likely to leave the organization voluntarily, than less committed employees.

Secondly, one line of research in organizations focuses on the implications of certain types of behaviors for subsequent attitudes. A typical finding is that employees who freely choose to behave in a certain way, and who find their decision difficult to change, become committed to the chosen behavior and develop attitudes consistent with their choice. One approach emphasizes the influence of commitment attitudes on behaviors, whereas the other emphasizes the influence of committing behaviors on attitudes. Although the ‘commitment attitude to behavior’ and ‘committing behavior to attitude’ approaches emerge from different theoretical orientations and have generated separate research traditions, understanding the commitment process is facilitated by viewing these two approaches as inherently inter-related. Furthermore, by virtue of commitment, the human resource management department can fully utilize the talent, skill and efficiency of employees in a productive way, fulfilling both the personal goals of the employees and organizational goals. Moreover, commitment helps fulfill the purpose of training imparted to employees, because the skills gained through training cannot be maintained without commitment. Finally, adequate commitment amongst employees creates a positive work culture in which all employees can be motivated and encouraged towards the excellent performance of their duties.

3.5 Social Support: its Concept, Purpose, Types, and Relations with Social Networks and Social Integration

3.5.1 Concept of Social support

Social support is defined as the belief that one is cared for and loved, esteemed and valued. It is a strategic concept, not only for understanding the maintenance of health and the development of (mental and somatic) health problems, but also for their prevention. Types and sources of social support can vary. The four main categories of social support are (i) emotional, (ii) appraisal, (iii) informational and (iv) instrumental support. Social support is closely related to the concept of the social network: the ties to family, friends, neighbors, colleagues, and others of significance to a person. Within this context, social support is the potential of the network to provide help.

It is important for organizations to collect information on social support among their employees, to enable both risk assessment and the planning of preventive interventions at different levels:

a) Lack of social support increases the risk for Organizational Commitment:

Lack of social support has been shown to increase the risk of both mental and somatic disorders, and seems to be especially important in stressful life situations. Poor social support is also associated with increased mortality. Social support may affect health through different pathways, i.e. behavioral, psychological and physiological pathways.

b) Social support is determined by individual and environmental factors:

Social support is determined by factors at both the individual and the social level. Social support in adulthood may to some extent be genetically determined. Personality factors that might be associated with perceived social support are interpersonal trust and social fear. The position of a person within the social structure, which is determined by factors such as marital status, family size and age, will influence the probability of their receiving social support. The occurrence of social support also depends on the opportunities that an organization creates for engagement with its employees.

c) Preventive interventions stimulate social support at different levels:

There are three types of preventive interventions aimed at stimulating social support: universal, selective and indicated interventions. The ultimate goal of universal interventions is to promote health; they aim at providing social support at the group or community level. Selective interventions aim to strengthen social skills and coping abilities through, for example, social skills training; social support groups and self-help groups are other examples of selective prevention programs. Indicated prevention programmes aim to reduce the risk that people who already have symptoms of psychological stress will develop a mental disorder.

Social support can be defined as help in difficult life situations. It is a concept that is generally understood in a spontaneous sense, as the help from other people in a difficult life situation, or as ‘the individual belief that one is cared for and loved, esteemed and valued, and belongs to a network of communication and mutual obligations’. In spite of these widely accepted definitions, there is little consensus in the literature about the definition, and consequently the operational implementation, of the concept. There is a need for further research, especially about what kind of support is most important for organizational commitment. The researcher applied a social support score that is the sum of the raw scores for each of the items; in the Guwahati Metro region, the sum-score of the Social Support Scale falls within a fixed range, and a score is classified as indicating poor support, intermediate support or strong support.
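A minimal sketch of the sum-score classification described above is given below. The text does not state the actual cut-off values of the Social Support Scale, so the thresholds used here are hypothetical placeholders, not the scale's real boundaries:

```python
# Hypothetical cut-offs; the real scale boundaries are not given in the text.
POOR_BELOW = 20     # sum-scores below this count as poor support
STRONG_FROM = 30    # sum-scores at or above this count as strong support

def support_category(item_scores, poor_below=POOR_BELOW, strong_from=STRONG_FROM):
    """Classify a respondent from the sum of raw item scores."""
    total = sum(item_scores)
    if total < poor_below:
        return "poor support"
    if total < strong_from:
        return "intermediate support"
    return "strong support"
```

With the true cut-offs substituted in, the same function would yield the poor/intermediate/strong percentages reported per group in the study.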

3.5.2 Purpose of Social Support

The researcher considers that, in defining social support, both the quality of support as perceived (satisfaction) and the support actually provided to managerial employees are significant here. Most studies are based on the measurement of subjectively perceived support, whereas others aim at measuring social support in a more objective sense. One could also distinguish between support received and support expected when in need, and between event-specific support and general support. Defining social support in terms of a subjective feeling raises the question of whether it reflects a personality trait rather than the actual social environment (Pierce et al., 1997). Most researchers will agree that the person as well as the situation affects perceived social support, and that the concept deals with the interaction between individual and social variables. In the present study, the researcher has tried to observe the percentages of male and female managerial employees with poor, intermediate and strong support in public and private organizations of the Guwahati Metro region.

3.5.3 Types of Social Support

Types and sources of social support may vary. The four major categories of social support, namely emotional, appraisal, informational and instrumental support, are used in this research work, and the researcher tried to observe them in her study.

a) Emotional support generally comes from family and close friends and is the most commonly recognized form of social support. It includes empathy, concern, caring, love and trust.

b) Appraisal support involves transmission of information in the form of affirmation, feedback and social comparison. This information is often evaluative and can come from family, friends, coworkers, or community sources.

c) Informational support includes advice, suggestions, or directives that assist the person in responding to personal or situational demands.

d) Instrumental support is the most concrete direct form of social support, encompassing help in the form of money, time, in-kind assistance, and other explicit interventions on the person’s behalf.

3.5.4 Social Support & Concept of a Social Network

Social support is closely related to the concept of a social network, or the ties to family, friends, neighbors, colleagues, and others of significance to the person. However, whereas the social network is described in structural terms, such as size, range, density, proximity and homogeneity, social support normally refers to the qualitative aspects of the social network. Within this context, social support is the potential of the network to provide help in situations when needed. However, the social network may also be the cause of psychological problems.

Halle and Wellman present the interplay between social support, the social network, and psychological health in a model: the social network as a mediating construct. This model shows that social support can be seen as resulting from certain characteristics of the social network, which are in turn caused by environmental and personal factors. The model suggests that it is important to distinguish between the structural and quantitative aspects of the social network on the one side, and social support on the other. In this study, the researcher has correlated stress and social support with organizational commitment, taking into consideration managerial employees of the public and private sectors in the Guwahati Metro region.

3.5.5 Social integration and Social Support

Whereas the concept of social support mainly refers to the individual and group level, the concept of social integration can refer to the community level. A well-integrated community has well-developed supportive relationships between people, with everybody feeling accepted and included. A related concept is social capital, which is often used to denote the sum of supportive relationships in the community. Social capital may, however, also be used in a somewhat different meaning, such as ‘solidarity’. It is important for the development of organizational commitment.

Organizational commitment, in the fields of Organizational Behavior and Industrial/Organizational Psychology, is, in a general sense, the employee’s psychological attachment to the organization. It can be contrasted with other work-related attitudes, such as job satisfaction, defined as an employee’s feelings about their job, and organizational identification, defined as the degree to which an employee experiences a ‘sense of oneness’ with their organization. Nobel laureate Amartya Sen said that the sense of oneness in every individual should be ‘dynamic’ and not confined within the narrowness of a single identity. People have to judge contextually what oneness means in the several aspects of their lives; a person cannot have just one identity of oneness based on nationality or religion.

Organizational studies encompass the systematic study and careful application of knowledge about how people act within organizations. Organizational studies are sometimes considered a sister field for, or an overarching designation that includes, disciplines such as industrial and organizational psychology, organizational behavior, human resources, and management.

However, there is no universally accepted classification system for such subfields. Beyond this general sense, organizational scientists have developed many distinct conceptions of organizational commitment. The present study combines higher-level employees’ stress and social support and their effects on organizational commitment. The researcher selected the Guwahati Metro region for the study, which is designed around two types of organizations, i.e. public and private organizations.

Climate Effect on Building Facade

Abstract: The building facade is one of the important elements of architecture. It has a significant effect on energy conservation and the comfort of the building’s users. The facade is affected by environmental conditions, and its design should take into consideration the climate of its region. This research will explain facade treatments in different regions and the basic methods for designing high-performance building facades, and it will present two case studies that illustrate facade design methods for two different climate conditions.

Content

1. Introduction

2. Literature Review

3. Research Discussion and Data Analysis

3.1. Design Criteria For Mixed Climate

3.2. Design Criteria For Hot Climates

4. Conclusion

5. References

1. Introduction

Climate always affects our daily life; whether it is sunny, cloudy, or rainy, it influences our sense of comfort when we go outside the building. When we are inside, the building separates us from the outer environment and has its own conditions, depending on the technology inside the building, such as HVAC systems, which allow us to change the temperature or humidity, etc. Buildings protect us from weather conditions we would rather not stay out in. A building’s interior conditions also depend on the exterior facade treatment; for example, the heat or light that comes through glazed windows will affect the temperature of the interior.

This research will explain the influence of climate on the building facade and the main factors that affect the facade of a building. It will also cover techniques for treating the facade to provide a suitable interior environment for its users in a given climate condition, how the facade can be designed in a simple way to adapt to changes in climate, and how facade material selection helps adapt the building to climate conditions.

2. Literature Review

Throughout history, humans have used shelter to protect themselves from dangers such as wild animals and climatic conditions. Later, with human evolution, dwellings developed: what was once a cave in a mountain became buildings in various forms and functions. Buildings provide the foundation for our daily activities, for example educational, commercial, and health care, etc.

Climate is generally the weather conditions of a region, such as temperature, air pressure, humidity, precipitation, sunshine, cloudiness, and winds, throughout the year, averaged over a series of years (n.d., The American Heritage New Dictionary of Cultural Literacy). Every region has its own climatic characteristics that can affect the architectural facade differently. For example, in warm areas like the Middle East region, people avoid the glare and heat of the sun, as demonstrated by the decreasing size of the windows. On the other hand, in northern Europe, glass is used extensively to allow sunlight to enter the building and heat the interior space because of the cold weather of that region (2010).

3. Research Discussion And Data Analysis

A facade is generally one exterior side of a building, usually, but not always, the front side of the building (n.d., 2011). The building facade acts as a skin that wraps around the building and affects the internal environment as it interacts with the external one. Building facades are not only about the aesthetics of the building; they also perform as the barrier that separates a building’s interior from the external environment. Facades are one of the most important contributors to the energy consumption and comfort norms of any building. Facade design and performance are among the main factors for sustainable, energy-efficient, and high-performance buildings. A facade should satisfy the design as well as the functional requirements. The climate of the area plays a major role in designing the facade; different design strategies are required for different climatic zones. One traditional way of dealing with the climate in the Middle East is the use of small openings and Mashrabiya (or Roshan) to cover the windows. These techniques, which characterized the facades of this region, were used to prevent heat from entering the building, to trap cool air inside the building, and to filter dust from the air (Mady, 2010).

3.1. Design Criteria For Mixed Climate

The Center for Urban Waters is a public laboratory building in Tacoma, Washington. Tacoma is in a region with a mixed marine climate. The building was designed by Perkins+Will and received a LEED Platinum award.

Figure 1 shows average daily temperatures and the solar radiation for each month.

The temperatures in this climate zone allow cooling by natural ventilation, and the winters are quite mild with low solar radiation. Under these climate conditions, using a reasonable amount of glazing on the south and west orientations will not negatively affect a building’s energy performance.

This view of the building shows the west and south facades. It shows the different treatments for the different sides of the building.

– The west facade consists of an aluminum-clad rain screen system, with integrated windows (some operable, some not) and exterior blinds.

– The south facade consists of a curtain wall of fritted glass and external horizontal shading devices.

The building is located on an industrial waterfront on a long, narrow site. The program elements are located according to their needs for air and natural ventilation. The waterside of the building provides fresh, cold air, which is ideal for ventilation, so the designers placed the offices on the waterway side. On the road and industrial side, the opportunity for fresh air is reduced, so the designers placed the laboratories there because of their need for mechanical ventilation.

The shading strategies used are based on facade orientation. The western orientation of the building receives the greatest solar heat gain, so it was designed with a low window-to-wall ratio; vertical shading devices are used to moderate the solar heat gain and glare from the low afternoon sun. The south facade consists of a curtain wall that provides clear views of the waterside, while horizontal shading devices block solar heat gain. The north facade mainly consists of solid elements and minimal amounts of glass. This design approach improves thermal resistance, limiting heat transmission from the exterior to the interior environment. The rain screen on the east facade, made of horizontal corrugated metal panels, faces the industrial side. It covers the upper half of the second and third levels, with small windows opening onto the corrugated metal screens. These aluminum screens help manage the early morning sun and reduce its potential glare while maintaining exterior views and maximizing natural daylighting of the interior spaces. The building uses natural ventilation to decrease its energy loads and controls the amount of natural ventilation through operable windows.

In summary, the Center for Urban Waters design incorporates many sustainable elements, not only in the facade but also in the roof, sewage, and mechanical systems; see the building section in Figure 3. These sustainable systems raise the building’s performance and its real-time energy use (Aksamija, 2014).

3.2. Design Criteria For Hot Climates

The Student Services Building at the University of Texas at Dallas is located in Texas, USA, in a hot climate region. It was designed by Perkins+Will and received a LEED Platinum award.

Figure 4 shows annual average daily temperatures in relation to the thermal comfort zone and the available solar radiation.

In designing the facade of this building, the main concern was the hot climate, because in this region the weather is usually hot and sunny in the summer season, while the other seasons are relatively mild.

The longer sides of the rectangular building face the north and south orientations. All sides of the building are covered by a curtain wall. In addition, shading devices supported by the curtain wall wrap the east, west, and south facades and a small part of the north facade. The shading system consists of horizontal terra-cotta louvers and vertical stainless steel rods (Figure 5). The shading devices are distributed around the building, creating an asymmetrical pattern over the building facades; the terra-cotta shading elements are important for reducing solar heat gain in the hot summer.

In the interior of the building, three internal atriums provide daylight to the interior spaces (Figure 6).

The lobby is located on the east side of the building in one of the atriums; it provides natural daylight and limits heat gain.

This design strategy is suitable for hot climate regions, especially for reducing solar heat gain while providing natural daylight to the interior spaces. The arrangement of shading devices along the facade and the internal atriums is ideal for providing natural daylight. Almost all of the spaces in the building have views to the outside. The building also contains other sustainable design strategies that improve energy efficiency and the comfort of the interior spaces (Aksamija, 2014).

4. Conclusion

Designing the facade is important because it is the connection between a building’s exterior and interior. Architects have to take into consideration the building’s location and climate to create high-performance facades, to provide sustainable and comfortable spaces for building occupants, and to significantly reduce a building’s energy consumption. Strategies differ depending on the geographical and climatic region, so criteria that work best in hot climates are different from those in hot and humid or cold regions. Architects should know the characteristics of each climatic condition and location, as well as the program and functional requirements, to create a sustainable facade fit to its environment.

Online Behavioral Advertising (OBA)

In order to understand where consumers’ online privacy concerns originate, it first needs to be noted what OBA is and what the main mechanism behind it is. It is of great importance to note that the main mechanism behind OBA is cookies. These cookies, in turn, cause privacy concerns among consumers.

1.1 Online behavioral advertising

Online advertising is the provision of content and services for free, from website publishers to website visitors. In this case, advertisements are aimed at everyone visiting the websites (networkadvertising.org, 2012). However, there is a type of online advertising specifically aimed at providing tailored advertisement content to a specific customer. This type of advertisement is known as Online Behavioral Advertising. Online behavioral advertising is the practice of gathering information regarding someone’s activities online. This data is used to determine which form and content of advertising to display to the website visitor (McDonald & Cranor, 2009). This practice provides advertisements on the websites the individual visits and, using the collected data, makes them relevant to their specific interests (Leon et al., 2012). When individuals subsequently visit a website that correlates with their specific interests, suitable advertisements will be provided.

Consumers can control OBA by applying tools, including those of self-regulatory programs. If these tools are applied appropriately, the consumer can gain more control over self-disclosure. Tools to control OBA include, for instance, opt-out tools, built-in browser settings, and blocking tools (Leon et al., 2011). Tools such as Do Not Track headers signal to websites that the visitor does not want to be tracked. Opt-out tools, on the other hand, allow the user to set opt-out cookies for multiple advertising networks. The issue that arises in the latter case is that if a consumer chooses to opt out, the advertising network will stop showing customized advertising but will keep tracking and profiling the website visitor (Leon et al., 2011). The continuation of tracking and profiling website visitors has caused considerable privacy concerns among consumers. This situation shows a high correlation with the case of NPO. NPO did not make the consumer aware of an opt-out option, which is expected to create even more privacy concerns (Combée, 2013).
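The Do Not Track signal mentioned above is simply an HTTP request header. A minimal server-side sketch of honoring it might look like the following (the function name is illustrative, not a real library API):

```python
# Minimal sketch of honoring the Do Not Track ("DNT") request header.
# "DNT: 1" means the visitor asks not to be tracked.
def should_track(headers: dict) -> bool:
    """Return True only when the request carries no DNT: 1 header."""
    return headers.get("DNT") != "1"

print(should_track({"DNT": "1"}))  # False: visitor opted out of tracking
print(should_track({}))            # True: no preference expressed
```

Note that, as the text above explains, honoring the signal is voluntary: nothing in the protocol forces the receiving network to actually stop tracking.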

1.2 Cookies

The most important feature of OBA is the utilization of cookies. Third-party HTTP cookies are the main mechanism used for online tracking. In contrast to first-party cookies, which are set by the domain the website user is visiting, third-party cookies are set by a different domain, such as an advertising network. Other cookies, such as Flash cookies and HTML5 local storage, remain on the user’s PC even if the website visitor deletes cookies or changes browsers (Krishnamurthy & Wills, 2009; Ayenson et al., 2011; Dahlen & Rosengren, 2005).
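The first-party versus third-party distinction above comes down to comparing the domain that sets the cookie with the domain the user is visiting. A simplified sketch (real browsers apply more elaborate eTLD+1 rules, so treat this as an illustration only):

```python
# Simplified first-party vs third-party check: a cookie is "third-party"
# when the domain setting it is neither the visited domain nor one of
# its subdomains. Real browsers use eTLD+1 matching; this is a sketch.
def is_third_party(page_domain: str, cookie_domain: str) -> bool:
    return not (cookie_domain == page_domain
                or cookie_domain.endswith("." + page_domain))

print(is_third_party("example.com", "example.com"))      # False (first-party)
print(is_third_party("example.com", "ads.example.com"))  # False (same site)
print(is_third_party("example.com", "ads.tracker.net"))  # True  (third-party)
```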

Cookies are directly linked to OBA because, as explained earlier, OBA uses third-party cookies to provide customized advertisements. A cookie is a small string of characters in the form of numbers and letters, for example: lghinbgiyt7695nb. The server gives the cookie a unique code. These strings are downloaded onto an individual’s web browser when they access most websites (Zuiderveen Borgesius, 2011). Cookies enable websites to recognize visitors whenever they return to a website. Only the server that sent the cookie can read, and therefore utilize, that cookie. These cookies are vital for offering a more customized experience (youronlinechoices.com, 2015).
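The round trip described above, where the server issues a unique code and later recognizes the visitor when the browser echoes it back, can be sketched with Python's standard `http.cookies` module (cookie and function names here are illustrative):

```python
# Sketch of the cookie mechanism: the server issues a random identifier
# in a Set-Cookie header and recognizes the visitor when the browser
# sends it back in a Cookie header.
import secrets
from http.cookies import SimpleCookie

def issue_cookie() -> str:
    """Server side: build a Set-Cookie header with a unique random ID."""
    cookie = SimpleCookie()
    cookie["visitor_id"] = secrets.token_hex(8)  # unique code, e.g. 'a3f9...'
    return cookie.output(header="Set-Cookie:")

def recognize(cookie_header: str, known_ids: set) -> bool:
    """Server side: check whether a returning Cookie header is one we issued."""
    cookie = SimpleCookie()
    cookie.load(cookie_header)
    morsel = cookie.get("visitor_id")
    return morsel is not None and morsel.value in known_ids

# First visit: the server sets the cookie and remembers the ID.
set_header = issue_cookie()
visitor_id = SimpleCookie(set_header.split(":", 1)[1])["visitor_id"].value
known = {visitor_id}

# Return visit: the browser echoes the cookie back; the server recognizes it.
print(recognize(f"visitor_id={visitor_id}", known))   # True
print(recognize("visitor_id=somebody-else", known))   # False
```

Because the identifier is stored only in that visitor's browser and in the issuing server's records, only the server that sent the cookie can make sense of it, which matches the point made above.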

1.2.1 Types of Cookies

There are different types of cookies. The most important cookies relevant to this research are discussed here; the selection is derived from the cookies used by NPO. There are two different categories of cookies. First-party cookies are cookies that make sure the website functions optimally. The behavior of the website visitor is tracked within one website, the website the consumer visits. Third-party cookies, on the other hand, are placed by third parties, for instance so that the website can be analyzed by Google Analytics. This type of cookie makes sure the website visitor will receive customized advertisements (Zuiderveen Borgesius, 2011).

First-party cookies (npo.nl, 2015):

- Functional cookies: cookies that make the website function as it should. These cookies keep track of the website visitor’s preferences and remember that the individual previously visited the website.

Third-party cookies (npo.nl, 2015):

- Analytics cookies: cookies to measure utilization of the website.

- Social media cookies: cookies to share the content of the NPO website through social media. The videos and articles opened on the website can be shared through buttons. To make these buttons function, social media cookies are used by different social media parties, allowing them to recognize the website visitor whenever they want to share an article or video.

- Advertisement cookies: cookies to show Star adverts. These advertisements are placed by the website owner or by third parties on the website owner’s site.

- Recommendation cookies: cookies to make more suitable recommendations. The NPO wants to make suggestions to website visitors on other programs for consumers to watch online.

The main information these cookies store includes:

- Keeping track of visitors on their webpages

- Keeping track of the time a visitor spends on the site

- Which areas the website should take notice of in order to improve

- Keeping track of the order of visits to different webpages within the website

If this information is gathered, the data can be added to existing profile information. In time, third parties will be able to create a personal profile of the consumer, even though there is no name attached to it. Today, third-party tracking is the subject of privacy debates (Zuiderveen Borgesius, 2011). Consumers can feel their privacy is invaded if they suspect digital marketers of creating a personal profile from information gathered as they visit websites. Third-party tracking and consumer privacy receive a significant amount of attention from the government and consumer protection bodies (Zuiderveen Borgesius, 2011).

1.2.2 Cookie use by marketers

Since the law on privacy regulations is updated continuously and there is no uniform law concerning consumer privacy, marketers are advised to weigh the benefits of using practices that do not fully conform to privacy regulations against the financial and reputational risks that come with them (Chaffey & Ellis-Chadwick, 2012; Zuiderveen Borgesius, 2011). The organization must properly inform website visitors of the reasons for and procedure of data collection. The marketer’s website needs to provide its visitors with information on how it will make use of a visitor’s data. Next to that, the consumer has to give consent for the utilization of consumer data. The figure below indicates the issues that should receive considerate attention when a data subject is informed about how his or her data will be utilized. These issues are described below the figure.

Figure 1. Information flows that need to be understood for compliance with data protection legislation.

Source: D. Chaffey and F. Ellis-Chadwick, Digital Marketing, 2012, p. 163

- Whether the consumer will receive future communications.

- Whether the data will be passed on to third parties, with consent explicitly required. Referring to section 2.1 on privacy and the recommendation section on privacy issues regarding NPO, it can be observed that NPO did not comply with obtaining explicit consent from the website visitor, which caused their bad publicity.

- The length of data storage. Referring to the models in section 2.3, confidence, knowledge, and control are major indicators of consumer behavior regarding OBA.

According to marketingsherpa.com (2011), a business making use of OBA has to know whether it properly understands its application. It is important to adopt a ‘cookie audit’. A cookie audit is the practice of understanding which types of third-party tracking systems exist and which are placed on consumers’ browsers when they visit the company’s website. This is important because third-party tracking can slow down a company’s website. Next to that, information obtained from customers can leak out to unknown companies.

Furthermore, it is important to clearly give website visitors the option to opt out and to provide them with information on any form of tracking. First, the website visitor needs to be aware of what the website is about. Second, the consumer needs to be provided with information about the substance of the ads. Last, the website visitor should get the ability to learn more about how to opt out.

An opt-out means a company will stop collecting and utilizing information from different web domains for the purpose of providing personalized advertising based on data gathered using third-party cookies in OBA. However, it should be made clear to the website visitor that opting out does not necessarily mean they will stop receiving online advertising. The website visitor will continue to receive advertisements, just not tailored to their specific preferences (networkadvertising.org, 2012; youronlinechoices.com, 2009). Some companies make use of Flash cookies. These cookies make regular cookies come back to life after the website visitor has deleted them. The new cookie gets the same code as the one the website visitor removed (Soltani, 2009).
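The behavior described above, where an opted-out visitor is no longer profiled but still sees (generic) advertising, can be sketched as a check the ad network runs on each request. The cookie names are assumptions for illustration; real networks use their own:

```python
# Sketch of an ad network honoring an opt-out cookie: the tracking code
# first checks for the opt-out marker and, if present, skips profiling
# but still serves a generic (non-personalized) ad.
from http.cookies import SimpleCookie

OPT_OUT_NAME = "optout"  # assumed cookie name; real networks vary

def handle_request(cookie_header: str, profile_store: dict) -> str:
    cookie = SimpleCookie()
    cookie.load(cookie_header)
    if OPT_OUT_NAME in cookie:
        # Opted out: no profiling, but advertising itself continues.
        return "generic ad"
    visitor = cookie.get("visitor_id")
    if visitor is not None:
        # No opt-out: record the visit and personalize.
        profile_store.setdefault(visitor.value, []).append("page-visit")
        return "personalized ad"
    return "generic ad"

profiles: dict = {}
print(handle_request("visitor_id=abc123", profiles))            # personalized ad
print(handle_request("visitor_id=abc123; optout=1", profiles))  # generic ad
```

Note the asymmetry the text criticizes: whether the network actually stops writing to `profile_store` when the opt-out cookie is present is entirely up to the network's own code, which is why some networks keep tracking after an opt-out.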

In addition, it is of great importance to give website visitors control over their data. 67% of website visitors place more trust in transparent brands; this confidence makes a purchase 36% more likely than if a brand is not transparent. Companies that do not obey privacy regulations have also shown decreases in turnover (Brown, 2009). Furthermore, it is important to provide measures for website visitors to manage cookie tracking and privacy. The website visitor should very easily know what the purpose is of the data obtained from them and, as explained earlier, should also have a quick option to opt out (marketingsherpa.com, 2011).

1.2.3 Drawbacks of cookie use

Netscape Navigator, the first successfully implemented web browser, introduced cookies; version 1.0 of the browser, released in 1994, included them (Turnbull, 2013). Even though cookies were introduced almost 20 years ago, until recently two-thirds of the samples used in research were not even able to explain what a cookie actually is. Even now, customers believe more data is collected from them than is actually the case. Next to that, consumers do not understand who is involved and how these companies are involved in OBA, nor is there an understanding of the technologies present (Ur et al., 2012).

Next to that, the majority of web users do not know about opt-out cookies. Even nowadays the perception still exists that opting out can be achieved by turning to their web browsers or deleting cookies (Ur et al., 2012). However, if website visitors are aware that they have the ability to opt out, and gain more knowledge on privacy matters, they feel more positive about the application of OBA by businesses (McDonald & Cranor, 2008). If consumers do not understand their privacy rights, they are pre-biased on this matter; this issue will be discussed further in chapter 2. If organizations easily and properly inform website visitors about their privacy rights, they can possibly break through this pre-assumption (McDonald & Cranor, 2008, 2009).

In addition, the icon for opt-out options demonstrated in section 2.1 is subject to discussion as to whether its aim is achieved. According to critics, the meaning of this icon is not known to consumers; therefore, opt-out possibilities are perceived as difficult (‘Volg-me-niet register is wassen neus’, 2011).

Furthermore, according to marketingsherpa.com (2011), consumers should be better informed about opt-out opportunities in order to take away uncertainty about privacy matters. The privacy issues involved, as partly discussed above, will be further analyzed in chapter 2, where the effects of privacy matters on consumer behavior are analyzed with the assistance of models.

Besides, consumers state that they find privacy important but ease of use equally important. They are annoyed by being asked continuously to accept the use of cookies (Combée, 2013). Next to that, consumers complain about websites that place a cookie wall, which makes it possible to enter the website only if the use of cookies is agreed upon.

2. How do consumers react to current privacy concerns in OBA?

2.1 Privacy

Privacy is defined as the moral right to prevent intrusion into one’s personal information. Nowadays, privacy is of high importance to consumers, as advancing technology brings more possibilities for enhanced practices of identity theft, such as hacking, or plain invasion of consumers’ online privacy. By gathering consumers’ personal information with the use of the cookies explained earlier, the degree of customization can increase greatly (Chaffey & Ellis-Chadwick, 2012).

2.1.1 Root of privacy concerns online

In Europe, the legal framework concerning online behavioral tracking is regulated by the European Data Protection Directive. These regulations cover the gathering, processing, filing, and transmission of personal information. Next to that, the European e-Privacy Directive mainly regulates data privacy and the use of cookies. This regulation obliged third parties placing cookies to give website visitors the ability to opt out, giving website visitors the chance to reject cookies. Consequently, websites provided information on how to opt out of or reject cookies.

J. Zuiderveen (2011) researched to what extent practice complies with the data protection directive’s notion of ‘permission’: a willing, specific volition based on information. Research has shown that the processing of personal data cannot be based on article 7.b of the data protection directive: there should be a positive agreement, and there is no form of agreement if consumers are not aware of exchanging personal information in return for a service. Next to that, the collection of personal information can neither be justified by article 7.f, which states that the interests of third parties are important unless the privacy of the person concerned is invaded. Privacy interests also mean that the right to privacy is a significantly important right. When following the online behavior of website visitors, Dutch companies cannot rely on these two articles. However, in 2011, article 2.h came to attention, which states that under unambiguous permission the website is not allowed to assume too quickly that the website user gives permission to make use of personal information (European Commission, 2003; 2006). The latter was specifically the case with NPO, as described in the introduction: they explicitly did not ask for permission before collecting data.

Even though policies on cookies are changing continuously, it is important to describe how consumers are kept updated on their privacy rights and, consequently, what effect the extent of privacy has on consumer behavior, discussed with models in chapter 2.3.

Components of consumer updates on privacy (iab.net, 2015):

- Advertising Option Icon: this icon indicates that the form of advertising is supported by a self-regulatory program. If consumers click on this icon, they are provided with a disclosure statement concerning data gathering, what the information is used for, and a simple opt-out system.

- Consumer choice mechanism: at AboutAds.info, consumers are provided with information on how to opt out.

- Accountability and enforcement: since 2011, the DMA (Direct Marketing Association) and CBBB have employed technologies to provide website visitors with information on a company’s transparency and control provisions.

- Educational programs: businesses and consumers are educated on opt-out options and thus on self-regulatory systems.

For now, self-regulatory systems are opt-out based, with the future possibility of opt-in. The components mentioned above all provide consumers with more information on opt-out possibilities. Regarding privacy concerns, these self-regulatory systems show that consumers should be educated about opt-out options. Privacy regarding personal information gathered using cookies needs considerate attention. Previous research has shown that if consumers perceive their privacy as invaded, they consider the advertising invasive and obstructive. Therefore, it is important for companies to be transparent (Goldfarb & Tucker, 2011). Even though advertising becomes more personalized, website visitors do feel uncomfortable with companies tracking their online affairs (Beales, 2010; Goldfarb & Tucker, 2011).

2.2 Statistics

With the assistance of statistics, it will be analyzed in which areas consumers’ privacy problems occur. Once this is established, the problem areas can be theoretically analyzed with the application of multiple online behavior models in section 2.3, in order to arrive at a sound recommendation on how consumers actually behave and how marketers can respond.

Areas of consumer concern regarding online privacy in OBA (TRUSTe, 2008):

Advertising relevance:

- Of 87% of respondents, 25% of the ads were actually personalized.

- 64% would only choose to see ads from online stores they are familiar with and trust.

- 72% find OBA intrusive if it is not tailored to their specific needs.

Awareness of OBA:

- 40% are familiar with OBA, and a higher percentage knows of tracking: 71% know their browsing data is gathered by third parties.

Attitudes toward OBA:

- 57% say they are not comfortable with the collection of browsing history for customized advertising.

- 54% state they delete their cookies two to three times monthly.

- 55% are willing to get customized online ads in exchange for filling in an anonymous form; 19% are not. 37% would still fill out a form about products, services, and brands to buy even if it were not anonymous.

- 40% of participants in our online study agree or strongly agree they would watch what they do online more carefully if advertisers were collecting data (McDonald & Cranor, 2010).

Intent to take measures:

- 96% want to take measures to protect their privacy settings. However, respondents do not state that they do not want to be part of OBA at all: 56% will not even click to reduce unwanted ads, and 58% would not register in the do-not-follow-me registry.

From these statistics it can be concluded that the majority of respondents in this study have negative attitudes toward privacy matters in OBA. However, referring to the first heading, advertising relevance, and the last heading, intent to take measures, it can be stated that the majority of consumers do prefer some form of OBA. This implies cookies are needed. The problem area, as discussed earlier, therefore lies more in the fact that consumers do not know enough about opting out and are not confident about privacy statements. Knowledge and trust are thus the major factors to be analyzed in order to see how companies can overcome this issue.

These factors, which will be analyzed using models, are of great importance, because TRUSTe states that knowledge and trust are major factors influencing online behavior: there is an increased level of awareness among website visitors that they are being tracked in order to be provided with customized advertisements. Even though they are aware that they remain anonymous, because their name is not obtained (google.com, 2015; J. Zuiderveen, 2011), they do not feel comfortable being followed and targeted. Therefore, website visitors strongly prefer to limit and have more control over OBA practices (TRUSTe, 2008).

2.3 Models concerned with consumer behavior

2.3.1 Knowledge: Consumer Privacy States Framework

In order to assess to what extent consumers consider their privacy important and which factors influence this degree, the Consumer Privacy States Framework will be applied. This framework is derived from the Journal of Public Policy & Marketing and was established by G. Milne and A. Rohm. According to Milne and Rohm, this framework focuses on two dimensions, which are a reaction to consumers’ privacy concerns and their willingness to provide marketers with their personal information (Sheehan & Hoy, 2000; Milne & Rohm, 2000). These dimensions are awareness of data collection and knowledge of name-removal mechanisms.

According to this model, privacy is present only in cell 1. In this state, consumers are aware that their personal information is being gathered, and they know how to opt out. In this state, consumers are more satisfied and react more positively toward direct marketing relationships (Milne & Rohm, 2000). Research has shown that consumers are willing to exchange private information for benefits. Consumers will give more information to digital marketers if they perceive long-term benefits. Next to that, if consumers are able to control their privacy, they are more willing to give up their personal information (Ariely, 2000).

Table 1: Consumer Privacy States Framework (G. Milne and A. Rohm, 2000)

                                         | Consumer is knowledgeable about name-removal mechanisms | Consumer is not knowledgeable about name-removal mechanisms
Consumer is aware of data collection     | Cell 1: Privacy exists                                   | Cell 2: Privacy does not exist
Consumer is not aware of data collection | Cell 3: Privacy does not exist                           | Cell 4: Privacy does not exist

(Note: opt-out options in the 2008 study are used as a concept similar to name-removal mechanisms in the 2000 study.)
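The two-by-two framework above reduces to a simple rule: privacy exists only when the consumer is both aware of data collection and knowledgeable about removal mechanisms. As a worked encoding (function and return shape are illustrative, not part of the original study):

```python
# Worked encoding of the Consumer Privacy States Framework:
# privacy exists only in cell 1 (aware AND knowledgeable).
def privacy_state(aware: bool, knowledgeable: bool) -> tuple:
    """Return (cell number, privacy exists) for the 2x2 framework."""
    if aware and knowledgeable:
        return (1, True)
    if aware and not knowledgeable:
        return (2, False)
    if not aware and knowledgeable:
        return (3, False)
    return (4, False)

print(privacy_state(True, True))    # (1, True): privacy exists
print(privacy_state(False, False))  # (4, False)
```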

Research has shown that 34% of the population is positioned in cell 1, while 74% were aware of data collection and 45% knew how to use name-removal mechanisms. This research showed that organizations need to educate consumers more intensively about name-removal mechanisms (Culnan, 1995; Milne, 1997). Nowadays this issue is still present: according to TRUSTe (marketwire.com, 2008), 70% of consumers are aware of data collection and 40% know about opt-out options.

On the other hand, Wood & Quinn (2003) evaluated the effects of forewarnings on attitudes. If consumers are informed in advance about the function of cookies, biased thinking can be triggered that generates negative attitudes toward that function. However, if people are not given information about opt-out or opt-in possibilities, they are more likely to share their personal information. The cookie icon can be seen as such a forewarning: consumers feel warned about something, and their behavior turns to resistance. This resistance occurs because individuals feel their privacy is invaded, and because consumers are uncomfortable with others knowing their preferences. According to Jacks and Devine (2000), resistance thus takes the form of preserving personal freedom. When resistance occurs, resistance strategies may be applied.

According to Jacks and Cameron (2003), consumers can respond with the following resistance strategies. The individual may show resistance by not responding to the customized advertisement or by leaving the situation as it is; this is called selective exposure. Alternatively, the receiver may immediately start generating counterarguments (counter-arguing). With attitude bolstering, the individual strengthens his or her original view without directly producing counterarguments. Source derogation means insulting the source or rejecting its validity. With social validation, individuals resist the customized message by bringing to mind others who share the same viewpoint. Finally, with negative affect, individuals become angry because their personal information is used without the source indicating what it is used for.

Ultimately, resistance need not arise from a forewarning in the form of an icon. Instead of resistance strategies, individuals can adjust their cookie settings or register with an authorized do-not-track register so they are no longer followed. As noted under the statistics heading, 40% would indeed take measures if their personal information were collected (TRUSTe, 2008); resistance strategies therefore play a significant role.

2.3.2 Rank order table: Trust

In addition to this framework, Earp & Baumer (2003) introduced a rank order of the most influential factors affecting consumer behavior regarding privacy. The table below shows that consumers who have high confidence in the privacy practices of a website are more willing to provide personal information.

Table 2: Rank ordering of stated influential factors in confidence in the privacy practices of a website. Source: J. Earp and D. Baumer, 2003

Rank (most influential first)    Factor

1 Company name

2 Option ‘to opt out’

3 Presence of a privacy policy

4 Presence of a web seal

5 Design of the site

76% of respondents in this study indicated the ability to opt out as an important factor for confidence in a website's privacy practices. However, 87.5% of consumers expect detailed information about privacy policies when visiting websites, while only 54% of them actually read these policies. For 66% of respondents, a website providing a comprehensive privacy policy raised their confidence (Earp & Baumer, 2003). Moreover, consumers believe that a website with a comprehensive privacy policy will always live up to that policy (Antón et al. 2002). This implies that most internet users prefer the assurance of a privacy policy but are less concerned with what the policy actually says (Earp & Baumer, 2003). Trust and confidence therefore play a more important role in the provision of private information than the policy's actual content.

2.3.3 The consumer profile

The consumer profile is relevant here because it allows the effect of consumers' perceptions of OBA to be measured. Risk and privacy invasion are major areas of concern among consumers, so it can be analyzed to what extent these perceptions affect their online behavior. With such an analysis, companies can focus on which areas to improve in order to avoid privacy issues in the future.

The first factor in the consumer profile is security and privacy information. As described earlier, consumers want assurance that accurate privacy information is provided to them, but in practice this does not make them read it. Referring back to the rank-order table, 87.5% of website visitors expect proper privacy disclosure but only 54% actually read it (Earp & Baumer, 2003). It can therefore be stated that customers focus not on security explicitly but on the idea of security: the issue is the presence of privacy information rather than its content. Websites that simply demonstrate proper privacy regulations thus have a greater chance of giving customers a more positive perception of their online privacy practices. Likewise, according to C. Hoofnagle (2010), internet users rarely read privacy statements. On the other hand, if consumers are better informed about opt-out options, that knowledge may create resistance, as described earlier (Wood & Quinn, 2003).

Second, risk plays an important role in consumers' online behavior. Online sales effectiveness can be raised substantially if the perception of risk is reduced. Even if customers read the stipulations, it is questionable whether they realize the consequences of the gathering and analysis of their personal information by cookies (Barocas & Nissenbaum, 2009). And even though anonymized information can be linked to an individual, that individual may think there is only a small chance of this happening (Zuiderveen Borgesius, 2011). Privacy regulations, then, mainly serve to provide a sense of security: what matters is not risk itself but the perception of risk. Because website visitors think there is only a small chance of third parties gaining access to information they perceive as personal, their evaluation of risk is poor.

Third, trust is highly correlated with risk: a decrease in perceived risk increases trust, which creates positive beliefs about the business's online reputation. Fourth, perceived usefulness incorporates the time and effort required for individuals to educate themselves on how to opt out (Perea et al., 2004). Website visitors have only limited knowledge of information and communication technology; they need to understand what is written in privacy statements and what they are actually agreeing to (Perea et al., 2004). As described earlier, educating website visitors through forewarnings can create resistance, which negatively affects their purchasing behavior (Wood & Quinn, 2003).

Finally, ease of use also has a significant impact on consumers' online behavior: using a new technology needs to be free of effort. When internet users visit a website, they experience fully analyzing the privacy statement as very time-consuming; as a result they do not read it, or state that they do not care about their privacy (in the statistics, about 3%). From a legal perspective, however, it cannot be assumed that website visitors who do not read privacy statements willingly accept the browser's cookie settings. Under Article 2(h) of the Data Protection Directive, which requires that consent be freely given, specific, and informed, this causes considerable problems (Article 29 Working Party, 2008).

3 What strategies should marketers apply to respond to current privacy concerns regarding cookies in OBA?

3.1 Coercive vs. non-coercive strategies

Organizations that deal with online privacy concerns among consumers should realize whether they are adopting a coercive or a non-coercive influence strategy. In a coercive influence strategy, websites offer incentives, or impose threats, to increase consumers' self-disclosure, i.e., to make them provide more personal information (Acquisti & Varian, 2005). Incentives to provide personal information can be economic, such as promotions, discounts, and coupons, or non-economic, such as customization, personalization, and access to exclusive content. Threats indicate a penalty or exclusion from benefits for noncompliance: if the request is not honored, the website visitor cannot use the website's content. For example, NPO, like many websites, demands that visitors provide personal information in order to register and access specific information on the site. This method of data gathering punishes people who refuse to provide their personal information by withholding the content they requested (Sheehan, 2005).

With a non-coercive influence strategy, NPO would take the same actions but without rewards or penalties. For example, a website could ask visitors, via web forms, to provide their personal information without offering non-economic incentives such as customized advertising. Instead of incentives, NPO could offer recommendations, persuading consumers that providing personal information improves their experience on the website (customization), so that the website still reaches its original aim. Websites can also use information provision, supplying visitors with privacy policies that state how and why information will be collected (Milne, Rohm and Bahl, 2004), and seals of trust that guarantee privacy protection (Gabbish, 2011).

The main focus for websites such as NPO is identifying information-gathering strategies that reduce privacy concerns and increase consumers' trust. According to Payan & McFarland (2005), non-coercive influence strategies have shown positive relational effects, while coercive strategies have shown the opposite. Hausman & Johnston (2009) likewise find that non-coercive strategies have a positive influence on trust while coercive strategies do not. The privacy literature also shows that privacy policies and seals decrease privacy concerns and raise trust, whereas rewards and threats decrease trust and increase privacy concerns (Gabbish, 2011).

3.2 Application of the structural model of privacy policy

To reduce the chance that consumers adopt resistance strategies, companies can use the structural model of privacy policy, privacy concern, trust, and willingness to provide personal information. This model shows that, properly applied, companies can increase consumer confidence and willingness to provide personal information (Wu et al., 2012). The model consists of the parameters notice, choice, access, security, and enforcement.

Figure: Structural model of privacy policy, privacy concern, trust, and willingness to provide personal information. Source: Wu et al., 2012

Notice is the most important parameter: consumers should be informed about the collection of personal data before it is gathered (Wu et al., 2012). In the NPO case, personal data was collected from consumers without their awareness (Pijnenburg, 2014). Choice gives consumers the ability to control the personal data obtained from them. Access gives website users insight into their data, so they can check whether the data collected about them is correct and complete. Security is concerned with keeping information secure and correct.

For data integrity to hold, website owners and third parties should take measures that let consumers inspect their data, erase information, and change it to anonymous characters. Enforcement is one of the most important parameters of privacy protection, since privacy can only be assured if there are measures that enforce it (Wu et al., 2012).
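As an illustration, the five parameters can be treated as a compliance checklist (the parameter names come from the model; the checklist mechanics and the example values for NPO are my own assumptions):

```python
# Parameters of the Wu et al. (2012) structural model of privacy policy.
MODEL_PARAMETERS = ("notice", "choice", "access", "security", "enforcement")

def policy_gaps(practices: dict) -> list:
    """Return the model parameters that a site's privacy practices do not
    satisfy; an empty list means all five parameters are covered."""
    return [p for p in MODEL_PARAMETERS if not practices.get(p, False)]

# Illustrative reading of the NPO case: data was collected before visitors
# were informed (no notice) and without control over it (no choice).
npo_2014 = {"notice": False, "choice": False, "access": True,
            "security": True, "enforcement": True}
```

Running `policy_gaps(npo_2014)` would flag notice and choice as the failing parameters, mirroring the ACM's finding discussed in the conclusion.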

Wu et al. (2012) concluded that security ranks highest among consumers' concerns. Website owners aiming to increase visitors' trust, so that visitors provide more personal information, should therefore focus on providing security and secure data handling along with privacy statements.

The study by Wu et al. (2012) examined the relationship between the content of a privacy policy and trust and online privacy concern. Moderating variables can affect these relationships; since they describe consumer behavior, they should not be left out of the original model. The moderating variables researched are cross-cultural effects, age, and gender. Culture has an important moderating effect on how website visitors respond to the content of a privacy policy: in some cultures, trust in websites rises when consumers are given access to their data and when their personal data is secure. Cultural differences significantly influence the behavior of website users and their choices of online activities. Gender also influences privacy concerns and willingness to provide personal information: women show more openness, and therefore more self-disclosure, but have a higher need for privacy (Wu et al., 2012). Age can likewise have a significant impact on the relationship between privacy-policy content and privacy concern and trust: research shows that the older people get, the more worried they are about their online privacy.

3.3 Web bugs

According to Goldfarb & Tucker (2010), web bugs are 1x1-pixel pieces of code that give online advertisers the ability to follow consumers online. Web bugs differ from cookies in that they are not visible to the website user and are not saved on the visitor's computer; a consumer is therefore not aware of being tracked unless they analyze the HTML code of the webpage. Web bugs track the consumer from website to website and can even track how far a visitor scrolls down a page, which improves the collection of visitor preferences (Goldfarb & Tucker, 2010). According to Murray & Cowart (2001), web bugs are used by approximately 95% of top brands. Since consumers are not aware of the data collection, privacy concerns arise less than with cookies. However, if the law required websites to inform consumers about web bugs, privacy concerns could rise again (Goldfarb & Tucker, 2010). Web bugs can therefore be seen as an alternative to cookies, but if the Privacy Directive adjusts the law, web bugs will become similar to cookies, with the same privacy concerns as a consequence.
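For illustration, the mechanism behind a web bug can be sketched as a handler that logs the request and returns an invisible image (a hypothetical sketch; the names and the logging are mine and do not reflect any real tracker):

```python
# Smallest commonly used 1x1 transparent GIF; the page embeds it as an
# <img> tag, and fetching it is what reveals the visit to the tracker.
TRANSPARENT_GIF = (
    b"GIF89a\x01\x00\x01\x00\x80\x00\x00\xff\xff\xff\x00\x00\x00"
    b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00\x00"
    b"\x02\x02D\x01\x00;"
)

visits_log = []  # stand-in for the tracker's datastore

def serve_pixel(page_url: str, user_agent: str) -> bytes:
    """Record which page embedded the pixel and who loaded it, then return
    the invisible image. The visitor sees nothing and stores nothing."""
    visits_log.append({"page": page_url, "ua": user_agent})
    return TRANSPARENT_GIF
```

Unlike a cookie, nothing here is written to the visitor's machine: the tracking happens entirely on the server side of the image request, which is why only inspecting the page's HTML reveals it.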

4 Conclusion/ Recommendation

This paper focuses on NPO because in July 2014 it received a penalty from the Dutch Authority for Consumers and Markets (ACM) (acm.nl, 2014). NPO placed cookies that tracked website visitors without giving them accurate information. ACM ruled that NPO complied neither with Article 11.7a of the Dutch Telecommunications Act nor with the Dutch Data Protection Act. NPO may only track consumers if the website visitor gives consent willingly and unambiguously, based on the information disclosed (Fouad, 2014). Referring back to chapter 2, NPO did not comply with the rules following from Article 2(h): for consent to be "unambiguous", a website may not too quickly assume that the user permits the use of personal information (European Commission, 2003; 2006).

From the models of factors influencing consumer behavior in section 2.3, the Consumer Privacy States Framework holds that privacy exists only when the consumer is aware of data collection and knowledgeable about opt-out practices. NPO therefore went wrong by not giving consumers the sense that privacy exists.

The rank-order table in section 2.3.2 showed that consumers do want assurance that a website has a comprehensive privacy policy; however, a policy's presence does not make consumers actually read it (Earp & Baumer, 2003; Antón et al. 2002). Consumers who do not feel knowledgeable about their rights then show resistance. This is underlined by figures showing that NPO's cookie wall is perceived as pressure: it effectively states "if you don't accept my cookies you can't visit my website", with the consequence of losing visitors. Other businesses use a softer approach, at the risk of collecting less personal information. The cookie wall has resulted in a short-term loss in turnover of 0-5%; in the long term, NPO expects a rising trend in visitors to its website (Douma & Verspreek, 2014).

Referring back to the consumer profile model in 2.3.3, the influencing factors show that consumers who feel more secure about how to control their privacy online have a more positive perception of OBA. On the other hand, more control can mean more resistance (Wood & Quinn 2003). Moreover, it is not actual risk that consumers experience, but the perception of risk.

NPO should therefore focus on keeping its privacy statements accurate and clear and on creating confidence among website visitors. In the end, consumers are not specifically worried about their privacy or the detailed content of privacy statements, but about their degree of control, as all three models confirm.

To keep consumers from turning to resistance strategies, influence strategies can be applied. Some of these could be considered manipulative, while others can increase consumers' perception of security (Kirmani & Campbell, 2004). The effect of influence strategies is not the same for all website visitors; differences appear in privacy concerns, consumer trust, and willingness to provide personal information (Milne et al., 2009). Research has shown that non-coercive strategies, such as placing privacy policies on a website, decrease concerns about the disclosure of personal information, whereas coercive strategies offering a reward increase privacy concern and decrease the willingness to self-disclose (Andrade et al., 2002). NPO is therefore recommended to adopt a non-coercive strategy to increase trust and willingness to provide personal information.

Referring back to the structural model, Wu et al. (2012) concluded that security ranks highest among consumers' concerns. A website owner aiming to increase visitors' trust, so that they provide more personal information, should focus on providing security and secure data handling along with privacy statements and website design. This again shows that NPO should pay more attention to the parameter trust in order to increase willingness to provide personal information, which correlates strongly with the non-coercive strategy. A coercive strategy would put too much focus on making customers aware of the customization provided, which would increase resistance and reduce trust. The non-coercive strategy and (the importance of trust in) the structural model both focus on providing security to increase trust and, in turn, willingness to provide personal information.

An alternative to cookies could be the application of web bugs. However, web bugs are only a short-term solution until privacy regulations change; once they do, web bugs will become similar to cookies. It is therefore recommended that NPO not turn to this strategy.

MPPT CONTROLLER UNDER PARTIAL SHADING CONDITIONS

ABSTRACT: Maximum power point tracking (MPPT) is the most important part of an energy conversion system using photovoltaic arrays. MPPT techniques are used in photovoltaic (PV) systems to maximize the PV array output power by continuously tracking the maximum power point (MPP), which depends on panel temperature and on irradiance conditions. The power-voltage characteristic of PV arrays operating under partial shading conditions exhibits multiple local maximum power points (LMPPs). In this paper, a review of the characteristic curves of an MPPT controller under partial shading conditions is presented in order to analyze the controller's performance under such conditions.

Keywords: Maximum Power Point Tracking (MPPT), Global Maximum Power Point (GMPP), Local Maximum Power Point (LMPP), Multiple Maxima, Partial Shading, Photovoltaic (PV).

I. INTRODUCTION

A photovoltaic (PV) cell is an electrical device that converts the energy of light directly into electricity through the PV effect. PV cells have a complex relationship between solar irradiation, temperature, and total resistance, and exhibit a nonlinear output characteristic known as the P-V curve. Therefore, maximum power point tracking (MPPT) techniques must be developed in PV systems in order to maximize their output power. Many MPPT methods have been reported in the literature, such as hill climbing, perturb and observe (P&O), incremental conductance (INC), and ripple correlation control.

However, when there are multiple local power maxima, caused by partial shading or by installation on a curved surface, conventional MPPT techniques do not perform well. Multiple maxima may occur due to bypass diodes, which are used to prevent hot spots from forming when some cells in a module, or some modules in a string, receive less irradiance than others. Without remediation by power electronics, the energy lost to partial shading can be significant. It is therefore imperative to use MPPT techniques that reliably track the unique global power maximum present in shaded arrays.

Some researchers have proposed global maximum power point tracking (GMPPT) algorithms to address the partial shading condition. It is observed that the peaks follow a specific trend: the power at each peak point continues to increase until it reaches the GMPP, and afterward it continuously decreases. The proposed algorithm incorporates an online current measurement and periodic interruptions to address challenges associated with rapidly changing insolation and partial shading. This method can effectively mitigate the effect of partial shading. However, results simulated from measured environmental parameters and the actual case will differ drastically, because the actual characteristic of the solar panels depends on many factors (e.g., light intensity, temperature, ageing, dust, and partial shading). In addition, the method increases the PV system cost in practical commercial applications.

Fig. 1: PV array under different partial shading conditions.

II. PARTIAL SHADING CONDITIONS

Fig. 1 shows a PV array with four PV modules connected in series under uniform insolation conditions. Fig. 2(a) illustrates typical I-V and P-V curves for this array under a uniform solar irradiance of 1000 W/m2 on all the PV modules. A traditional MPPT algorithm can reach the single peak and keep oscillating around the MPP. The P&O method, for example, perturbs the solar array voltage in one direction in each sampling period and then tests the resulting power change. Assume the PV array initially operates at point A, as shown in Fig. 2(a). The operating voltage is perturbed in a given direction (from A to B), and an increase in output power is observed (PB > PA). This means that point B is closer to the MPP than point A, and the operating voltage must be further perturbed in the same direction (from B to C). If, on the other hand, the output power of the PV array decreases (from D to E), the operating point has moved away from the MPP, and the direction of the voltage perturbation must be reversed (from D to C). Through constant perturbation, the operating voltage eventually reaches, and keeps oscillating around, the MPP.
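The P&O loop just described can be sketched in a few lines of Python (a minimal sketch: the step size, iteration count, and the toy single-peak P-V curve are assumptions for the demo, not values from the paper):

```python
def perturb_and_observe(measure_power, v, step=0.5, iterations=200):
    """Classic P&O: perturb the operating voltage in one direction each
    sampling period; if the power rose, keep the direction, otherwise
    reverse it. `measure_power(v)` stands in for a real measurement of
    the PV array power at operating voltage v."""
    direction = 1.0
    p_prev = measure_power(v)
    for _ in range(iterations):
        v += direction * step
        p = measure_power(v)
        if p < p_prev:            # power fell: we moved away from the MPP
            direction = -direction
        p_prev = p
    return v                      # ends up oscillating around an MPP

# Toy single-peak P-V curve with its maximum power (100 W) at 30 V.
def pv_curve(v):
    return max(0.0, 100.0 - (v - 30.0) ** 2)
```

Starting from 20 V, the tracker climbs the toy curve and then oscillates within one 0.5 V step of the 30 V maximum. On a multi-peak curve such as those in Fig. 2(b)-(d), this same loop simply settles on whichever peak it reaches first.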

In practice, however, the series strings of PV modules are often not under the same solar irradiance. Partial shading is a common situation, caused by the shadows of buildings, trees, clouds, dirt, etc.; Fig. 1 shows several different partial shading situations. Under partial shading, if one module in a PV string is less illuminated, the shaded module dissipates some of the power generated by the rest of the modules: the current available in a series-connected PV array is limited by the current of the shaded module. This can be avoided by using bypass diodes placed in parallel with the PV modules.

Bypass diodes allow the array current to flow in the correct direction even if one of the strings is completely shadowed, and they are widely implemented in commercial solar panels. Because of bypass diodes, however, multiple maxima appear under the partial shading condition. The P-V curve of the PV array in Fig. 1 then possesses multiple maxima, as shown in Fig. 2(b): the unshaded modules in the sample PV array are exposed to 1000 W/m2 of solar insolation and the shaded module to 400 W/m2. Two peaks are observed in the P-V curve because of the natural behavior of the bypass diodes and the PV array connections inside the module. Point A is the GMPP, while point B is a local maximum power point (LMPP). When the area covered by the shadow changes, the P-V curve and the location of the GMPP also change, as shown in Fig. 2(c) and (d). Under these conditions, traditional algorithms can only track one of the two MPPs and cannot distinguish the GMPP from an LMPP.

Continuing with the P&O method as an example, both points satisfy the conditions for being "the MPP." If the operating point found by the algorithm is an LMPP, the output power is significantly lower. Some researchers have proposed a global scan method that sweeps the PV output curves, after which a complex algorithm calculates the GMPP of those curves. This method is able to obtain the GMPP, but it cannot determine whether the PV array is actually operating under shading conditions, so it blindly and constantly scans for the MPP, wasting output energy. For these reasons, a new improved MPPT method for PV systems under the partial shading condition is proposed in this paper.

Fig. 2: P-V and I-V characteristic curves of a PV array under different partial shading conditions.
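The global scan approach can be sketched as a plain sweep over the operating-voltage range (an illustrative sketch only; the two-peak curve and all its numbers are invented, and this is not the improved method the paper refers to):

```python
def global_scan_gmpp(measure_power, v_min=0.0, v_max=80.0, step=1.0):
    """Sweep the whole operating-voltage range and return the voltage and
    power of the highest point found: the global MPP, even when the P-V
    curve has several peaks. The sweep itself costs output energy, which
    is the drawback of blind scanning."""
    best_v, best_p = v_min, float("-inf")
    v = v_min
    while v <= v_max:
        p = measure_power(v)
        if p > best_p:
            best_v, best_p = v, p
        v += step
    return best_v, best_p

# Two-peak P-V curve mimicking partial shading: a local peak (60 W at
# 15 V) and the global peak (90 W at 45 V); invented numbers.
def shaded_curve(v):
    return max(60.0 - (v - 15.0) ** 2, 90.0 - (v - 45.0) ** 2, 0.0)
```

On this curve the sweep returns the 45 V global peak rather than the 15 V local one that a hill climber starting near 15 V would lock onto; the price is measuring the entire range on every scan.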

III. ANALYSIS OF CHARACTERISTIC CURVES UNDER PARTIAL SHADING CONDITIONS

To avoid a blind global scan, methods to determine the presence of partial shading are essential. When a series PV array is under an identical solar irradiance condition (Fig. 1), every PV module works as a source, and all modules are identical in their voltage, current, and output power at any time. This state changes when there is shadow. Take Fig. 1 as an example in the following analysis: the modules in the series array are exposed to two different solar irradiation levels, 1000 and 400 W/m2, respectively. The voltages of the modules exposed to different irradiation levels are completely different.

The two peaks divide the P-V curve into two separate parts, as shown in Fig. 2(c). Part A is the curve containing the left peak (curve A-C), and part B is the curve containing the right peak (curve C-B-E). In part A, the current of the PV array, IPV, is greater than the maximum current that the shaded modules (M3 and M4) can produce; the current therefore flows through the bypass diode of each shaded module. At this stage, only modules M1 and M2 supply power, while M3 and M4 are bypassed by their diodes. The characteristic curves of PV module voltage versus array output power are shown in Fig. 3(a) and (b). In part A, the voltages of M3 and M4 are approximately -0.7 V (the diode's forward voltage drop), as shown in Fig. 3(b).

Fig. 3: Module output voltage versus array output power. (a) Unshaded module. (b) Shaded module.

Therefore, module voltages equal to the negative of the diode's forward voltage are one effective indicator of a partial shading condition. In part B, all PV modules supply power, but the unshaded and shaded modules are in different working conditions: because they receive different amounts of solar radiation, their voltages differ. In part B (curve C-B-E), the voltage of the unshaded modules is greater than that of the shaded modules, as shown in Fig. 4. This is evidently another indicator that can efficiently identify partial shading. From the above analysis, the following observations can be made.

1) I-V curves under partial shading conditions have multiple steps, while the P-V curves are characterized by multiple peaks.

2) The number of peaks equals the number of different insolation levels irradiating the PV array, and any peak point may be the GMPP.

3) The voltages of PV modules that receive different solar radiation are different.

4) The voltage of a PV module that is bypassed by a diode is equal to the negative of the diode's forward voltage drop.

Fig. 4: Array output power versus unshaded-module output voltage and shaded-module output voltage.
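Observations 3) and 4) translate directly into a simple software check (a sketch under assumed thresholds: the 0.7 V diode drop follows the text, while the tolerance and the voltage-spread threshold are my own choices):

```python
def detect_partial_shading(module_voltages, diode_drop=0.7, tol=0.1):
    """Flag partial shading from per-module voltage measurements.
    A module clamped near -diode_drop volts is being bypassed
    (observation 4); a large spread between module voltages also
    indicates unequal irradiance (observation 3)."""
    bypassed = [v for v in module_voltages if abs(v + diode_drop) < tol]
    spread = max(module_voltages) - min(module_voltages)
    return len(bypassed) > 0 or spread > 2.0  # 2 V threshold: an assumption

# Example reading in part A of Fig. 2(c): M1, M2 unshaded at ~17 V,
# M3, M4 bypassed at -0.7 V (illustrative values, cf. Fig. 3).
```

Such a check lets a controller fall back to cheap conventional tracking under uniform irradiance and invoke a global search only when shading is actually detected, avoiding the blind-scan energy waste noted above.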

CONCLUSION

In this paper, concepts and developments in the field of MPPT have been reviewed, and various partial shading conditions have been briefly examined. The comparison between these partial shading conditions has been summarized with the help of characteristic curves. It is concluded that conventional MPPT techniques have disadvantages such as energy loss and the inability to detect partial shading conditions. The majority of these problems can be eliminated by an improved MPPT controller method. The application of improved MPPT controllers is therefore no longer limited to the generation level; research suggests they have the ability to replace conventional MPPT methods in the near future.


Learning theories – behavioural, social & cultural, constructivism, cognitive

Learning is defined as a relatively permanent change in an individual's mind or behavior, whether voluntary or involuntary. It occurs through experience that brings about a relatively permanent change in an individual's knowledge or behavior. Behaviorists define learning as a permanent change in behavior, one that takes place intentionally or unintentionally. Cognitive psychologists define learning as a change in knowledge, an internal mental activity that cannot be observed directly. Learning involves acquiring and modifying knowledge, skills, strategies, beliefs, attitudes and behaviors in order to understand old or new information. Individuals learn skills from experiences that often take the form of social interactions or linguistic or motor activity. Educational professionals define learning as an 'enduring change in behavior, or in the capacity to behave in a given fashion, which results from practice or other forms of experience'.

How does learning happen? Learning happens every day, to every individual; it is not confined to classrooms, colleges or universities but can occur anywhere. It can take place through interacting with others, observing, or simply listening to a conversation. Learning happens through experiences, good and bad, ones that provoke an emotional response or simply offer a moment of revelation. Behaviorists and cognitive theorists both believe that learning is affected by the environment in which an individual resides, but behaviorists focus more on the role of the environment: how stimuli are presented and arranged and how responses are reinforced. Cognitive theorists agree with behaviorists on this point but tend to focus more on the learner's abilities, beliefs, values and attitudes. They believe that learning occurs by consolidation, the forming and strengthening of neural connections, which involves the factors of organization, rehearsal, elaboration and emotion. Learning occurs in many ways; psychologists believe that learning, whether intentional or unintentional, is a key part of living, which is why they developed the learning theories.

Learning theories are theoretical frameworks describing how information is absorbed, processed and retained during learning. Learning is an important activity in the lives of individuals; it is at the core of our educational process, even though learning begins outside the classroom. For many years psychologists have sought to understand what learning is, its nature, how it transpires, and how individuals influence learning in others through teaching and similar endeavors. Learning theories tend to be based on scientific evidence and are more valid than personal opinions or experiences. There are four basic types of theories used in educational psychology: behavioral, cognitive, social & cultural, and constructivist.

Behavioral Theory

The behavioral approach generally assumes that the outcome of learning is a change in behavior, and it emphasizes the effects of external events on the individual. Behaviorists hold that individuals have no free will and that the environment an individual is placed in determines their behavior. They believe that individuals are born with a clean slate and that behaviors are learned from the environment. The learning theories of the behaviorists Pavlov, Guthrie and Thorndike are of historical importance. Although they differ, each theory has its own process for forming associations between stimuli and responses. Thorndike believed that a response to a stimulus is strengthened when it is followed by a satisfying consequence. Guthrie reasoned that the relation between stimulus and response is established through pairing. Pavlov, who developed classical conditioning, demonstrated how a stimulus can be conditioned to obtain a certain response by being paired with another stimulus. The behavioral theory is expressed in conditioning theories that explain learning in terms of environmental events, but classical conditioning is not the only conditioning theory.

B. F. Skinner developed operant conditioning, a form of conditioning based on the assumption that features of the environment serve as cues for responding. He believed that we learn to behave in certain ways as we operate on the environment. In operant conditioning, reinforcement strengthens responses and increases the likelihood of their occurring when the stimuli are present. Operant conditioning is a three-term contingency involving the antecedent (stimulus), the behavior (response) and the consequence. Consequences determine how individuals respond to environmental cues; a consequence can be good or bad for the individual, reinforcing a behavior so that it increases or punishing it so that it decreases. Other operant concepts include generalization, discrimination, primary and secondary reinforcement, reinforcement schedules and the Premack principle.

Shaping is another form of operant conditioning; it is a process used to alter behavior. Shaping works by successive approximations, reinforcing progress toward the target behavior, so that complex behaviors are formed by linking simple behaviors in three-term contingencies. Operant conditioning also involves self-regulation, the process by which individuals gain stimulus and reinforcement control over their own behavior.

Cognitive Theory

The cognitive theory focuses on the inner activities of the mind. It states that knowledge is learned, and that changes in knowledge make changes in behavior possible. Both behavioral and cognitive theorists believe that reinforcement is important in learning, but for different reasons: behaviorists suggest that reinforcement strengthens responses, while cognitivists suggest that reinforcement is a source of feedback about what is likely to happen if behaviors are repeated or changed. The cognitive approach suggests that an important element in the learning process is the knowledge an individual brings to a situation. Cognitive theorists believe that the information we already know determines what we will perceive, learn, remember and forget.

Three main theorists shaped this cognitive tradition: the Gestalt psychologists Wertheimer, Köhler and Koffka. The Gestalt approach proposes that learning consists of grasping a structural whole, not just making a mechanistic response to a stimulus; its central concept is that when we process sensory stimuli we are aware of the configuration, the overall pattern, the whole. Köhler's work showed that learning can occur by 'sudden comprehension' (insight) rather than by gradual understanding; such learning can happen without any reinforcement and with no need for review, training or investigation. Koffka supported the view that animals can be participants in learning because they are similar to humans in many ways. He believed that there was no such thing as meaningless learning, and that the interdependence of facts was more important than knowing many individual facts.

Social & Cultural theory

The social and cultural theory concerns how individual functioning is related to cultural, institutional and historical context. Vygotsky, a psychologist in Russia, identified the social & cultural theory, also known as sociocultural theory. Sociocultural theory is known as a combining theory in psychology because it joins the important contributions society makes to individual development with the cognitive views of Piaget. The theory suggests that learning occurs through interactions between people. Lev Vygotsky believed that parents, caregivers, peers and culture play an important role in the development of higher-order functions. According to Vygotsky, 'Every function in the child's cultural development appears twice: first on the social level, and later on the individual level.' The sociocultural theory tends to focus not only on how adults or peers influence learning but also on how an individual's culture can affect how learning takes place.

According to Vygotsky, children are born with basic constraints on their minds. He believed that each culture provides 'tools of intellectual adaptation' for the individual. These adaptations allow children to use their basic mental abilities in ways suited to their culture; for example, a culture may employ tools that emphasize memorization strategies. Although Vygotsky's sociocultural theory and Piaget's cognitive theory developed in parallel, they differ in certain ways. First, Piaget's theory is largely based on how children's interactions and explorations influence development, whereas Vygotsky placed greater emphasis on the social factors that influence development. Another difference is that Vygotsky suggested cognitive development can differ between cultures, while Piaget's theory holds that development is universal. One important concept in the sociocultural theory is the zone of proximal development: the gap between a learner's level of independent problem solving and their level of potential development through problem solving under the guidance of an adult or in collaboration with peers. It includes the skills that a person cannot yet understand or perform on their own but is capable of learning with guidance.

Constructivism Theory

The constructivist learning theory describes how learners construct knowledge from previous experiences. Constructivism is often associated with a pedagogic approach that promotes learning by doing. Constructing is central to learning because constructivism focuses on the individual's thinking about learning. The constructivist theory argues that individuals generate knowledge from the interaction between their experiences and ideas, and it examines interactions between individuals from infancy to adulthood to try to comprehend how learning arises from experiences and behavior patterns. The constructivist theory is attributed to Jean Piaget, who articulated the mechanisms by which knowledge is internalized by learners. Piaget stated that through the process of adaptation (accommodation and assimilation), individuals construct new knowledge from past experiences.

According to Piaget's theory of constructivism, accommodation is the process of reframing one's mental view of the world to fit in new experiences. Accommodation can be understood as failure leading to learning: if we hold the idea that the world works only one way and that way fails us, we learn from our failure, or from the failures of others. The constructivist theory describes how learning happens whether individuals learn by using their experiences to understand information or by following instructions to construct something; in both cases, learners construct knowledge from experience. The constructivist theory tends to be associated with active learning, because individuals learn from experiences, from something already done. Several cognitive psychologists have argued, however, that constructivist theories are misleading or contradict known findings.

As an educator I can facilitate learning by encouraging my students and helping them develop to their fullest potential. I am compelled to observe and assess learning styles so that I can meet every student's needs within the classroom, and I want to allow students to learn gradually. I want my students to thrive academically and socially, in and out of the classroom. The four learning theories discussed in this paper all contribute to my understanding of learning; despite their differences, each gave me new insight into how learning occurs in and out of a class, college or university. From the behaviorist perspective, learning is a change in behavior, with emphasis on external events acting on the individual; Pavlov's classical-conditioning experiment, in which he taught dogs to salivate when they heard the tone of a tuning fork, is an example. If we use both conditioning theories in the classroom, we can train students to behave and respond in the ways we would want them to.

The theory that can be applied to music is the behaviorist theory, because music is the incorporation of knowledge and feeling. Music sets the atmosphere of an environment: if a relaxing song is played at home, it puts the individual in a relaxed mood. In the behaviorist theory the environment influences the response of the individual, so a relaxing song evokes a relaxed response, just as Pavlov's dogs were conditioned to salivate on hearing the tuning fork. In music, classical conditioning is how students can be conditioned to like or enjoy a piece of music. For example, if a classical piece is played that the students do not know or like, the teacher can play it repeatedly so that they come to understand it, and eventually the students will enjoy the music because of the repetition. Their response to the music might be moving their bodies, tapping their feet or nodding their heads.

Gas Chromatography

Science has developed a great deal since the ancient discovery of fire. Nowadays we use various types of high-tech instruments for the precision of our scientific experiments. Gas chromatography (GC) is one of the most common procedures used in analytical chemistry. It helps us analyze compounds through vaporization, and one of the best features of the process is that the compounds are not decomposed. Some of the more expensive GC instruments also identify each compound for us, which can later be used to prepare a pure compound from a mixture. Compounds present in a volatile liquid or gaseous solute are separated after traveling through a coated column, based on each substance's size and intermolecular interactions. A compound that tends to bind to the column through intermolecular interactions takes longer to emerge than a compound that does not tend to stick to the column. The degree of binding between a substance and the column is determined by the number and strength of the intermolecular interactions between the two species; substances that pass quickly through the column exhibit fewer intermolecular interactions with it.

The development of gas chromatography began with the Russian scientist Mikhail Semenovich Tswett in 1903.1 Years later, the German scientist Fritz Prior improved the technique and carried out solid-state gas chromatography.2 In 1941 Archer John Porter Martin, together with Richard Synge, developed partition chromatography, work in this field for which he later received the Nobel Prize,3 and he went on to produce liquid-gas chromatography. He thus laid the foundation of GC.

A gas chromatograph is the instrument used in this process; it separates the chemicals in a complex sample. The sample is carried through a narrow tube in a stream of gas, and its components pass through at different rates depending on their physical and chemical properties and their interaction with the specific column filling. As the compounds reach the end of the column, they are detected and identified electronically. The main task of the column is to make the compounds pass through in different times depending on their properties, which is what allows them to be distinguished. At the beginning of the process, a microsyringe is used to inject a known volume of the sample to be analyzed.
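The electronic identification step described above is, at its simplest, a comparison of each detected peak's retention time against the retention times of known standards run on the same column. The sketch below illustrates this matching logic; the compound names, retention times and tolerance are hypothetical values chosen for illustration, not measured data.

```python
# Match observed GC peaks to known standards by retention time.
# Retention times (in minutes) and the tolerance are illustrative assumptions.
standards = {"ethanol": 2.1, "benzene": 4.8, "toluene": 6.3, "xylene": 8.9}

def identify(peaks, tolerance=0.1):
    """Label each observed peak with the closest standard within tolerance."""
    result = []
    for rt in peaks:
        name, ref = min(standards.items(), key=lambda kv: abs(kv[1] - rt))
        result.append((rt, name if abs(ref - rt) <= tolerance else "unknown"))
    return result

print(identify([2.08, 6.35, 7.5]))
```

A peak whose retention time falls outside the tolerance of every standard is reported as unknown, which is one reason real analyses are repeated several times rather than identified in a single pass.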

Gas chromatography helps us analyze the contents of a substance very accurately. For example, it is used to determine the quality of products in the chemical industry and to detect toxic substances in a mixture. The more expensive GC instruments can measure down to picomoles of a substance in a 1 ml liquid sample.4 Gas chromatography is used in forensic science for solid drug-dose identification, arson investigation, paint-chip analysis, toxicology cases, and the quantification of various biological specimens and crime-scene evidence.5 As science has expanded, popular culture has adopted various sides of it: the use of gas chromatography is widely seen on TV shows and in movies and books, but its capabilities are often exaggerated. For example, I used to watch CSI a lot when I was younger, and it portrays GC as a magical instrument that can identify any unknown sample within minutes, even giving all the details of its origin and the day it was used. In the real world it takes far longer, and for accuracy the tests must be performed several times. After doing the experiment in class, I understood that it is not at all the way it is described on screen.

Citations

1. Berichte der Deutschen botanischen Gesellschaft, vol. 24, pp. 316–326.

2. Martin, Archer, and Richard Synge. “Theory and Instrumentation of GC: Introduction to Gas Chromatography” (n.d.): n. pag. CHROMacademy. Crawford Scientific. Web.

3. Lovelock, James. “Archer John Porter Martin CBE. 1 March 1910 – 28 July 2002.” Royal Society Publishing, 1 Dec. 2004. Web. 18 Apr. 2015.

4. “Biomedical Chromatography”. 2012 Journal Citation Reports. Web of Science (Science ed.). Thomson Reuters. 2013.

5. “How Is Gas Chromatography Used in Forensics?” Chromatography Today, 25 Apr. 2014. Web. 19 Apr. 2015.

Animal testing

Every year thousands of animals are tested on for human safety, and many die agonisingly long and painful deaths. Animal testing is a valuable asset in scientific research, drug development, health and medical research and cosmetic manufacturing. Animals are frequently used as test subjects because their bodies are very similar to humans' and react in a similar way to different substances. Do you want innocent animals to suffer painful deaths just for your beauty?

Not surprisingly, the types of animals tested on include mice, rats, rabbits, monkeys, dogs, cats, guinea pigs, hamsters, birds and mini pigs. Mice are the most popular animals to be tested on due to their size, ease of handling, fast reproduction rate, availability and low cost. 7,342 mice are used in labs worldwide every day, one every 12 seconds. They are widely considered the prime model of inherited human disease and share 99% of their genes with humans. In 2012, 3,045,690 mice, 262,641 rats and 28,677 other rodents were used in the UK alone (83.1% of the total animals used that year). In 2011, animal use totalled 3,792,857 animals; this equates to 10,391 per day, or one every 8.3 seconds.
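The per-day and per-second rates quoted above follow directly from the annual and daily totals; a quick arithmetic check:

```python
# Verify the rates quoted above (UK 2011 total; worldwide daily mouse figure).
total_2011 = 3_792_857                  # animals used in the UK in 2011
per_day = total_2011 / 365              # about 10,391 per day
sec_per_animal = 86_400 / per_day       # about 8.3 seconds between animals
mice_per_day = 7_342                    # mice used in labs worldwide per day
sec_per_mouse = 86_400 / mice_per_day   # about 12 seconds between mice

print(round(per_day), round(sec_per_animal, 1), round(sec_per_mouse))
```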

Even though many people oppose the idea of animal testing, it has saved many human lives and advanced our knowledge of drugs, cosmetics and more. For example, we now have the technology for organ transplants, which have improved the quality and length of life for millions of people across the world. The first human cornea transplant took place over 100 years ago, following research using rabbits; in 2007, 2,403 people had their sight restored by cornea transplants. In addition, of the 5,000 people who develop kidney failure every year in the UK, 1 in 3 would die without a transplant. Not only the transplantation surgery itself but also the methods of tissue-typing and the anti-rejection drugs were developed using dogs, rabbits and mice from 1950 onwards. In 1967, the first human-to-human heart transplant finally took place. Few people know it took 60 years of animal research to prepare for this: Professor Christiaan Barnard carried out nearly 50 animal heart transplants over 4 years, and heart-lung transplants were later developed using monkeys.

An animal tested on may survive, not reacting to the product, or may suffer in great pain and die from a reaction to it. The animals can be infected with disease, poisoned, burned on the skin, brain-damaged, implanted with electrodes in the brain, or blinded. They are abused and tortured: over 100 million animals are burned, crippled, poisoned and abused in US labs every year. When used in cosmetic tests, mice, rats, rabbits and guinea pigs are often subjected to skin- and eye-irritation tests where chemicals are rubbed onto shaved skin or dripped into the eye without any pain relief. Some tests involve killing pregnant animals and testing on their fetuses. This is inhumane.

Animals in labs live stressful, monotonous and unnatural lives of daily confinement and deprivation. The only change in their lives may come from being called into a research or testing protocol, which may include an invasive experiment or a procedure whose endpoint is death. Imagine spending your entire life as a hospital patient or a prisoner.

Would you pay a high price for designer make-up when an animal has suffered great pain and lost its life for something that isn't necessary in life?

Use of fossil fuels and global warming

Due to global warming and other ill effects of conventional energy sources, there is a need to produce energy in clean and environmentally friendly ways. The fuel cell is one effective solution for producing energy without polluting the environment. There are various types of fuel cells, viz. solid oxide, proton exchange membrane, alkaline fuel cells, etc.; we are going to discuss solid oxide fuel cells in more detail. A solid oxide fuel cell (SOFC) is a device that generates electricity by using the chemical energy stored in a fuel, viz. hydrogen or hydrocarbons. An SOFC consists of three parts: electrolyte, anode and cathode. SOFCs have fuel flexibility, low cost and long-term stability; the high operating temperature is their main disadvantage. To overcome this disadvantage, nanomaterials are used in the electrolytes, anodes and cathodes of SOFCs in order to improve their performance. Various fabrication and preparation methods are used to integrate different nanomaterials into the different parts of SOFCs. In this research, the different fabrication methods along with their applications are discussed. [1, 2]

Problem statement or gap:

The heavy use of fossil fuels like coal, gas and oil over the last 100 years has increased emissions of carbon dioxide and other poisonous gases from power generation devices. This is considered an important factor in environmental problems such as global warming. Energy demand is always increasing, and fossil fuels are depleting at a faster rate; power generation using fossil fuels is not sustainable. Thus, there is a need to find alternative or renewable energy sources that can meet this demand. The fuel cell is considered one of the most efficient and clean power generating devices. Since we are considering fuel cells as replacements for current power generating devices, the efficiency and durability of fuel cells should ideally be equal to or higher than those of the devices they replace. To increase efficiency and durability, different nanomaterials can be used in the three parts of the fuel cell, i.e. the electrolyte, cathode and anode. In summary, there is a need for a better understanding of how these nanomaterials can be integrated into these parts in efficient, fast and low-cost ways. [1, 2, 4]

The research questions that the paper is going to address are:

1. What are some of the efficient methods to integrate the nanoparticles?

2. What are some of the applications of the above methods, along with results showing the power output and durability of the fuel cells? [2, 3, 4, 5]

3. What future work needs to be done in order to improve the long term performance of the fuel cells? [6]

Objectives of your research:

The objective of the current study is to provide a comprehensive review of the literature related to the application and advantages of each SOFC fabrication method. The fabrication methods discussed in the current study are the photolithography process, the sintering process and the infiltration process, used for fabricating nanomaterials on the electrolyte, anode and cathode respectively. In this research, each method is discussed using one nanomaterial per fabrication method: YSZ (yttria-stabilized zirconia) for photolithography [1, 2], NiO/YSZ for sintering [3, 5] and a metal nitrate salt for infiltration [4]. These materials should increase the power output and performance of SOFCs; nanomaterials other than those mentioned can also be used to improve performance. The long-term goal of the research is to help researchers understand the impact of using nanomaterials in SOFCs.

Expected solution or anticipated results of your research:

The results of this research will be shared in the form of a paper, a PowerPoint and a poster presentation. The results will primarily include schematic diagrams of the fabrication methods, along with the preparation methods for the particular nanomaterial used in each. For the electrolyte, the results will include tables and graphs relating SOFC performance to crystallite size, temperature, durability and cell voltage; for the anode, performance with respect to temperature and cell voltage; and for the cathode, performance with respect to temperature. In summary, the performance and durability of SOFCs are expected to increase with the addition of nanomaterials.
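As an illustration of the kind of performance curves such results typically contain, the sketch below computes a power density curve from a simplified polarization model (open-circuit voltage minus activation, ohmic and concentration losses). All parameter values are assumptions chosen for illustration, not data from the cited studies.

```python
import math

# Illustrative SOFC polarization model (assumed parameters, not measured data):
# cell voltage = open-circuit voltage minus activation, ohmic and
# concentration losses, each a function of current density i (A/cm^2).
E_oc = 1.0      # open-circuit voltage (V)
b = 0.06        # activation-loss coefficient (V), simplified Tafel form
R_ohm = 0.25    # area-specific resistance (ohm*cm^2)
i_L = 1.6       # limiting current density (A/cm^2)

def cell_voltage(i):
    """Cell voltage (V) at current density i (A/cm^2)."""
    act = b * math.log(i / 0.001) if i > 0.001 else 0.0
    conc = -0.05 * math.log(1 - i / i_L)
    return E_oc - act - R_ohm * i - conc

# Peak power density found by sweeping the polarization curve.
best = max(((i / 100, (i / 100) * cell_voltage(i / 100))
            for i in range(1, 159)), key=lambda p: p[1])
print(f"peak power ~ {best[1]:.2f} W/cm^2 at {best[0]:.2f} A/cm^2")
```

Lowering the ohmic term (e.g. by a thinner or nanostructured electrolyte) shifts the whole voltage curve up and raises the peak power, which is the effect the nanomaterial modifications aim for.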

Timetable for completion:

February 28, 2015 – Literature review and start research from the reference papers.

March 13, 2015 – Proposal for Final Project.

March 27, 2015 – Progress Report for Final Project.

April 17, 2015 – Present Results in Poster Presentation.

April 24, 2015 – Submit Final Paper.

Your qualifications:

I am pursuing a Master of Science in Electrical and Computer Engineering and am writing this research paper as part of the curriculum for TCM 460.

Limitations, discussion, conclusion:

In this research paper, we will see how nanomaterials can be applied, via fabrication methods, to the electrolyte, anode and cathode of SOFCs. Introducing these nanomaterials will increase the performance and durability of SOFCs. However, there are some limitations to this research: we will cover only a limited number of fabrication methods for integrating nanomaterials into fuel cells, and only one application of a nanomaterial for each of the electrolyte, anode and cathode. There may be several other fabrication methods and nanomaterials not covered in this paper. The degradation of SOFC performance after a certain number of working hours should also be considered in future work. [6]

Situated learning

Situated learning is a type of learning in which individual learners learn through socializing with other, more knowledgeable people, or through observing and imitating real activities in real-life situations. This practice builds on participation and observation in activity.

Situated learning is based on practical activities through which learners gain the beneficial knowledge that they ought to get from school. In past years, learners were taught things that were not really useful to them in their everyday lives. Learners need to acquire skills or knowledge that are relevant to their lives and that might be related to the career they are going to choose in the near future.

Situated learning holds that thinking, learning and doing cannot be separated from the practical and social situations in which they occur; they work in harmony.

When the teacher gives learners an opportunity to participate, demonstrate and exchange their own thoughts, this builds their cognitive abilities. Learners acquire specific skills by observing, visualizing, hearing and listening, and by having someone to imitate or follow.

In situated learning, a learner works by participating in a particular activity of a certain community. Participation involves joining in with the community or group of people who are performing that activity. For example, if a learner wants to know how to design clothes or wants to become a fashion designer, he will probably join a group of people who design different types of clothes. In this way the learner will gain design experience through doing, and from there he will be able to become productive in life after mastering the design skills.

Teaching method – Demonstration

The teaching method that I will use from the situated learning perspective is demonstration: the process of teaching by giving or showing examples, acting out situations or carrying out experiments. Demonstration can serve as proof or evidence of whatever theory or situation is explained to the learners, through a combination of visual evidence (things that you can really see with your eyes) and associated reasoning.

Demonstration gives learners an opportunity to relate to the presented information individually and reinforces memory storage, because it provides the link between facts and the real-world implementation of those facts.

Heather (2009), in an education reference article explaining the demonstration method of teaching, stated: 'When using the demonstration model in the classroom, the teacher, or some other expert on the topic being taught, performs the task step by step so that the learner will be able to complete the same task independently. After performing the demonstration, the teacher's role becomes supporting students in their attempts, providing guidance and feedback and offering suggestions for alternative approaches.'

Implementing the practice in my teaching: using the demonstration method to improve learning

According to the situated learning perspective, people learn through participation, and we participate by joining a group of people who are experts or experienced in carrying out a particular activity. To implement the practice of bringing authentic practice into the classroom, a learner needs to be able to do things or carry out tasks appropriately in real-life situations. The teacher, or an expert from a certain community of practice, will act as a scaffold in this situation by carrying out demonstrations.

In English language teaching, under the speaking domain, I will implement this practice by teaching my learners how to deliver a speech in public. First, I will teach my learners what a speech is, how people present speeches, and what the layout of a speech is, both in presentation and in writing, as well as the main components of a speech: the speech should be logically structured (with an introduction, body, and conclusion); the speaker should be relaxed and try to stay calm even when nervous; the speech should be interesting; and the speaker should use body language correctly. I will also demonstrate by giving the learners a short speech as an example.

Secondly, I will invite an expert from the community of practice, a person who delivers speeches on different occasions, to my class. This person will demonstrate to my learners how people present speeches so that they can improve their skills. After the expert’s presentation, learners will be given an opportunity to ask questions, and I will also ask them questions to check what they have taken from the presentation. Then I will ask them to work collaboratively in groups to come up with a speech following the layout that I taught them, and to choose a presenter from each group to present the speech to the class. After the group presentations, they will be given a chance to comment on or make suggestions about the other presentations.

As the learners become able to perform the task effectively on their own, more tasks are given until they master speech presentation. Learners will then be given a task to prepare their own speeches individually. Before the presentations, they will be given opportunities to rehearse. First, they will submit drafts of the speeches they have written, and I will give them comments and suggestions. In the second rehearsal they will present their speeches in class; this is done to increase their fluency in reading and to remind them of speech presentation strategies such as use of voice, facial expressions, and body language. Then I will ask them to make changes to their speeches where necessary. Finally, they will present their speeches again with an expert observing them, and the expert will give comments after the presentations. If possible, the presentations should be recorded on video.

From the situated learning perspective, learning is a process that does not take place in an individual mind; it takes place in a situated context. In the case of the situated practice of speech presentation rehearsal, the teacher as instructor and the learners constructed the changes in participation that were observed as the learners developed skills, moving from peripheral to fuller participation. In this process, the learners’ participation was transformed through demonstration, and the teacher’s participation complemented the learners’ learning.

Diabetic ketoacidosis

Introduction:

Diabetic ketoacidosis, or DKA, is one of the most serious metabolic disorders seen in both human and veterinary medicine. A severe complication of diabetes mellitus, DKA is characterized by a high concentration of blood sugar, the presence of substances called ketones in the urine, and decreased concentrations of bicarbonate in the blood. Some dogs with DKA will be only mildly affected, but the majority will be seriously ill and may have severe complications such as neurological problems due to brain swelling, acute kidney failure, pancreatitis, and anemia. DKA leads to death in many cases, but aggressive diagnostics and treatment can be lifesaving.

DKA often develops in diabetes that was previously unrecognized or untreated. Thus, it is essential to identify diabetes mellitus, or the development of additional symptoms in a dog known to be diabetic, in order to prevent DKA from occurring.

Clinical Signs:

Clinical signs include weight loss, lethargy, anorexia, and vomiting. Complications may include anemia, electrolyte abnormalities, neurological disorders, and acute renal failure.

Symptoms:

Some of the symptoms related to this disease are as follows:

- Increased thirst

- Loss of appetite

- Frequent urination

- Weight loss

- Tiredness

- Vomiting

Explanation:

DKA is among the most serious conditions that may develop in addition to diabetes mellitus. Ketones, also called ketone bodies, are used for energy production in most body tissues. They are normally formed when fatty acids are released from fatty tissue and transported to the liver, which then makes ketones from the fatty acids. Excessive production of ketones can occur in uncontrolled diabetes mellitus, and as they accumulate, ketosis, and eventually acidosis, develop. The four major factors that contribute to ketone formation in DKA are

1. insulin deficiency

2. fasting

3. dehydration, and

4. increased levels of “stress” hormones such as epinephrine, cortisol, glucagon, and growth hormone.

DKA is more common in animals with previously undiagnosed diabetes mellitus, but it can also be seen in dogs with established diabetes that are not receiving enough insulin. In these dogs, there may be an associated inflammatory or infectious disease. Other canines may develop conditions associated with insulin resistance such as hypothyroidism or Cushing’s disease. Dogs may be only mildly affected by DKA, or they may be close to death at the time of diagnosis. DKA develops at an unpredictable rate, and some diabetic dogs may be able to live fairly normal lives for several months with no treatment at all. However, once DKA develops, most dogs become seriously ill within one week.

The aggressiveness of treatment depends on how sick the dog is. While dogs with mild DKA may be successfully treated with intravenous fluids and insulin, dogs with severe manifestations of disease will need more significant intervention. Fluid therapy, potassium, bicarbonate, and phosphorus supplementation can be vitally important. Any accompanying disorders must be identified and treated specifically where possible to enhance resolution of DKA.

Complications during DKA treatment are common, and can include the development of hypoglycemia, neurological signs due to brain cell swelling, and severe electrolyte abnormalities. Anemia due to red blood cell breakdown can occur if the serum phosphorus concentration drops too low. Acute kidney failure also is possible.

DKA is one of the most serious metabolic disorders seen in both human and veterinary medicine. Many patients will die from it. However, the majority of patients can pull through a crisis successfully with aggressive diagnostics and treatment.

Diagnosis

The diagnosis of DKA is based on the clinical signs together with elevated serum glucose concentrations, ketones in the urine, and reduced serum bicarbonate concentrations in the blood. Mild DKA is present when dogs with high serum glucose concentrations and ketones in the urine appear healthy, have only mild clinical signs, or show only mild decreases in serum bicarbonate concentration. These dogs do not require extremely aggressive treatment, and should be distinguished from dogs with severe DKA. Dogs with severe DKA have high serum glucose concentrations, ketones in the urine, extreme reductions in serum bicarbonate concentration, and often show severe signs of illness.

In addition to the serum glucose concentrations and urinalysis results, other key diagnostic procedures include measurement of venous total carbon dioxide, blood gas evaluation, and analysis of electrolytes and serum kidney values. In addition to a routine urinalysis, a urine culture should be performed on any dog with DKA, as urinary tract infections are very common complicating factors for this condition. A complete blood count, serum liver and pancreatic enzyme measurements, and cholesterol and triglyceride levels should also be obtained. X-rays of the chest and abdomen, and ideally an abdominal ultrasound, should also be used to investigate underlying or associated factors, as well as other abnormalities that might require specific treatment.

Prognosis

The prognosis for DKA is guarded. As many as 5 to 10 percent of humans with DKA die from this condition. Death rates for dogs may be as high as 30 to 40 percent in some settings.

Transfer:

DKA usually occurs either in dogs with diabetes that has been present but unrecognized and untreated for a long time, or in previously diagnosed diabetic dogs that have become ill with another problem or that are receiving inadequate amounts of insulin.

Cure:

Relatively healthy dogs with DKA can be treated with injections of regular crystalline insulin, a potent, short-acting form, to help bring serum glucose levels back under control. It may take a few days for serum glucose and urine ketone levels to fall, but aggressive treatment may not be needed as long as the dog’s condition is basically stable.

Treatment of sick diabetic dogs needs to be more aggressive. Paramount to the treatment of DKA is the gradual replacement of fluid deficits, as well as the maintenance of normal fluid balance. Many dogs will seem substantially better after being treated by intravenous fluids alone. Phosphate supplementation may also be needed, since serum phosphorus concentrations can drop to dangerously low levels during the treatment of DKA leading to serious complications such as a red blood cell breakdown that results in anemia. Bicarbonate is given to help correct acid-base disturbances. Insulin also is vital in the treatment of DKA. In some situations, fluids need to be replaced quickly, while the glucose levels will need gradual adjustment.

Until safer serum glucose concentrations are obtained, most dogs with DKA are treated first with regular crystalline insulin, the most potent and shortest-acting form of insulin, which may be given intravenously or hourly into the muscle. If the dog is not eating on its own, dextrose may be added to the fluids to keep the serum glucose level from dropping too low after insulin is started.

Concurrent illnesses must be identified and treated specifically where possible. Pancreatitis is extremely common in DKA, but there is no specific treatment for this disorder. Bacterial infections need to be identified and treated in a timely manner. Antibiotics usually are given even if a bacterial infection has not been confirmed, due to the problems that infections cause in DKA. Acute kidney failure may also accompany DKA, and needs to be treated aggressively with fluids. Drugs may be needed to stimulate urine production if it appears inadequate.

Complications during treatment of DKA that occur most frequently include the development of hypoglycemia, central nervous system signs, electrolyte abnormalities, and anemia. The best way to prevent these side effects is to aim for gradual correction of the multiple abnormalities associated with DKA. Excessively rapid correction of glucose concentrations and electrolyte abnormalities often leads to brain cell swelling and neurological signs. Electrolyte concentrations need to be monitored very carefully during the treatment of DKA, as frequent adjustments of fluid type and rate, and the amount of potassium supplementation, are often needed. Also, close attention must be paid to the serum phosphorus concentration, as supplementation with phosphorus is often needed to prevent the development of severely low serum phosphorus concentrations and the anemia that can result from this.

Once the dog is stabilized and eating and drinking on its own, longer-acting insulin types can be initiated. In addition, the supportive measures, such as fluid therapy and medications, can be tapered, as long as no other complicating issues surface and improvement continues. Eventually, the animal should be able to go home with an insulin regime designed for at home use, as well as any other treatments necessary to address additional disorders that might be present.

Preventive measures:

There is no specific method for preventing DKA, but careful treatment and monitoring of diabetic dogs is essential. Recognition of the common signs of diabetes mellitus in a dog (increased thirst and urination, increased appetite, and weight loss) is also important so that the diagnosis of uncomplicated diabetes mellitus can be made before DKA develops.

Rhetorical Analysis of Jonathan Swift's 'A Modest Proposal'

A Modest Proposal is a satirical pamphlet that examines the attitude of the rich towards the poor, starving children in their society. Jonathan Swift uses a number of rhetorical devices effectively as he presents his proposal. He uses logical fallacies, metaphors, repetition, and parallelism, as well as humor, sarcasm, and a satirical tone, to highlight these negative attitudes.

Jonathan Swift begins by mocking and blaming the mothers of the children, telling them that they should find honest work instead of strolling about begging for alms. He also predicts a grim future for these children: when they grow up, they will turn to thievery, simply because their parents did not train them in a modest way of life.

Swift uses logical fallacies to make his argument in ‘A Modest Proposal’. His reasoning is deliberately flawed and lacks validity. This is evident in the pamphlet at lines 69 to 73: ‘that a young healthy Child well Nursed is at a year Old, a most delicious, nourishing, and wholesome Food, whether Stewed, Roasted, Baked, or Boyled, and I make no doubt that it will equally serve in a Fricasie’. He writes that a young healthy child is a delicious food to be roasted, stewed, and boiled to be served and eaten. Secondly, he computes twenty thousand children to be reserved for breeding. This dehumanizes the children, treating them like animals.

Jonathan Swift uses emotional appeal in his argument by proposing that slaughterhouses be erected in suitable places and butchers employed to do the work of slaughtering the children. He further exaggerates by saying that the children will be roasted like pigs. Swift knows well that this proposal will affect many readers, because no person would want his or her child to be butchered. Beyond that, Swift captures the reader’s emotion at lines 34 and 35: ‘prevent those voluntary Abortions, and that horrid practice of Women murdering their Bastard Children’. This is a horrific behavior that is opposed everywhere in the world.

Another rhetorical device that Swift uses in his work is irony. He says ‘I calculate there may be about two hundred thousand couple whose wives are breeders’ and ‘how this number shall be reared and provided for’. This suggestion is ironic because he compares women to animals. It also sharpens his argument, because human beings do not breed and cannot be reared. He thereby dehumanizes human beings and creates satire in this statement.

In support of his argument, Swift sarcastically claims that certain body parts of a child are good to eat, and clarifies that on certain occasions these body parts will be in demand. He further suggests that good, healthy children will be skinned, and the skin used to make admirable gloves for ladies and summer boots for gentlemen. This idea is ridiculous to the extent that children will not only be a delicacy, but their body parts will be used to make ornaments. Secondly, he sarcastically suggests an option for Ireland to counter its economic problems: if the poor children can be food, this will create good revenue for the country through exporting the surplus children’s flesh to the rich outside Ireland. Thirdly, Swift computes the selling price of one child to be ten shillings, as recorded at lines 103-105: ‘I believe no gentleman would repine to give Ten Shillings for the Carcass of a good fat child, which, as I have said will make four Dishes of excellent Nutritive Meat’. He proceeds to make fun of the mothers, saying they will get eight shillings’ profit to use until they are able to produce another child.

Swift applies a sympathetic tone in his proposal, especially at the beginning. In paragraph two, he requests an amicable and permanent solution to lift these children out of the deplorable state in which they live. He goes on to propose that anyone who finds a cheap and easy method of making these children useful should have a statue built in his or her memory. Swift’s tone is not constant throughout his proposal. He later shifts to a horrifying tone as he proceeds to give his personal opinions about these children: for instance, he talks of butchering the children to make delicious food, and of skinning them to make admirable gloves for ladies and summer boots for gentlemen. This tone shocks the reader and creates fear.

To what extent can psychiatric services in the special observation ward of a general hospital under the Hospital Authority be improved by a nurse leader?

What is leadership about?

Many different people have defined leadership in different ways (Heacock, 2013). According to Hickman (1998), leadership aims to induce followers to follow and take action in order to complete specified goals; it can reveal the values of the leaders and the followers, as well as their motivations. According to Jooste (2004), leadership is more complicated than simply controlling followers; leaders try to help and teach followers to complete tasks step by step through planning, leading, controlling, and organizing. According to Northouse (2009), leaders have the ability to influence followers to complete a specified goal. Rogers (2003) stated that leaders can help followers become more aware of the uncertainty and possible outcomes of potential changes. This shows that a leader must have good leadership skills in order to make followers aware of what is happening and of the value of changing current conditions. This helps followers develop a better understanding of the specified goal, and as a result they are more willing to follow and to complete the tasks smoothly.

Importance of Leadership

According to the Strategic Service Plan of the Hospital Authority (2009-2012), health care workers and frontline staff need to enhance their skills to meet rising patient service needs, growing patient numbers, the increasing complexity of medical devices, and more complicated medical cases. To address these problems and needs, the Hospital Authority has focused on three aspects: management skills, leadership skills, and clinical competence. The Hospital Authority emphasizes the importance of leadership and devotes substantial resources, such as overseas training and classroom courses, to enhancing the leadership skills of current ward managers, nursing officers, advanced practice nurses, and future leaders. According to the Hospital Authority Annual Plan (2011-2012), one of the key objectives was to ‘Build People First Culture’, and one of the priority services for 2011-2012 was to ‘Enhance professional competencies and build up effective management and leadership’.

Overview of psychiatric services in Hong Kong

According to the Hospital Authority Mental Health Service Plan for Adults (2010-2015), an estimated 1 million to 1.7 million people in Hong Kong had psychiatric problems, and about 70,000 to 200,000 people were suffering from severe psychiatric problems. Around 40,000 of them had diagnosed schizophrenia, and nearly half of those were treated in an out-patient setting.

According to Chui, Mui, Cheng et al. (2012), the aim of public psychiatric hospitals was to minimize psychiatric admission rates and to focus on psychiatric community services, such as psychiatric out-patient clinics, psychiatric community out-reach team services, and the Consultation Liaison Team (CLT) in public general hospitals, for patients who needed psychiatric services. The Hospital Authority has tried to minimize the psychiatric admission rate by implementing such services since 2009.

Why I chose the topic to discuss and how it is important to me and others

I chose this topic because I work in a ward called the special observation ward, which is strongly supported by the Consultation Liaison Team (CLT) service in a public general hospital for patients who need psychiatric services. Some of the patients are not physically fit for transfer to a psychiatric unit; some are patients transferred back from the psychiatric unit for medical problems; some are elderly patients with newly diagnosed dementia whose relatives cannot accept the reality, even where social support is poor. However, many problems exist, such as placement difficulties, complaint cases, safety problems, and long waiting lists for patients who need to be admitted to my ward.

This paper starts with the introduction and a real case scenario selected from my ward (the special observation ward). I will compare different leadership models, such as laissez-faire leadership, transactional leadership, and transformational leadership, and discuss the leadership style of my leader in the case scenario. In the discussion, force field analysis will be used to analyze the data; I will summarize the findings and discuss how the situation could be handled more effectively and improved. A reflective summary with my comments will form the last part of this paper.

Context

Case scenario:

I work in a special observation ward in a general hospital under the Hospital Authority. This ward receives patients with unstable emotions or psychiatric problems who are not physically fit for transfer to a psychiatric hospital, or who need close observation in a general hospital. There are only 24 beds available, with a long waiting list, so patients wait a considerable time to be admitted. Only 3 APNs, 13 RNs, and 5 ward assistants support the ward over 3 shifts, with heavy workload and stress.

My ward manager wanted to improve the quality of service and reduce the long waiting time for admission. She introduced many guidelines and policies for staff to follow. Some of the policies were: 1) discuss with relatives of patients with dementia to find a placement, in order to reduce the length of hospital stay; 2) the team in-charge should screen out cases that can be discharged early or transferred to the psychiatric hospital; 3) write detailed reports in each patient’s record to reduce the chance of being challenged when the Patient Services Department receives complaints.

One month later: 1) some relatives complained that nurses were forcing them to find an old age home within a short period of time and forcing patients to be discharged before they were well prepared; 2) doctors felt unhappy and complained that nurses were overriding their decisions about discharge plans, as some newly promoted nurses did not have sufficient knowledge to screen suitable cases, which caused low morale and conflicts; 3) nurses had to spend a lot of time writing patient records, causing low morale, because they always had to stay after handover to finish writing detailed records (usually more than an hour). Also, many junior nurses did not have sufficient knowledge of the detailed, specialized documentation skills required, which put the nurse in-charge in a very difficult position.

Analysis

I will compare different leadership styles and analyze the leadership style in this scenario. According to Hickman (1998), transactional leaders only correct problems or mistakes once they have happened and threaten the leader’s management plan; no changes are made if nothing happens. Hickman (1998) also stated that transactional leaders avoid development and improvement, as they have no motivation to make changes. According to Bass (1990), transactional leadership uses rewards or punishments to make followers achieve the goals.

As for transformational leadership, Hickman (1998) stated that transformational leaders try to motivate followers to achieve the goals and the changes needed, and the morale of the followers can be higher. A transformational leader does not need to use authority or power to control the followers. According to Bass (1990), transformational leaders act as role models to their followers in order to gain their trust and loyalty. In addition, through mentoring and empowering, transformational leaders let followers enhance and develop their potential in order to complete the specified goals.

According to Gill (2011), under the laissez-faire leadership style there are no guidelines or protocols for followers to follow. The morale of the followers may be high or low, as the followers can do whatever they want. However, the followers develop their own style of work and may therefore reach the specified goal more easily.

However, both transactional and transformational leadership set clear objectives and goals for followers, with clear guidelines. As a result, the followers can better understand what they should do to achieve the goals.

The leadership style of my case scenario:

Firstly, in this scenario, my ward manager’s leadership style was autocratic, and she acted as a transactional leader. She had the greatest power and the highest position, enabling her to influence all the staff, including nurses and doctors, as well as patients and their relatives. According to Bass (1990), a transactional leader has the power to make the plan and, using that authority, to direct followers to carry it out in order to achieve the specified goals.

In this scenario, she used her power to set guidelines and policies for the staff to follow in order to improve the quality of service and reduce the long waiting time for admission. The advantages were: 1) admission rates to the special observation ward increased, averaging 10 patients per day after one month, as evidenced by the admission book; 2) the Patient Services Department sent an e-mail to the department head appreciating the detailed documentation written in patient records, which minimized the investigation time needed to answer complaint cases; 3) discharge rates increased, as many elderly patients with placement or caring problems were discharged directly to aged homes rather than to their own homes, and cases that were medically fit for transfer to the psychiatric unit were screened out earlier; this evidence appeared in the discharge record. Although her goals were achieved, there were disadvantages: 1) low morale among the nurses, as they had to spend a lot of their own time writing documentation; 2) heavy workload for staff in writing detailed documentation on top of routine work; junior staff had difficulties with proper, specialized documentation, and senior staff felt fatigued doing their own work while also teaching and supervising junior staff; 3) poor relationships between nurses and doctors, and with relatives, who often said that nurses were forcing the early discharge of patients, leading to an increase in complaints.

Discussion

Changes and Force Field Analysis

According to Carney (2000), the basic and essential skills for all nurse leaders are to manage, implement, and support the change process in order to ensure that followers adapt to the changes. If a leader lacks these leadership skills, the change process may not succeed. Force Field Analysis (Lewin, 1951) can help identify what difficulties the followers encountered and how. In 1951, a Force Field Analysis was carried out to assess how followers could implement the Family-Centered Care Program from the original situation, and advice for improving leadership skills was given based on the analysis results.

Force field analysis is a model designed by Lewin in 1951. It is useful for determining the effectiveness of the variables involved, and it helps develop strategies for change by intervening in some of those variables. Lewin assumed that both driving and restraining forces occur whenever change takes place. Driving forces are the forces that keep the change going continuously, while restraining forces are those that arise to resist the driving forces. According to Baulcomb (2003), equilibrium can be reached once the leader is able to decrease the restraining forces and to move toward the desired state by increasing the driving forces.

From the analysis, the restraining forces were: 1) poor documentation skills of new staff; 2) poor communication between staff and relatives; 3) heavy workload due to extra duties such as teaching new staff and the extra time needed for detailed documentation; 4) high stress on staff when choosing potential early-discharge patients, owing to conflicts with doctors and relatives; and 5) an increase in the number of complaint cases. The driving forces were: 1) reducing the waiting time for admission; 2) increasing the discharge rate; 3) improving staff documentation skills; and 4) increasing the quality of services.

What was successfully achieved, as listed in the Force Field Analysis for this scenario?

1) Reduce the waiting time for admission to special observation ward.

2) Increase the discharge rates in special observation ward.

3) Increased documentation skills for some of the nurses in the special observation ward (but not all).

What was addressed but failed to succeed at the beginning, without changes?

1) Increase documentation skills for some of the nurses in special observation ward.

2) Reducing the rate of complaints (due to the increasing rate of conflicts over placement issues).

3) Improving the quality of care, as the morale of nurses was low and they felt stressed by the heavy workload.

After identifying the restraining and driving forces, changes or solutions can be established to eliminate or minimize the negative factors and make improvements. Regarding the stress issues: many junior nurses need a lot of time to handle routine work because they lack experience, so they have to stay after their shift ends to finish all the tasks, including the detailed documentation. Senior nurses also have the responsibility of supervising the junior nurses, so they too leave late after their shifts. Documentation training classes could be run by experienced staff for junior staff, and samples of specialized documentation could be shared for reference. Regarding the discharge issues: there are communication problems, and many doctors and relatives will not listen to the nurses’ advice, leading to conflicts and complaints. This could be helped by holding a meeting with the doctors to reach agreement before starting the program, and nurses could involve the medical social worker and the pre-discharge team when placement problems are difficult to handle, in order to avoid conflicts and complaints from relatives. As a result, stress and workload can be reduced while the specified goals are still achieved.

According to Cain and Mittman (2002), change within a health care setting should be promoted and supported, but should not greatly disrupt the existing situation. Conner and Patterson (1982) stated that changes fail because followers lack commitment to them, and that it is important for followers to accept and support a change in order for it to succeed.

In this scenario, although changes were necessary to improve the quality of services, the leader did not provide adequate support, time, and training before implementing the policies and protocol. As a result, the followers showed a lack of energy and even felt stressed about supporting the changes.

Conclusion

Different leadership styles, such as laissez-faire, transactional, and transformational leadership, were introduced, and a case scenario was shared. A Force Field Analysis was used to identify the pushing and restraining forces, which can help to improve the situation, and the case scenario was then discussed with a view to improvement. A good leader should demonstrate the advantages of change and minimize its weaknesses. Regular review and support are essential, and a good leader should be ready to accept feedback and suggestions.

Part2:

Reflective Summary

I am a Registered Nurse who has worked in the Medical Department for nearly nine years, about seven of them on rotation to the special observation ward under that department. I am the second most senior Registered Nurse in this ward and regularly act as ward in-charge and as mentor to newcomers and student nurses. When my ward manager decided to implement the guidelines and protocols described in Part 1, the workload increased and I felt very stressed, as I had to select potential early-discharge cases and present them to my ward manager and the doctor-in-charge. I also had to spend a lot of time during visiting hours explaining the importance of placement issues to relatives with caring problems, while half of my colleagues were out for dinner, and some relatives would scold the nurses for forcing patients into aged homes. For potential complaint cases, the nurses involved had to write detailed documentation in the patient's record, sometimes running to three pages. All of the nurses felt fatigued and stressed about the new guidelines.

If I were my ward manager, I would choose to be a transformational leader, because transactional leadership focuses only on goal achievement, with punishments and rewards set for the followers. Outhwaite (2003) stated that transactional leaders must have the ability to achieve a common goal and that routine work should be done sufficiently; the followers receive sufficient instructions from the transactional leader to ensure the work is done successfully and effectively. However, the morale of staff under transactional leadership is low, as they will always try to avoid punishment by following the standard guidelines to achieve the specified goals.

I think transformational leadership is more suitable in the nursing field because the followers have more opportunity and freedom to be involved in the decision-making process. The transformational leader can allow followers to carry out tasks suited to their particular abilities, which gives them opportunities to learn leadership skills and knowledge. The relationship between leader and followers is also better under transformational leadership.

A transformational leader always tries to emphasize change and encourage commitment. Moreover, I believe that transformational leaders spend more time teaching and coaching their followers, so the followers are more satisfied and happier. A transformational leader also provides followers with training for further development in specific areas and helps to develop their strengths.

All of the public hospitals are already facing insufficient manpower, and turnover rates are high due to heavy workload and a poor working environment, so it is not practical to implement new guidelines that increase the workload of staff without sufficient support. If I were the leader, I would gather sufficient information and suggestions before implementing a new guideline or policy, and I would provide adequate professional training and support to the staff to allow them to develop their strengths. I would also listen to the staff and allow them to offer suggestions or advice for improvement and change, because morale is higher when staff have the chance to be involved.

To improve my leadership skills, I will use a reflective diary to write down notable events that happen in my ward, together with the advantages and disadvantages of my ward manager's leadership style, in order to identify further improvements and lessons worth learning. It will help me summarize the events and evaluate myself as a role model before I can be promoted to Advanced Practice Nurse. I can also seek my ward manager's approval to set up projects, such as one on the topic of documentation, using a transformational leadership style to test the effect of running projects and to observe the followers' responses for my further development.

I will set a three- to twelve-month leadership development plan for the special observation ward using SMART objectives, as set out below:

1) To implement a documentation training course for all of the nurses working in the special observation ward, to enhance their documentation skills and the special documentation style used for patients in this ward: 100% of the nurses working in the special observation ward will have the chance to join the training course within six months.

2) To implement a communication skills training course for all of the nurses working in the special observation ward, to enhance their communication skills, especially with patients and their relatives: 100% of the nurses working in the special observation ward will have the chance to join the training course within twelve months.

3) To increase the morale of my colleagues in the ward by receiving feedback on the new guideline and allowing their involvement in improving the services within three months.

Manhole Rehabilitation

Manholes are underground structures built to provide man-entry access for the maintenance of sewer and drain lines, water valves, metering equipment, and so on. They are liable to deterioration from groundwater intrusion, corrosion from liquids and gases, and general decay from age.

With the introduction of effective manhole repair technologies, defects and problems in manholes are getting more attention. During the pipe lining process, leaks can be redirected into the manhole structure, leading to major infrastructure damage, so problems need to be fixed at an early stage to lessen the hassle.

Manhole rehabilitation has become common: it costs less to rehabilitate manholes than to replace them, and it has become an important approach in the trenchless sector. Our keyword city state experts assist you with manhole inflow issues, restoring the structures, and more. Manhole rehabilitation is a service focused on repairing damaged manholes rather than replacing them. Handling a manhole issue on your own is impossible; you need to call a reputable full-time servicing company for keyword city state rehabilitation rather than replacement, at competitive prices.

At Metro inspection, we offer major manhole rehabilitation services:

• Epoxy manhole rehabilitation

• Chemical grouting

• Tunnel and cementitious rehabilitation

• Manhole plugging solutions

• Manhole vacuum testing

• Fiberglass insertion

• Mortar sealing

• Manhole casting rehabilitation

• Manhole chimney sealing

A manhole is typically the biggest source of infiltration in a sewer system, and manhole rehabilitation is the procedure that can stop both the infiltration and the corrosion. Our keyword city state specialists use the latest technologies to restore a manhole's structural integrity. They carry out a proper inspection and then repair the infiltration sources to rehabilitate the manhole perfectly at the best cost. Our well-trained contractors use time-saving trenchless methods to respond quickly. Many benefits come with our manhole rehabilitation services: you can repair the existing manhole without incurring any extra maintenance cost.

Whether you need commercial sewer services or residential services related to keyword in city state, our contractors are here to get the job done quickly and within your budget. With advanced rehabilitation methods, we can offer you the best manhole solutions at reduced cost. If you are looking for a reputable service company in city state, contact us: our keyword city state engineers will provide accurate manhole repair services. We perform complete manhole repairs instead of replacements. Metro Inspection Inc has the capability to find manholes even when they are hidden, and we use modern technology to rehabilitate existing manholes effectively. We are a full-time servicing company that can meet your requirements for municipal and industrial sewer line services, and our experts have experience with both residential and commercial keyword city state services.

We focus on fixing the following issues:

• Street restoration costs and problems with poor compaction

• Noise and the dragging of dirt into businesses and homes

• System capabilities

• Structural restoration of deteriorated manholes

• Standard concentric manholes

• Eccentric cone sections

• Traffic tie-ups and detours

• Disruption of service to the community

• Disruption of other utilities in the path of excavation

There are many methods used by companies to fix existing manholes, but we offer solutions using the latest patented technology, Perma-Liner. Our Perma-Liner is uniquely designed to fit all manholes and tackle the problem perfectly.

As our contractors have years of experience, they are capable of addressing infiltration issues in manholes and ready to work within your budget. We specialize in repairing old manholes using advanced repair technology, which extends the manhole's life and adds value to the manhole structure.

To get trouble-free manhole rehabilitation, lessen your burden by availing yourself of our services. As a reputable and reliable servicing firm, we serve contractors, homeowners, business owners, and government across city state. Our team is trustworthy and committed to delivering the best solutions for manhole repair, helping you extend the life of your infrastructure.

What do you think: is manhole rehabilitation an easy task? It is often a complex one, and for it you need a genuine and friendly team to tackle your manhole problems.

WHY METRO INSPECTION SERVICES?

• Pocket-friendly solutions

• Quick response

• Highly experienced staff

• Reliable solutions

• Low insurance costs

• Environment-friendly technology

Your manhole repair services will always be tailored exactly to your specific situation. We take into account your substrate condition as well as application conditions, including dust, water, and confined spaces. Materials can also be suited to your needs:

• Adhesion

• Abrasion

• Chemical resistance

Choose us for long-lasting manhole rehabilitation. Our skilled keyword city state technicians will assist you with quality services to solve your problem in a hassle-free manner. Metro Inspection offers pipeline inspection, trenchless sewer repair, sewer cleaning, drain cleaning, hydro excavation, manhole rehabilitation, and more through knowledgeable experts. Our company uses top-quality repair products to perform manhole repairs, with our main emphasis on the "no-dig" methods now in demand by the majority of customers.

At Metro Inspection, we understand your needs and offer flexible services accordingly. We are available at your doorstep, even in an emergency. Our fully insured firm is capable of handling manhole defects without any extra cost. We strive hard to satisfy our clients by providing the right solutions at the right time; our customers' satisfaction is our top priority, so we work according to our clients' requirements and budget to keep them happy.

What we do

Our comprehensive keyword in city state services are user-friendly, environmentally friendly, and cost-friendly. Our skilled contractors first locate the manhole for cracking and corrosion repair; then we use our patented manhole repair methods to improve the life of the existing manhole. We even fix small manhole problems to stop them from turning into big ones.

It's the right time to avail yourself of manhole repair services at an affordable cost and save time and trouble!

Decentralisation

Over the past two decades, a wave of decentralization to local political bodies has been observed all over the world (Martinez-Vazquez, May 2007, p. 1). This worldwide trend towards decentralization is welcomed by academics and experts as a positive sign of democratic transformation, and the process can be perceived through two fundamental observations: 'First, decentralization is most often associated with an increase in local autonomy. Second, the connotations and values attached to decentralization and local autonomy are almost exclusively positive.' (Beer-Tóth, 2009, p. 29) However, in most cases political or administrative transfers of power have not been followed by proper empowerment in fiscal affairs. Low fiscal autonomy has been a major policy problem in the decentralization process at the local level in both developed and developing nations, and central control and supervision of local affairs has also proven to be a major obstacle to the governance of local governments around the world. Lack of fiscal autonomy is closely related to ensuring accountability and transparency in local government bodies. For better governance at the local level, it is urged that more emphasis be given to local fiscal decentralization, so that local governments have a certain level of financial resources with which to organize their internal affairs and ensure people's empowerment at the local level. This paper is designed to examine the major issues and concerns related to fiscal autonomy, accountability mechanisms, and decentralization at the local level around the world, to connect those issues to the broader governance paradigm, and to identify the major challenges to advancing democratic practices at the local level.
The paper will give an overall view of the trends in local governance practices in both the developed and developing world, and will try to bring all the concurrent issues and challenges related to local governance financing under a comparative lens.

A) Financing by Central Government: central control and the question of autonomy

Dependence on central government in fiscal affairs is a worldwide trend in local government financing, and intergovernmental transfers are an important source of local government finance around the world. These transfers are thought to have political dimensions, as most are designed at the center with political motives. It is therefore important to assess the role of central government in financing local bodies around the world. This part discusses and analyzes the global trends in intergovernmental transfers, the imbalance between center and local government, and its political dimensions, with the purpose of comprehending the magnitude of central government transfers to local governments around the world.

1. Intergovernmental transfers for financing local governments

Intergovernmental transfers are the main source of local government finance around the world.

The transfers are especially important for developing nations because local government taxing powers are very limited in most of the developing world. In fact, many different types of transfers are in use around the world, and it is difficult to settle on a best practice (Roy, 2008, p. 30). It is urged that the flow of government grants to local governments be reduced and the scope of local taxation and resource mobilization increased. Indeed, the share of government grants in local government budgets is recognized as an indicator of financial autonomy at the local level (Daniel Bergvall & Merk, 2006, p. 4), and bridging the gap between revenues and expenditures remains the main challenge for the effective execution of decentralization and democratic transformation. However, there is not yet any consensus on whether these transfers promote efficiency or misallocate resources at the local level. In one view, a lack of adequate resource transfers to local governments creates difficulties in financing their expenditure responsibilities, while in another view, overdependence on central grants can undermine local accountability. According to one analyst, overdependence can create perverse incentives at the local level to misallocate public resources in a federal system (Khemani, July 24, 2001, pp. 5-6).

2. Political dimension of financial decentralization

Local autonomy is a fundamental base for making democracy work, and is often referred to as a 'school in democracy' (Shimizutani, 2010, p. 99). People's participation should come from the grassroots, and a decentralized, autonomous local body can equip people at the local level to promote democratic procedures. Nevertheless, decentralization can backfire on its own strengths: while it is believed to break down the asymmetric relationships of clientelism at the local level, it can create new clientelistic political practices in the real world (García-Guadilla & Pérez, 2002, p. 104). Indeed, in many cases decentralization simply empowers local elites to capture a larger share of public resources, often at the expense of the poor (Johnson, Deshingkar, & Start, 2005, p. 937). Recentralization can also be observed, for political reasons. Nicholas Awortwi examines the administrative reform policies of Ghana and Uganda and shows that recentralization and the further weakening of local governments are likely to continue in both countries, because the initial path benefited politicians and bureaucrats, who are committed to staying on that course (Awortwi, 2011). Political calculation is always a major factor in any policy setting; even in the developed world, such as the UK, a political trend of targeting local government funds can be identified (John & Ward, 2001). Center-periphery financial relations in different countries have always evolved differently in different political contexts. Moreover, developing countries often reach their decisions about intergovernmental transfers for political reasons as well (Roy, 2008, p. 33). Bahl Roy explained the politics behind intergovernmental transfers in three categories:

i. The Central authority likes to provide local governments with intergovernmental transfers that carry stringent conditions to bypass the decentralization demand.

ii. A reason for advocating intergovernmental transfers by central government is the goal of enforcing uniformity in the provision of public services.

iii. A transfer system may be put in place as part of a political strategy to hold open the option of offloading the budget deficit on to subnational governments (for example, underfunding a grant program). (Roy, 2008, pp. 33-34)

Though it is thought that there are political calculations behind the granting of government transfers, they are the dominant trend in both the developed and developing world, and the trend of intergovernmental transfers is likely to continue.

3. Financial gap between local and central governance

Countries, both developed and developing, transfer funds to equip local governments to provide services and generate development at the local level. However, developing and transition countries are characterized by wide disparities among regions in economic well-being (Roy, 2008, p. 31). The vertical imbalance between center and periphery is a common symptom of fiscal imbalance in developing nations, which it is believed should be treated with policies of financial empowerment. One analyst emphasized adopting measures to equalize inter-regional differences in financial capacity, which can be accomplished by providing intergovernmental transfers (Roy, 2008, p. 31). A study of nine major developed and developing countries likewise suggested adopting more equalization formulas to address the disparity problem (Ma, 1997). Roy Bahl identified one reason behind subnational transfers as offsetting externalities, since local governments making their own decisions may underspend on services with substantial external benefits (Roy., 2000, p. 3). Roy also argued that reducing the administrative cost of taxation may be another reason for the central authority to collect taxes itself and then transfer grants to the local level (Roy., 2000, p. 4).

In OECD countries, 34.4 percent of revenues come from transfers (Shah & Shah, 2006, p. 37). A study of OECD countries identified a growing trend over the last twenty years of a widening gap between sub-national tax and expenditure shares (Daniel Bergvall & Merk, 2006, p. 5), which has caused a higher dependence of sub-national governments on grants. Fiscal decentralization in OECD countries has thus, in fact, shrunk the scope of fiscal autonomy, as sub-national governments have become more dependent on central governments for their resources. Intergovernmental transfers from the center to state governments in the USA constitute a large part of state budgets: these transfers accounted for about 38% of all local government revenues, ranging from a low of 19.2% in Hawaii to a high of 70.2% in Vermont (Wildasin, 2009, p. 7). In developing countries, the dependence on fiscal transfers is even more pronounced: intergovernmental fiscal transfers finance about 60 percent of subnational expenditures in developing and transition economies (Shah. A., 2007, p. 1). In a World Bank study of selected countries, the average funding of local governments by government transfers was found to be 50.9 percent (Shah & Shah, 2006, p. 37), with fiscal transfers much larger than average in Uganda (85.4 percent), Poland (76.0 percent), China (67.0 percent), Brazil (65.4 percent), and Indonesia (62.0 percent) (Shah & Shah, 2006, p. 37). An ADB report also notes that significant vertical fiscal imbalances prevail in Bangladesh, India, and Pakistan, and at the local level in the Philippines, the PRC, and Viet Nam (Martinez-Vasquez, 2011, p. 5). As for revenue autonomy, low autonomy is common in many countries: revenue autonomy is found to be low outside Japan and the Republic of Korea, and much lower in Indonesia and the Philippines, although autonomy at the provincial level can be traced in India, Pakistan, and the PRC (Martinez-Vasquez, 2011, p. 5).

4. Fiscal autonomy and the question of public service delivery of local government

Decentralization is recognized as a way to bring people closer to government services and also as a feedback mechanism responding to local people's needs. This move reflects public preferences for more democratic and participatory forms of government in order to improve the level of public services and respond to the needs of their users (Sayuri, 2005). Though the notion of fiscal autonomy is central to the fiscal decentralization literature, the idea did not receive proper academic investigation at the beginning. The local autonomy concept can be traced to the Tiebout model of 1956 as an arrangement for local competition. Probably the earliest attempt was by Clark, who described autonomy as a relative concept with two specific powers: the power of initiation and the power of immunity (Beer-Tóth, 2009, p. 31). Early theorization mostly dealt with the question of the capacity of local government, following Clark, and later literature incorporated other issues, including local government autonomy. The European Charter of Local Self-Government, adopted by the Council of Europe in 1985, described local self-government (i.e. local autonomy) in terms of the dual characteristics of the right and the ability to manage local public affairs (Beer-Tóth, 2009, p. 36). It is therefore clear that fiscal empowerment is an important part of decentralization, without which the goal of effectively providing services at the local level cannot be achieved.

Though a wave of decentralization has been recorded around the globe in the last two decades, the decentralization of local bodies has not been supported by proper autonomy in fiscal affairs. Low expenditure autonomy due to central supervision prevents local governments from introducing or maintaining services on their own. A study of local government finance in some OECD countries found that the most common way of transferring resources from central to subnational government is through earmarked grants, used to finance subdivisions of services and to equalize tax or service capacity (Daniel Bergvall & Merk, 2006); the study affirmed that non-earmarked grants can be a more effective instrument for financial purposes. On the other hand, a study of the fiscal decentralization of Asian countries found that many Asian countries exhibit the highest levels of decentralization in the world in terms of the subnational government share of total expenditures (Martinez-Vasquez, 2011, p. 3): 70% of total expenditure is allocated at the subnational level in the PRC, 66% in India, 60% in Japan, and 45% in both the Republic of Korea and Vietnam. However, this data in many cases fails to capture the actual level of autonomy at the local level; throughout the entire region, heavy reliance on transfers and revenue sharing can be found, and lower-tier governments in most Indian states have very little expenditure autonomy from their state governments (Martinez-Vasquez, 2011, p. 3). It is also noticeable that central governments in many countries are involved in local functions as well. Expenditure autonomy (the percentage of own expenditure under the effective control of sub-national governments) is on average higher in transition economies (74% overall, but 96% in Croatia and 7% in Albania) than in developing countries (58% overall, but 95% in the Dominican Republic and 23% in South Africa) (Shah. A., 2004, p. 17).

B) Financing from own sources: three major sources of local financing

There are different means of financing local needs from local governments' own resources. The three sources on which local bodies mostly rely are local taxation, local government borrowing, and public-private partnership, all of which have significant importance for local financing.

1. Local level Taxation: empowered by own sources

Taxes are the most important source of local government revenue. The financial decentralization process provides local government institutions with the authority to change tax rates, initiate new taxes, and enhance the scope of taxation. It is thought that fiscal decentralization will widen the taxation net and that a greater share of GDP will be reached by the tax system. Indeed, it is believed that increased subnational revenue mobilization will reduce the need for intergovernmental transfers from central revenues (Bird & Bahl, 2008, p. 4).

Significant tax assignment to subnational governments has become prevalent in developed countries (Bird R., November 2010, p. 1). Bird and Bahl examine different country cases and identify the trend in the developed world (Bird & Bahl, 2008, p. 6). US state governments and Canadian provinces have almost complete autonomy in choosing any tax base, so long as there is no interference with interstate commerce. In Denmark and Sweden, local taxes account for nearly one-half of local government spending. Revenues from subnational government taxes in Switzerland are greater than revenues received from grants. Though Japan has had a conservative tax policy that allowed local governments little taxing capability, the country is planning intergovernmental reform to shift significant taxing power to local governments (Bird & Bahl, 2008, p. 6). However, in most developing countries, central governments have been reluctant to reform the taxing system for subnational governments (Bird & Bahl, 2008, p. 7). The subnational tax share of total taxes in developing countries is only about 10 percent, versus 20 percent in industrialized countries, and these figures have changed little in the last 30 years (Bird & Bahl, 2008, p. 7). Local governments in countries like Cambodia, China, and Vietnam get less than 5 percent of their total revenues from their own sources (Talierciao, 2005, pp. 107-128). On the other hand, in a few developing countries, such as the Philippines, Brazil, and Colombia, a third or more of subnational government expenditure is met from own sources (Bird & Bahl, 2008, p. 7).

It is thought that increased fiscal autonomy would improve the efficiency and responsiveness of public sector governance (Fjeldstad & Semboja, 2000, p. 28). However, strengthening autonomy by giving local government more taxation power can cause greater mismanagement and corruption in local authorities. In a developing country like Tanzania, where local taxes represent less than 6 per cent of total national tax revenues (Fjeldstad & Semboja, 2000, p. 7), it is strongly recommended that the revenue system be restructured, combined with capacity building and improved integrity mechanisms. In the case of India, it has been observed that the decentralization of fiscal power to local Panchayat bodies eventually decreases the volume of taxes and also shrinks the tax base: the chiefs of the Panchayats always weigh electoral factors, which is one of the causes of declining taxes. It is therefore recommended that more accountability measures be undertaken and incentives provided for tax collection by the Panchayats (Jha, Kang, & Nagarajan, 2011). In the case of tax autonomy, then, it can be assumed that capacity building and ensuring accountability and transparency are crucial when transferring power to local authorities.

A major part of local revenues around the world is collected from property taxes. OECD countries raise 54 percent of local revenues from property taxes, 23 percent from personal income taxes, 14 percent from corporate taxes, and 9 percent from other taxes (Shah & Shah, 2006, pp. 37-39). It is therefore apparent that local governments in OECD countries depend more on property and income taxes than on other sources. The developing world, however, lacks proper tax autonomy because of unwilling political elites and capacity problems: for all developing countries, revenues from property taxes constitute only 0.5 percent of GDP, against about 2 percent (1 to 3 percent) of GDP in industrial countries (Shah & Shah, 2006, p. 39). Property taxes may therefore represent significant untapped potential for funding local affairs in developing countries.

2. Local government borrowing: Challenges and promises

The unavailability of government grants and the lack of local funding sometimes compel local governments to take loans from the public and private sectors. Local government bodies usually obtain loans from the banking sector (both national and international development program loans) or issue bonds (Bucic & others, 2011, p. 2). Development projects are designed with such borrowing options for emergency situations. Large infrastructure deficiencies in developing countries call for significant access to borrowing by local governments (Shah & Shah, 2006, p. 40). Local access to credit requires well-functioning financial markets and creditworthy local governments; however, most local governments in developing countries lack both (Shah & Shah, 2006, p. 40). Heavy reliance on borrowing can also jeopardize macroeconomic stabilization: for example, perversely structured intergovernmental systems destabilized the economy of Argentina in the late 1990s (Yilmaz, Beris, & Serrano-Berthet, 2008, p. 281). After the 1990s, Japan took initiatives to empower local governments by issuing bonds with guarantees, uniform issuing conditions, and secured finance from public funds to bridge the gap between revenues and expenditures; but this proved ineffective and unproductive in most cases, and it has been suggested that an accrual-based accounting system be adopted instead of a cash-based one (Sayuri, 2005). Most countries follow a policy of limiting, controlling, or even prohibiting the issuance of debt by local governments. A World Bank study of the health and education sectors of ten countries found that none of the local governments surveyed was given full discretion to borrow, although local governments in Burkina Faso, Ethiopia, Kerala, the Philippines, Rwanda, Tanzania, and Uganda have partial authority over borrowing (Bank, 2009, p. 55).

3. Public Private Partnership (PPP): A New Window of local Financing

Public-Private Partnerships (PPPs) have been hailed as the latest institutional form of cooperation between the public sector and the private sector. (Greve & Ejersbo, 2002, p. 1) If a local government enjoys the necessary autonomy from central government, PPP can be used as an effective instrument to respond to local demand without seeking funding from the central government. For example, Mandaluyong City in the Philippines, which lacked funds at the time, built a new marketplace using the PPP formula. But PPP carries some instrumental risks concerning the possible misuse of power, corruption and lack of transparency. The local government of Farum in Denmark was considered one of the success stories of PPP in local-level governance in the 1990s, but a huge scandal of corruption and irregularities erupted in the organization in 2002. Clashes with the central government and the lack of democratic accountability mechanisms were thought to be responsible for the failure of the local governance. (Greve & Ejersbo, 2002) In an article on the PPPs undertaken by Morogoro municipality in Tanzania, Lameck analyzed various PPP projects by the city and urged that there should be a framework of rules and regulations for such practices; otherwise government will lose control over the whole procedure. (Lameck, 2009) As private organizations are profit oriented, local governments should be especially careful about the accountability and responsiveness of such projects. John Hood and N. McGarvey showed that the local government PPP initiatives taken by the Labour Government in Scotland lacked proper risk management procedures, which might jeopardize the whole arrangement. (Hood & Mcgarvey, 2002)

C) Corruption, accountability and Fiscal decentralization

Decentralization of fiscal affairs is often presented as a panacea for corruption and a means to promote accountability and transparency at the local level. However, it carries significant policy risks, as it can open up new windows for nepotism, corruption and mismanagement.

1. Does fiscal decentralization combat corruptions?

It is assumed that fiscal devolution to local governments creates space to bring services closer to the people and builds trust, which can weaken the culture of corruption. The increased intergovernmental and political competition introduced by decentralization can reduce rent seeking and monopolistic behavior and improve service delivery. (Fisman & Gatti, 2002) But there is a huge debate on the effectiveness of fiscal reforms in bringing accountability and transparency through decentralized structures. Some researchers offer an optimistic assessment of the effect of decentralizing fiscal affairs on corruption, while others describe decentralization as a pathway to corruption. Treisman argued that decentralized government creates many levels of government, and a more complex system of governance reduces accountability and increases corruption. (Treisman, 2000) Prud’homme stated that there is more opportunity for corruption at the local level, as local bureaucrats have more power to execute decisions and are influenced by local interest groups. (Prud’homme, 1995) Goldsmith argued that it is easier to hide corruption at the local level than at the central level. (Goldsmith, 1999) But most other studies found a negative relationship between the two variables. An extensive study of 24 countries over the period 1995-2007 found that fiscal decentralization has a positive impact in reducing corruption. (Padovano, Fiorino, & Galli, 2011) In another rigorous study of 182 countries, decentralization and corruption were likewise found to have a negative relationship. (Ivanyna & Shah, 2010)

In Malawi, a move to decentralize the local government bodies in 2000, following the act of 1998, opened up a huge window of corruption in the country. (Tambulasi & Kayuni, 2007) After the fiscal reform and devolution of fiscal power to local bodies, the neo-patrimonial leadership was reinforced as it exploited the new opportunities, which eventually broke down the accountability system. (Tambulasi & Kayuni, 2007) In another article, Tambulasi expressed the view that the adoption of a new public management strategy was the policy problem of the whole process and suggested a public governance reform model with more participation and transparency. (Tambulasi R. I., 2009) Some argue that using bribery as an indicator of corruption is problematic and that other social and economic indicators should be examined. (Bardhan & Mookherjee, 2005) The authors summarized the relation between corruption and decentralization as very complex, since many variables are involved in the process and no single approach is enough to unveil the underlying relationship. They also mentioned that the problem of capture and the lack of accountability are major obstacles in developing countries. Robert Klitgaard (1988) explained the principal-agent theory and argued that monopoly and discretion can exacerbate corruption, while accountability has a reducing effect. (Witz, 2011, p. 5) A report on the corruption of local governments in Latin American countries also suggested legal and institutional reforms to combat the problem. (Bliss & Deshazo, 2009) The report emphasizes the availability of information and urges that performance management efforts be undertaken. (Bliss & Deshazo, 2009, pp. 14-15) Nina Witz showed in a paper that accountability in local-level water governance in Sweden is relatively higher than at the central government level and described decentralization as an antidote to corruption. (Witz, 2011) Arikan also found evidence that decentralization can lower the level of corruption. (Arikan, 2004) Furthermore, fiscal decentralization is believed to have a positive impact on citizen behavior regarding corruption and can boost social capital by increasing citizens’ trust in government officials, bringing the government closer to the people. Oguzhan Dincer found a positive correlation between fiscal decentralization and trust using data from US states. (Dincer, 2010) Following the seminal work of Putnam, a good number of empirical studies found a positive impact of social capital on the economic growth of a country, and fiscal decentralization has been suggested as a policy to increase social capital and trust in both developing and developed countries. (Dincer, 2010, p. 189) In the case of Zambezia, Mozambique, Akiko Abe found that social trust (one dimension of social capital) was formed in a shorter period of time than Putnam had outlined. (Abe, 2009, p. 77)

2. Risks of Local fiscal Autonomy and accountability mechanism

Financial devolution of power is thought to empower local leadership and provide accountability and transparency to the whole setting. However, providing financial autonomy at the local level has some potential risks. Fiscal decentralization depends on the ability of local governments to manage revenues and expenditures effectively, and it requires strong institutions for financial accountability. (Yilmaz, Beris, & Serrano-Berthet, 2008, p. 23) Financial accountability seeks transparency in the management of public funds. It also requires that governments manage finances prudently and ensure integrity in their financial reporting, control, budgeting and performance systems. (Sahgal & Chakrapani., 2000., p. 3) In an article, Serdar Yilmaz, Yakup Beris and Rodrigo Serrano-Berthet explained two methods of downward accountability (public accountability approaches and social accountability approaches) for local financial organizations, along with other methods. They examined different experiences of financial autonomy and accountability from different countries and identified various issues arising from the lack of internal controls. (Yilmaz, Beris, & Serrano-Berthet, 2008) They showed that many nations impose central control over local governments as a policy to restructure subnational relations, given the capacity problems of local governments around the world. They advise against relying solely on upward accountability mechanisms, which may limit local government autonomy in decision-making and service delivery, negating the intended empowerment of local governments. (Yilmaz, Beris, & Serrano-Berthet, 2008, p. 26) Yilmaz and Felicio examined the decentralization and low-accountability problems of Angola and urged a checks-and-balances policy to cope with the tendency to abuse discretionary power. (Yilmaz & Felicio, 2009) Though citizen participation is formally ensured at the local level there, provincial and municipal administrators did not genuinely embrace the spirit of the citizen councils. It is suggested to incorporate appropriate advocacy efforts to ensure quality participation processes at the municipal and provincial levels, with an emphasis on strengthening civil society’s skills, which will incrementally increase accountability in public expenditure management activities and ensure proper oversight. (Yilmaz & Felicio, 2009, p. 21) In Ethiopia, it was noticed that progressive features of fiscal decentralization were not accompanied by political management. A strong upward accountability structure, without the accompanying discretion and downward accountability mechanisms, was the main feature of the system, and it failed to ensure the accountable nature of the organization. (Yilmaz & Venugopal, 2008, pp. 23-24) It is evident from these different experiences that only a combination of upward and downward accountability arrangements and a participatory form of governance can ensure democracy, better management and transparency at the local level. Anwar Shah, in an article, urged judicial accountability measures in developing countries where laws on property rights, corporate legal ownership and control, bankruptcy, and financial accounting and control are not fully developed. (Shah. A. , 2004, p. 34) He also emphasized that traditional channels of accountability such as audit, inspection and control functions should be strengthened, since they tend to be quite weak in transition and developing economies. (Shah. A. , 2004, p. 34)

3. Participatory local budgeting for more accountability and transparency

Budgeting at the local level is a significant instrument for the fiscal health of a local body. Traditional municipal budgets, which in fact focus on incremental line-item budgeting practice, have historically been constructed with an emphasis on enabling accounting staff to meet audit requirements; as one analyst notes, they are aimed at the audited financial statements that municipal authorities are required to submit after the fiscal year. (Schaeffer & SerdarYilmaz, 2007, p. 8) Over the last two decades, different reform measures have been incorporated into traditional budgeting to ensure more transparency and accountability. Program budgeting at the local level brought different planning and accountability measures, differing from the traditional line-item approach in preparing, reviewing, and presenting the budget. In today’s changed global context, participatory local budgeting has become a powerful good governance tool for integrating citizens into government matters. Participatory budgeting is considered a direct-democracy approach to budgeting; by enhancing transparency and accountability, it can help reduce government inefficiency and curb clientelism, patronage, and corruption. (Shah, Overview, 2007, p. 1) However, participatory budgeting carries some significant risks. Participatory processes can be captured by interest groups. Such processes can mask the undemocratic, exclusive, or elite nature of public decision making, giving the appearance of broader participation and inclusive governance while using public funds to advance the interests of powerful elites. (Shah, Overview, 2007, pp. 1-2)

4. E-Governance for strengthening decentralization

The potential of e-government in advancing good governance is increasingly being recognized. (Bank., 2004) E-governance is identified as an efficient tool for generating transparency and ensuring accountability in government procedures. Moreover, one of the strengths of e-governance is that it is cost effective. E-procurement creates a highly competitive and transparent procurement environment and a faster method of obtaining quotes, which can narrow the scope for corruption and reduce costs as well. E-procurement can cut municipalities’ public procurement costs by as much as 50 percent. Against this backdrop, it is highly recommended to introduce electronic methods in government procurement and other administrative procedures to ensure transparency and easy access for citizens.

The World Bank funded some pilot cases in the developing world (several states in India) and found positive results in widely used services, such as the issuance of licenses and certificates and the collection of payments and taxes. (Bank., 2004) One of the strengths of e-governance is that it provides transparency, which acts as a viable tool against corruption. For example, the Karnataka State of India digitalized the transfer system for teachers, which eventually reduced the scope for corruption in the transfer process. (Bank., 2004) In Andhra Pradesh, India, the e-governance drive faced many difficulties in managing the huge volume of information of a complex administration serving a vast population. Reengineering and changing work processes across 70 departments in the secretariat have been a challenge even for the country’s largest information technology company, which is implementing the project. (Bank., 2004) Most e-governance projects require huge funding to automate the whole system, and large parts of the population in developing countries are outside internet facilities. In a report on the African prospects for introducing e-governance, adequate funding and low rates of literacy and PC penetration were identified as the challenges to bringing the whole system under e-governance. (Kitaw, 2006, p. 8) Another study of six African countries (Ghana, Kenya, Tanzania, Namibia, South Africa and Swaziland) examined e-readiness conditions and suggested initiating more capacity-building measures to strengthen the procedures. (Meyaki, 2010) The digital divide is a big challenge to integrating all the people into a more citizen-centric structure of e-governance. Mobile networks are growing rapidly around the world, including in developing countries, and m-governance (providing services through mobile phones) can be an option to fight the digital divide. Integrating fiscal measures into local affairs can ensure accountability and transparency at the local level as well. The Kerala state of India initiated m-governance by launching various services focused on utilizing mobile technologies to deliver citizen services, including electricity and water billing, road tax and vehicle registration. (Young, 2009)

Conclusion

Strengthening local governments by providing more autonomous power in fiscal affairs and ensuring citizen involvement is believed to empower people at the local level and can bring change from the roots, as local governments best know the needs of the grassroots. In this paper a wide range of literature has been examined to recognize the trends and issues concerning fiscal autonomy and financial accountability mechanisms in local governments around the world. Most local government experiences indicate a positive relationship between financial decentralization and better governance. In this age of globalization and the information technology revolution, a more global world with localized governments is emerging. This trend must be supported by the financial empowerment of local bodies and accountability mechanisms at the local level. Access to untapped revenue sources and the digitalization of organizational procedures have become important tools for coping with the challenges of globalization and the information technology revolution. Bangladesh, a developing nation with a huge population living under local government bodies, whose weak local government is depicted as a root cause of its dysfunctional democracy, can benefit from the lessons of decentralization around the world and reevaluate its policy regarding local government and decentralization.

The emerging role of distance bounding protocols in aerospace systems

Abstract: RFID (Radio Frequency Identification) systems are vulnerable to relay attacks such as mafia fraud, distance fraud and terrorist fraud. Distance bounding protocols are designed as a countermeasure against these attacks. These protocols ensure that the tag is within a bounded distance by measuring the round-trip delays during a rapid challenge-response exchange. Distance bounding protocols are cryptographic protocols that enable a verifier to establish an upper bound on its physical distance to a prover. They are based on timing the delay between sending out a challenge bit and receiving back the corresponding response bit. A timing-based response followed by consecutive timing measurements provides a more reliable approach to authenticating the prover.

Index Terms: RFID, Mafia fraud, Distance fraud, Terrorist fraud, Distance Bounding protocol.

I. INTRODUCTION

There is a famous story of a little girl who played against two chess grandmasters. How was it possible for her to win one of the games? Anne-Louise played Black against Spassky and White against Fischer. Spassky moved first, and Anne-Louise simply copied his move as the first move of her game against Fischer, then copied Fischer’s reply as her own reply to Spassky’s first move, and so on.

This trick, exploited by Anne-Louise, is known in the cryptographic community as mafia fraud. Mafia fraud is a man-in-the-middle attack against an authentication protocol in which the adversary relays the exchanges between the verifier and the prover, making them believe they are communicating directly with each other. Mafia fraud is particularly powerful against contactless technologies. The most threatened systems are Radio Frequency Identification (RFID) and Near Field Communication (NFC), because the devices answer any solicitation without the explicit agreement of their holder. The vulnerability of these technologies has already been illustrated by several practical attacks [10]. Two attacks related to mafia fraud are distance fraud and terrorist fraud. Distance fraud involves only a malicious prover, who cheats about his distance to the verifier. Terrorist fraud is an exotic variant of mafia fraud in which the prover is malicious and actively helps the adversary to make the attack succeed.

Measuring the physical distance between communicating parties is important for communication security. For example, consider a building security system that allows a visitor to open the door to the building only when the visitor has an authorized Radio Frequency Identification (RFID) tag for entering the building. When authenticating the tag, the security system should also verify an upper bound on the distance between the door and the tag, to thwart remote attackers who may try to open the door from afar [4].

To solve the above problem, Brands and Chaum proposed a distance-bounding protocol. In this protocol, a verifier V seeks to authenticate a prover P while measuring the distance d between V and P. For authentication, most of these protocols rely on multiple rounds of single-bit challenge and response, also known as the fast bit exchange phase. They are also lightweight in the sense that they do not require an additional (time- and resource-consuming) slow phase to terminate the protocol. A timing-based response followed by consecutive timing measurements provides a more reliable approach to authenticating the prover.
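To make the timing idea concrete, here is a small illustrative sketch (our own, not part of the Brands-Chaum specification) of how a verifier turns a measured round-trip time into an upper bound on the prover’s distance; the function and variable names are assumptions for illustration.

```python
# Illustrative sketch: upper-bounding the prover's distance from a
# measured round-trip time (RTT). RF signals propagate at most at the
# speed of light, so distance cannot exceed c * (RTT - processing) / 2.
C = 299_792_458.0  # speed of light in metres per second

def distance_upper_bound(rtt_seconds, processing_delay=0.0):
    """Return the maximum possible one-way distance to the prover."""
    travel_time = max(rtt_seconds - processing_delay, 0.0)
    return C * travel_time / 2.0  # halve: the signal travels out and back

# A 200 ns round trip with negligible processing time bounds the
# prover to within about 30 metres of the verifier.
bound_m = distance_upper_bound(200e-9)
```

Because a dishonest prover can only delay its response (it cannot send the correct bit before receiving the challenge), this bound can be inflated but never deflated, which is exactly the guarantee distance bounding needs.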

II. SYSTEM ARCHITECTURE

By using distance bounding protocols, a device (the verifier) can securely obtain an upper bound on its distance to another device (the prover). The security of distance-bounding protocols has so far mainly been evaluated by analyzing their resilience to three types of attacks, known for historical reasons as distance fraud, mafia fraud and terrorist fraud. In distance fraud attacks, a lone dishonest prover convinces the verifier that he is at a different distance than he really is. In mafia fraud attacks, the prover is honest, but an attacker tries to modify the distance that the verifier establishes by interfering with their communication. In terrorist fraud attacks, the dishonest prover colludes with another attacker who is closer to the verifier, to convince the verifier of a wrong distance to the prover. So far, it was assumed that distance bounding protocols resilient against these three attack types can be considered secure. A dishonest prover can pretend to be closer to the verifier than it actually is by jumping the gun and sending a response before the challenge arrives, or pretend to be further away than it is by delaying its response. A hostile attacker could also attach its own identity to the prover’s response and pass off the honest prover’s location as its own [1], [13].

Finally, dishonest provers can conspire to mislead the verifier, one prover lending the other its identity so that the second prover can make the first look closer than it is. The idea is that the prover first commits to a nonce using a one-way function, the verifier sends a challenge consisting of another nonce, the prover responds with the exclusive-or of its own and the verifier’s nonces, and then follows up with the authentication information.

Fig 1: System Architecture

METAR data is used here to convey the weather report and cloud base height to an airplane. METAR consists of meteorological elements observed at an airport at a specific time, and this information is passed between the verifier and the prover. The verifier uses the time elapsed between sending its nonce and receiving the prover’s rapid response to compute its distance from the prover, and then verifies the authenticated response when it receives it. Over the wireless channel, the verifier raises an authentication query to the prover. If the prover gives the correct answer, it is able to receive the extracted information at the end.

III. RFID TECHNOLOGY OVERVIEW

Radio frequency identification (RFID) technology consists of small, inexpensive computational devices with wireless communication capabilities. Currently, the main applications of RFID technology are in the inventory control and supply chain management fields. In these areas, RFID tags are used to tag and track physical goods. Within this context, RFID can be considered a replacement for barcodes. RFID technology is superior to barcodes in two aspects. First, RFID tags can store more information than barcodes [3]. Unlike a barcode, an RFID tag, being a computational device, can be designed to process rather than just store data. Second, barcodes communicate through an optical channel, which requires careful positioning of the reading device with no obstacles in between [12]. RFID uses a wireless channel for communication and can be read without line of sight, increasing read efficiency.

The pervasiveness of RFID technology in our everyday lives has led to concerns over whether these RFID tags pose a security risk. The future applications of RFID make the security of RFID networks and communications even more important than before. The ubiquity of RFID technology has made it an important component of the Internet of Things (IoT), a future generation of the Internet that seeks to mesh the physical world together with the cyber world. RFID is used within the IoT as a means of identifying physical objects [11]. For example, by attaching an RFID tag to medication bottles, we can design an RFID network to monitor whether patients have taken their medications.

IV. DISTANCE BOUNDING PROTOCOL

Verifying the physical location of a device using an authentication protocol is an important security mechanism. Distance bounding protocols aim to prove the proximity of two devices relative to each other. A distance bounding protocol determines an upper bound for the physical distance between two communicating parties based on the round-trip time (RTT) of cryptographic challenge-response pairs. Brands and Chaum proposed a distance bounding protocol that can be used to verify a device’s proximity cryptographically. This design is based on a channel in which the prover can reply instantaneously to each single binary digit received from the verifier [1]. The number of challenge-response interactions is determined by a chosen security parameter. Distance bounding protocols are useful not only in the one-to-one proximity identification context but also as building blocks for secure location systems. After correct execution of the distance bounding protocol, the verifier knows that an entity holding the secret data is within the trusted neighborhood. A distance bounding protocol can be divided into three phases: the commitment phase, the fast bit phase and the signing phase.

The protocol below is among the first DB protocols suitable for resource-constrained devices such as RFID tags. It is considered lightweight in the sense that a single computation of a hash function and a call to a pseudo-random number generator (PRNG) are the most costly operations required for its execution. The simplicity and efficiency of this protocol led to similar designs for other DB protocols, which modify how answers are calculated in order to improve security performance. The protocol first contains a slow phase in which nonces are generated and exchanged [4], [7]. From these nonces and a secret value x, the possible responses used in the fast phase are computed via a function f. Then the fast phase consists of n consecutive rounds. In each round, the verifier picks a challenge ci, starts a timer and sends ci to the prover. When the prover receives the challenge, he computes the answer ri and sends it back to the verifier as soon as possible. Upon reception of the answer, the verifier stores it along with the round-trip time. Once the n rounds have elapsed, the verifier checks the validity of the answers and the round-trip times; if all n rounds check out, the protocol succeeds. The initialization, execution and decision steps are presented below, and a general view is provided in Fig. 2.

Fig 2: Distance Bounding Protocol

Initialization. The prover (P) and the verifier (V) agree on (a) a security parameter n, (b) a timing bound Δtmax, (c) a pseudo-random function PRF that outputs 3n bits, and (d) a secret key x.

Execution. The protocol consists of a slow phase and a fast phase.

Slow Phase. P (respectively V) randomly picks a nonce NP (respectively NV) and sends it to V (respectively P). Afterwards, P and V compute PRF(x, NP, NV) and divide the result into three n-bit registers Q, R0 and R1. Both P and V create the function fQ : S → {0, 1}, where S is the set of all bit-sequences of size at most n, including the empty sequence. The function fQ is parameterized by the bit-sequence Q = q1 ... qn, and it outputs 0 when the input is the empty sequence. For every non-empty bit-sequence Ci = c1 ... ci, where 1 ≤ i ≤ n, the function is defined as fQ(Ci) = (c1 ∧ q1) ⊕ ... ⊕ (ci ∧ qi).

Fast Phase. In each of the n rounds, V picks a random challenge ci ∈R {0, 1}, starts a timer, and sends ci to P. Upon reception of ci, P replies with ri = Rci[i] ⊕ fQ(Ci), where Ci = c1 ... ci and Rci[i] denotes the ith bit of the register R0 or R1 selected by ci. Once V receives ri, he stops the timer and computes the round-trip time Δti.

Decision. If Δti < Δtmax and ri = Rci[i] ⊕ fQ(Ci) for all i ∈ {1, 2, ..., n}, then the protocol succeeds.
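The steps above can be sketched in a few lines of Python. This is a toy simulation under stated assumptions: SHA-256 stands in for the agreed PRF, the timing check is omitted, fQ is taken to be the XOR over j ≤ i of (cj AND qj), and all names are ours.

```python
# Toy simulation of the slow and fast phases (timing checks omitted).
# SHA-256 stands in for the PRF; register and function names are ours.
import hashlib
import secrets

n = 16                      # security parameter: number of fast rounds
x = b"shared-secret-key"    # key known to both prover and verifier

def derive_registers(key, n_p, n_v):
    """Slow phase: derive the three n-bit registers Q, R0, R1."""
    digest = hashlib.sha256(key + n_p + n_v).digest()
    bits = [(byte >> k) & 1 for byte in digest for k in range(8)]
    return bits[:n], bits[n:2 * n], bits[2 * n:3 * n]

def f_q(q, challenges):
    """fQ over the challenge prefix: XOR of (c_j AND q_j) for j <= i."""
    acc = 0
    for c_j, q_j in zip(challenges, q):
        acc ^= c_j & q_j
    return acc

# Nonce exchange, then register derivation on both sides.
n_p, n_v = secrets.token_bytes(8), secrets.token_bytes(8)
q, r0, r1 = derive_registers(x, n_p, n_v)

# Fast phase: n rounds of one-bit challenge and response.
challenges, accepted = [], True
for i in range(n):
    c_i = secrets.randbelow(2)            # verifier's random challenge
    challenges.append(c_i)
    reg = r1 if c_i else r0               # register selected by c_i
    r_i = reg[i] ^ f_q(q, challenges)     # honest prover's response
    accepted &= (r_i == reg[i] ^ f_q(q, challenges))  # verifier's check

# accepted stays True: an honest, in-range prover passes every round
```

A real verifier would additionally compare each measured Δti against Δtmax; it is that timing comparison, not the bit check alone, that catches a distant relay.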

V. SECURITY ANALYSIS

Being resistant to both mafia and distance fraud is the primary goal of a distance bounding protocol. An important lower bound for both frauds is (1/2)^n [6], which is the success probability of an adversary who answers the verifier’s n challenges at random during the fast phase. However, this resistance is hard to attain for lightweight DB protocols. Therefore, our aim is to design a protocol that comes close to this bound for both mafia and distance fraud, without requiring costly operations or an extra final slow phase [5], [2].
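For intuition about choosing n, a quick back-of-the-envelope computation (our own illustration) of the blind-guessing adversary’s success probability:

```python
# A blind adversary must guess each of the n one-bit responses,
# so the success probability is (1/2) ** n.
def random_guess_success(n):
    return 0.5 ** n

# Doubling the rounds squares the (already small) probability:
# n = 16 gives about 1.5e-5; n = 32 gives about 2.3e-10.
p16 = random_guess_success(16)
p32 = random_guess_success(32)
```

This is why protocols whose per-round adversary advantage degrades to 3/4 per round (as happens in some lightweight designs) need substantially more rounds to reach the same security level.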

A. Mafia Fraud:

A mafia fraud is an attack in which an adversary defeats a distance bounding protocol by mounting a man-in-the-middle (MITM) attack between the verifier and an honest tag located outside the verifier’s neighborhood.

Fig 2(a): Mafia fraud

Among the DB protocols without a final slow phase, those achieving the best mafia fraud resistance are round dependent. The idea is that the correct answer in the ith round should depend on the ith challenge and also on the (i-1) previous challenges.

B. Distance Fraud:

A distance fraud is an attack in which a dishonest, lone prover purports to be in the neighborhood of the verifier.

Fig 2(b): Distance Fraud

As with mafia fraud, the protocols best resisting distance fraud are round dependent. However, round dependency by means of predefined challenges fails to properly resist distance fraud. Intuitively [9], [7], the more control the prover has over the challenges, the lower the resistance to distance fraud. For this reason, our proposal allows the verifier to have full and exclusive control over the challenges.

C. Terrorist Fraud

A terrorist fraud is an attack in which an adversary defeats a distance bounding protocol by mounting a man-in-the-middle (MITM) attack between the reader and a dishonest tag located outside the neighborhood, such that the latter actively helps the adversary to maximize her attack success probability without giving her any advantage for future attacks. Terrorist fraud attacks are not considered in our proposed system.

Fig 2(c): Terrorist Fraud

VI. PREVENTION TECHNIQUE OF ATTACKS

Different methods are used to prevent these attacks. In distance fraud, a reported location alone is not sufficient because the verifier does not trust the prover [5]; the verifier wants to prevent a fraudulent prover from claiming to be closer than it is. The types of location mechanisms that prevent these attacks are:

A. Measure the signal strength

A node can calculate its distance from another node by sending it a message and seeing how long the reply takes to return. If the response is authenticated, a fraudulent node can lie about being further away than it is, but not closer. In the signal-strength approach, the sender includes the strength of the transmitted message in the message, and the receiver compares it with the received strength to compute the distance.

B. Measure the Round Trip Time

Other solutions measure the round-trip time, the time required for a packet to travel to a specific destination and back again. In this protocol the verifier sends out a challenge and starts a timer. After receiving the challenge, the prover performs some elementary computations to construct the response. The response is sent back to the verifier and the timer is stopped. Multiplying this time by the propagation speed of the signal and halving the result gives an upper bound on the distance.

C. Measure the Consecutive Time

Timing-based input information followed by consecutive timing measurements provides a more reliable approach to authenticating the user. The verifier uses the time elapsed between sending its nonce and receiving the prover’s rapid response to compute its distance from the prover, and then verifies the authenticated response when it receives it. Our proposed system ensures that the proof breaks down if the prover is dishonest.

D. Validation and Identification

i. Validate the authentication information provided by the user.

ii. Extract the MAC address to validate the request’s origin location.

iii. Measure the consecutive execution time durations of the request processing.

VII. CIPHER BLOCK RIVEST ALGORITHM

The Cipher Block Rivest algorithm is used in our proposed system for the encryption process. It is a fast symmetric block cipher: the same key is used for the encryption and decryption algorithms, and the plaintext and ciphertext are fixed-length bit sequences.

In cryptography, RC5 is a symmetric-key block cipher notable for its simplicity, designed by Ronald Rivest in 1994. RC stands for ‘Rivest Cipher’, or alternatively ‘Ron’s Code’ (compare RC2 and RC4). A key feature of RC5 is the use of data-dependent rotations; one of the goals of RC5 was to prompt the study and evaluation of such operations as a cryptographic primitive. RC5 also consists of a number of modular additions and exclusive ORs (XOR). The general structure of the algorithm is a Feistel-like network. The encryption and decryption routines can be specified in a few lines of code. The key schedule, however, is more complex, expanding the key using an essentially one-way function with the binary expansions of both e and the golden ratio as sources of ‘nothing up my sleeve’ numbers.

RC5 is commonly denoted RC5-w/r/b, where

w = word size in bits,

r = number of rounds,

b = number of 8-bit bytes in the key.
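As a concrete illustration of the w/r/b notation, the following is a compact RC5-32/12/16 sketch in Python (an illustrative implementation for study, not production code; the function names are our own):

```python
W = 32                     # w: word size in bits
R = 12                     # r: number of rounds
MASK = (1 << W) - 1
# Key-schedule constants derived from e and the golden ratio
P32, Q32 = 0xB7E15163, 0x9E3779B9

def _rotl(x, s):
    s %= W
    return ((x << s) | (x >> (W - s))) & MASK

def _rotr(x, s):
    s %= W
    return ((x >> s) | (x << (W - s))) & MASK

def rc5_key_schedule(key):
    # Load the b key bytes into c little-endian words L[0..c-1]
    c = max(1, (len(key) + 3) // 4)
    L = [0] * c
    for i, byte in enumerate(key):
        L[i // 4] |= byte << (8 * (i % 4))
    t = 2 * (R + 1)
    S = [(P32 + i * Q32) & MASK for i in range(t)]
    A = B = i = j = 0
    for _ in range(3 * max(t, c)):       # mix key words into S
        A = S[i] = _rotl((S[i] + A + B) & MASK, 3)
        B = L[j] = _rotl((L[j] + A + B) & MASK, A + B)
        i, j = (i + 1) % t, (j + 1) % c
    return S

def rc5_encrypt_block(block, S):
    A, B = block
    A, B = (A + S[0]) & MASK, (B + S[1]) & MASK
    for r in range(1, R + 1):            # data-dependent rotations
        A = (_rotl(A ^ B, B) + S[2 * r]) & MASK
        B = (_rotl(B ^ A, A) + S[2 * r + 1]) & MASK
    return A, B

def rc5_decrypt_block(block, S):
    A, B = block
    for r in range(R, 0, -1):            # undo the rounds in reverse
        B = _rotr((B - S[2 * r + 1]) & MASK, A) ^ A
        A = _rotr((A - S[2 * r]) & MASK, B) ^ B
    return (A - S[0]) & MASK, (B - S[1]) & MASK
```

Decryption simply inverts each round, which is why the two routines fit in a few lines while the key schedule does the heavy lifting.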

Cryptanalysis: 12-round RC5 (with 64-bit blocks) is susceptible to a differential attack using 2^44 chosen plaintexts, so 18-20 rounds are suggested as sufficient protection. In a block cipher, the plaintext is divided into blocks of fixed length and each block is encrypted one at a time. The number of rounds can range from 0 to 255, while the key can range from 0 to 2040 bits in size [7]. Generating the ciphertext involves

C = E(PUB, E(PRA, M))

The ciphertext is generated by encrypting the message with the private key of the source and then with the public key of the destination. Deciphering involves

M = D(PUA, D(PRB, C))

The original message is recovered using the public and private keys, followed by verification of the consecutive timings.

BLOCK CIPHER

A block cipher is defined as a cryptosystem with a large plaintext space:

P = C = Z_2^n (bit strings of length n)

Typically n ≥ 64 bits.

Round structure

Apply the same round function to the intermediate ciphertext repeatedly, Nr times.

Use a different round key Ki, derived from K, in the ith round.

Pseudo code 1

1. INPUT: plaintext x, key K

2. OUTPUT: ciphertext y = eK(x)

3. ASSUME: round function g, last function h, key schedule producing round keys K1, ..., KNr from K

w0 = x

For i = 1 to Nr-1

wi = g(wi-1, Ki)

y = h(wNr-1, KNr)
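The pseudo code above can be sketched in Python (g, h and the toy XOR round below are illustrative placeholders, not a real cipher):

```python
def iterated_cipher(x, round_keys, g, h):
    # Nr-round iterated cipher: the round function g is applied with
    # round keys K1..KNr-1, then the last function h with the final key.
    w = x
    for k in round_keys[:-1]:
        w = g(w, k)
    return h(w, round_keys[-1])

# Toy instantiation: XOR the round key into the state at every step.
xor = lambda w, k: w ^ k
```

With the XOR toy rounds, encrypting 0b1010 under keys [1, 2, 3] returns 0b1010 again, since 1 ^ 2 ^ 3 = 0; a real cipher would of course use a nonlinear g.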

VIII. EXPERIMENTAL AND COMPARISON

A. Error free environment

The first lightweight DB protocol was proposed by Hancke and Kuhn [11] in 2005. Its simplicity and suitability for resource-constrained devices have prompted the design of other DB protocols based on it [2], [13]. All these protocols share the same design: (a) there is a slow phase where both prover and verifier generate and exchange nonces; (b) the nonces and a keyed cryptographic hash function are used to compute the answers to be sent (resp. checked) by the prover (resp. verifier). Below we give the main characteristics of each of these protocols, in particular the technique they use to compute the answers.

a) Mafia Fraud


a) Tradeoff with memory constraint

Hancke and Kuhn’s protocol [11]. The answers are extracted from two n-bit registers; each of the n 1-bit challenges determines which register should be used to answer.
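The per-round mafia-fraud probability of this two-register design can be checked with a small simulation (an illustrative sketch; the adversary strategy modelled is the standard pre-ask attack, and the function names are our own):

```python
import secrets

def mafia_round():
    # One fast-phase round: the two register bits shared by the honest parties.
    r0, r1 = secrets.randbelow(2), secrets.randbelow(2)
    # Pre-ask attack: the adversary queries the prover early with a guessed
    # challenge, learning the corresponding register bit.
    guess = secrets.randbelow(2)
    learned = r0 if guess == 0 else r1
    # The verifier's real challenge arrives; answer from knowledge or by luck.
    real = secrets.randbelow(2)
    correct = r0 if real == 0 else r1
    answer = learned if real == guess else secrets.randbelow(2)
    return answer == correct

def mafia_success_rate(rounds, trials=20_000):
    wins = sum(all(mafia_round() for _ in range(rounds)) for _ in range(trials))
    return wins / trials
```

Since the guessed challenge matches the real one half the time (and the adversary then answers correctly for sure), the per-round success probability is 3/4, and n rounds succeed with probability (3/4)^n.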

Avoine and Tchamkerten’s protocol [2]. Binary trees are used to compute the prover’s answers: the verifier’s challenges define a unique path in the tree, and the prover’s answers are the vertex values on this path. Several parameters impact the memory consumption: l, the number of trees, and d, the depth of these trees. It holds that d · l = n, where n is the number of rounds in the fast phase.

Trujillo-Rasua, Martin and Avoine’s protocol [12]. This protocol is similar to the previous one, except that it uses particular graphs instead of trees to compute the prover’s answers.

b) Distance Fraud

b) Tradeoff without memory constraint

Kim and Avoine’s protocol [13]. This protocol, closer to Hancke and Kuhn’s protocol [11] than to [12], uses two registers to define the prover’s answers. An important additional feature is that the prover is able to detect a mafia fraud thanks to predefined challenges, that is, challenges known by both prover and verifier. The number of predefined challenges impacts fraud resistance: the larger it is, the better the mafia fraud resistance, but the lower the resistance to distance fraud.

Mafia and distance fraud analyses in a noise-free environment can be found in [12]. Fig. 3(a) and Fig. 3(b) show the resistance to mafia fraud and distance fraud, respectively, for the five considered protocols in a single chart. For each of them, the configuration that maximises its security has been chosen; this is particularly important for AT and KA2 because different configurations can be used.

In case of a draw between two protocols, the less memory-consuming one is considered the best. The trade-off chart represents, for every pair (x, y), the best protocol among the five considered. Fig. 4(a) shows that our protocol offers a good trade-off between resistance to mafia fraud and resistance to distance fraud, especially when a high security level against distance fraud is expected. In other words, our protocol is better than the other considered protocols, except when the expected security levels against mafia and distance fraud are unbalanced, which is meaningless in common scenarios.

Another interesting comparison takes the memory consumption of the protocols into consideration. Indeed, for n rounds of the fast phase, AT requires 2^(n+1) - 1 bits of memory, which is prohibitive for most pervasive devices.

We can therefore compare protocols that require linear memory with respect to the number of rounds n. For that, we consider a variant of AT [10] that uses n/3 trees of depth 3 instead of just one tree of depth n. The resulting trade-off chart shows that constraining the memory consumption considerably reduces the area where AT is the best protocol, but it also shows that our protocol provides the best trade-off in this scenario as well.
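The memory figures above can be made concrete with two small helpers (hypothetical names; one bit per tree vertex is assumed):

```python
def at_memory_bits(n):
    # Original AT: a full binary tree of depth n has 2**(n+1) - 1
    # vertices, one bit each.
    return 2 ** (n + 1) - 1

def at_variant_memory_bits(n, depth=3):
    # Variant: n/depth trees of depth `depth`, so memory grows
    # linearly with the number of rounds n.
    trees = n // depth
    return trees * (2 ** (depth + 1) - 1)
```

For n = 33 rounds, the single deep tree needs 2^34 - 1 bits (over 17 billion), while the depth-3 variant needs only 11 · 15 = 165 bits, which explains why the constrained comparison changes the picture so drastically.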

IX. CONCLUSION AND FUTURE WORK

This paper has introduced a timestamp-based distance bounding protocol that provides an optimistic approach to identifying relay attacks. The protocol deals with both mafia and distance fraud using little memory and little additional computation. The analytical expressions and experimental results show that the new protocol provides the best trade-off between mafia and distance fraud resistance. This performance is achieved through a round-dependent design in which the prover is unable to guess any challenge with probability higher than 1/2.

For computation-intensive systems, our consecutive timed response provides significantly better throughput in a broad variety of scenarios, including mafia fraud, distance fraud and terrorist fraud attacks. The encryption and decryption can use a different algorithm in each round of the resistance, which provides more confidential services in the system.

REFERENCES

[1] R. Trujillo-Rasua, B. Martin, and G. Avoine, ‘Distance-bounding facing both mafia and distance frauds,’ IEEE Transactions on Wireless Communications, vol. 9, May 2014.

[2] S. Lee, J. S. Kim, S. J. Hong, and J. Kim, ‘Distance bounding with delayed responses,’ IEEE Communications Letters, vol. 16, September 2012.

[3] K. Singh, ‘Security in RFID networks and protocols,’ International Journal of Information and Computation Technology, vol. 3, pp. 425-432, 2013.

[4] A. Alkassar and C. Stüble, ‘Towards secure IFF: preventing mafia fraud attacks,’ Sirrix AG Security Technologies / Saarland University, D-66123 Saarbrücken, Germany.

[5] S. P. Srikanth and S. Tiwari, ‘A survey on distance bounding protocols for attacks and frauds in RTLS systems,’ International Journal of Engineering and Innovative Technology (IJEIT), vol. 3, April 2014.

[6] J. H. Conway, On Numbers and Games, A K Peters, Ltd., 2000.

[7] C. P. Schnorr, ‘Efficient signature generation by smart cards,’ Journal of Cryptology, vol. 4, no. 3, pp. 161-174, 1991.

[8] S. Capkun, K. El Defrawy, and G. Tsudik, ‘GDB: group distance bounding protocols,’ arXiv.org, 2010.

[9] S. Brands and D. Chaum, ‘Distance-bounding protocols,’ in EUROCRYPT 1993.

[10] G. Avoine, C. Lauradoux, and B. Martin, ‘How secret-sharing can defeat terrorist fraud,’ in Proc. 4th ACM Conference on Wireless Network Security (WiSec’11), pp. 145-156.

[11] G. Avoine, ‘RFID distance bounding multiple enhancement,’ Progress in Cryptography, pp. 290-307.

[12] J. Munilla and A. Peinado, ‘Distance bounding protocols for RFID enhanced by using void challenges and analysis in noisy channels,’ vol. 8, pp. 1227-1232, 2008.

[13] J. Kelsey, B. Schneier, and D. Wagner, ‘Protocol interactions and the chosen protocol attack,’ in Proc. 5th International Workshop on Security Protocols, LNCS vol. 1361, pp. 91-104, Springer, 1997.

Center Parcs

Strategy

A company has to stand for something in order to succeed. You need to know where the company stands at this point and where you want it to be in the future. Reaching those future goals requires a strategy, and Center Parcs has one.

The mission of Center Parcs is to let guests experience a moment of happiness in a safe and stimulating place. This is created with the help of caring employees.

Their vision is that people need a place to connect with their friends and family. Center Parcs therefore tries to offer a place where guests can enjoy the simple but special things in life and have the opportunity to just be themselves.

In the near future Center Parcs will be building new parks. In 2015 they hope to open Center Parcs Vienne, and in 2016 Village Nature (near Disneyland Paris). Center Parcs is innovative in the design of its cottages; some new cottages, for example, are tree houses, Eden cottages and boats.

In the longer term they want to develop their innovative designs further and renovate the existing parks. In this way they want to distinguish themselves from the competition by offering short holidays that cannot be found anywhere else. With this they want to be an inspiration to their guests and a recognisable brand.

Center Parcs’ most important visitors are families with children aged 0-11; this group accounts for 49% of visitors, followed by families with children aged 12-18 and adults aged 18-54, each at 21%. Given that families with children aged 0-11 make up 49%, it is reasonable to assume this is Center Parcs’ target group. With 3.1 million visitors per year, Center Parcs is the European market leader. 1.3 million visitors are Dutch, making them the best-represented nationality, followed by the Germans with 806,000 visitors. The French account for 589,000 visitors and 372,000 are Belgian; the remaining 12,400 visitors have other nationalities.

The three biggest competitors of Center Parcs are Landal GreenParks, Disneyland Paris and Roompot.

Landal GreenParks advertises the nature in its parks, as Center Parcs does. Landal GreenParks also focuses on young families and offers many activities, both outdoor and indoor. Its parks are located in the Netherlands, Germany and Belgium, so geographically it aims at the same group.

Disneyland Paris has several hotels in and around the attraction park. Each hotel has its own theme and atmosphere. It offers an attraction park with multiple activities and focuses on extended families, including families with smaller children.

Roompot also has parks in the Netherlands, Germany, Belgium and France, and additionally offers facilities for business travellers. Roompot’s parks are likewise located in natural environments, and it advertises the possibility of cycling and hiking.

They are competitors of Center Parcs because they share the same target group: they focus on young families and are located in the same regions. They offer the same facilities, and Landal and Roompot are cheaper than Center Parcs. This makes them competitors of Center Parcs.

3. Structure

Every organisation needs some sort of organisational structure in order to function. An organisational chart shows how tasks are divided between departments and individuals. Center Parcs works according to the line-and-staff organisation. The most traditional organisational structure is a line organisation: authority and accountability travel downwards from the top to the bottom, and there is a strong hierarchy between the department managers and the department employees. One of the functions at the top of the chart is that of General Manager; all departments at Center Parcs have to report to the General Manager. Center Parcs has the following departments: Safety & Pool, Leisure, Technical and Housekeeping.

In a line-and-staff organisation there are staff departments which support the line departments. The staff departments contain experts in specific areas who advise and inform line management in those areas. Overall responsibility belongs to the line manager, while the staff departments are responsible for qualitative advice. Center Parcs has two staff departments: Human Resources and Finance.

As the organisational chart above shows, the top manager is the General Manager, followed by the manager of the department, then the floor manager of the department, and at the end of the chain the rest of the staff in that department.

If an employee of the Kids Club faces a problem, he goes to the Floor Manager of the Kids Club. If the Floor Manager cannot solve the problem alone, he turns to the manager of the Leisure department, who reports it to the General Manager. If the Leisure manager needs financial advice, he can contact the Finance department; for personnel questions he can contact the Human Resources department.

As Finance Manager you fulfil both a management and an executive position and carry responsibility for the business analysis and reports of the park. The Finance Manager manages the park’s budget and proactively assists the other members of the Management Team in meeting targets and budgets. Besides being responsible for an appropriate administrative organisation and internal control, the Finance Manager is also the financial oracle for the entire park organisation. Moreover, the Finance Manager is partly responsible for the business implementation of the park, excluding the Horeca & Retail Food activities.

5. Staff

Center Parcs has 11,600 employees, of whom 7,000 work full-time and 4,600 part-time or in other forms of employment. Of all those employees, 66% are female and 34% are male.

19% of the employees are under the age of 25. Most employees are between 25 and 45; in fact, a large 51% are. They are followed by the second-biggest group, employees between 45 and 55, at 21%, which leaves 9% for employees over 55.

The nationality most represented at Center Parcs is the French, with 4,960 employees, followed by the Dutch with 2,965. Belgium accounts for 2,511 employees, Germany for 926, and Spain for another 238.

The management functions are almost equally divided, with 55% male managers and 45% female managers. The technical departments are mostly in the hands of male employees, while female employees dominate the housekeeping and reception departments.

Most Center Parcs employees are native speakers, especially in the higher positions such as management. In those positions it is also expected that you speak several languages, such as English, German, French or Dutch. In all positions with customer contact it is important to be a native speaker, and in some cases it may even be necessary to speak a little of the other languages. For jobs such as housekeeping and maintenance it is not necessary to be a native speaker; those jobs can also be performed by expats, since contact with customers is exceptional.

7. Skills

A good hospitality performance makes guests feel welcome in your company, in this case in the park. As a company you should do everything within your power to give guests the most comfortable experience possible. This is important for all sorts of companies, but even more so for companies within the hospitality industry such as Center Parcs, which can distinguish itself from competitors by offering quality service. Guests come to enjoy their holiday, and a large part of the experience of a stay at a bungalow park is created by the behaviour of the staff. Customer-service skills are therefore a requirement for the staff.

A good way to accomplish this is by offering good working conditions and rewards to keep employees motivated; employees will pass this positivity on to customers, which hopefully results in happier customers. Another way is to offer employees training in customer-service skills and how to put them into practice.

Another important point is that when problems occur, the staff needs to respond professionally; problems need to be solved according to the situation, and immediately.

Waiting times need to be kept as short as possible, and the staff needs to be on time for appointments and meetings.

Environmentally speaking, there should always be enough parking space nearby, and guests have to be able to find their way around the property easily. In smaller businesses like a restaurant or café this is not essential, but at Center Parcs guests have to be able to find their way around the park: to the reception, their bungalow, the pool and so on.

The company has to have a corporate identity. Guests have to be able to recognise the staff by their clothing, and the uniforms need to be appropriate and clean. The hospitality performance can also be improved by greeting guests on arrival; the receptionist or spokesperson for the park needs to be a native speaker and preferably fluent in English, and even better would be to speak several languages.

Finally, the facilities should be clean and in working order.

Center Parcs asks you to respond immediately in case of a complaint: you can file a complaint in the park so the management gets the chance to solve it right away. If you feel your complaint is not handled properly, you can send an e-mail to ‘[email protected]’ or send a letter by post, up to a month after leaving the park. Still not satisfied? Then you can file a complaint with the ‘Geschillencommissie Recreatie’, who will look into it; this can be done up to three months after leaving the park.

This shows that Center Parcs always tries to solve problems immediately and, if that cannot be done, gives guests multiple opportunities to complain. Both the immediate problem-solving and the chances guests get to file a complaint are in line with a good hospitality performance.

This picture shows that Center Parcs has clear signs to guide guests around the park; the numbers of the cottages are indicated, as are all the other facilities in the park.

Center Parcs was elected Top Employer 2014. They were scored on five criteria: primary conditions, secondary conditions, training & development, career perspectives and cultural management. The fact that they received this quality mark shows that employees have good working conditions, and happy employees usually result in happy customers.

This picture shows the corporate identity. Most employees wear a blue shirt with a Center Parcs logo on the sleeve, which is very recognisable for guests, so they know whom to approach. The logo can also be found around the park, on the website and elsewhere, which also contributes to the corporate identity.

If you would like to become a receptionist at Center Parcs in the Netherlands, there are a few job requirements you need to meet. One of them is sufficient oral and written knowledge of English and German, to make sure you can help guests of different nationalities, which contributes to the hospitality performance.

Health and safety – Capstan construction site in Kutno, Poland

This report was written on 07/02/2015 following a health and safety inspection of the Capstan construction site in Kutno, Poland. The Capstan project involves approximately three hundred people working on approximately 20,000 square metres; the aim is to build a snack factory. The factory will include the main production building, a utilities building, a waste water treatment plant and other facilities.

The main factory building has already been erected and wrapped with sandwich panels. The project has reached its milestone, but a lot of work is still being performed on site, most of it inside the building. Civils, mechanical works, pipe installation (hot works) and manufacturing machinery installation are the main activities at the moment; pressure and other tests are also being carried out. The works are performed by 8 contractors and their subcontractors, each with at least one works supervisor and one certified first aider. A construction management company with a staff of 25 engineers is managing the project on the spot and supervising all the works.

Executive Summary

Inadequate access and egress routes inside the main building make it a high-traffic area, with too many pedestrians and mobile elevating work platforms or forklifts passing every hour. Outside the situation is better: pedestrian paths and the vehicle road are separated and kept clean, and there are speed-limit signs and marked zebra crossings.

Every task must first be planned and described in a method statement, including timing and the equipment used (Occupational Health and Safety Regulations 2007).

Despite the large amount of time spent on training, working at height remains an issue to be solved. Some workers were wearing their safety harnesses incorrectly, or did not anchor themselves while working. Mobile elevating work platforms should be operated only by certified operators, yet equipment overdue for inspection was found on site. Working at height is one of the biggest killers in construction, so extra care and awareness are required of everyone involved.

Hot works are a big issue on this construction site. Cutting, grinding and welding are only allowed when a permit-to-work has been issued. Permits are clear instructions and a source of information for both workers and supervisors, yet lack of firefighting accessories, lack of a fire watch after works are completed, and poor housekeeping are an everyday threat.

Cabling and electrical works should also be improved. Many cables were found lying unprotected on the floors, where they can easily be damaged by MEWPs; they are a trip hazard for people and, if damaged, may cause an electric shock.

Main findings of the inspection

Working at height

MEWPs (mobile elevating work platforms), scaffolds and ladders are used every day on site. Despite monthly inspections, some of this equipment was found damaged or missing the manufacturer’s manual. Before a MEWP is used, all its documentation must be checked, as required by Polish law (Ustawa z dnia 21 grudnia 2000 r. o dozorze technicznym, Dz.U. z 2013, poz. 963), and this documentation should be kept at the workplace, attached to the machine, at all times. All work at height, including MEWP work, is marked as a danger zone; workers must use safety tape to barrier the area and protect people below from falling objects.

All work at height must be performed with a fall protection system; international (Amendment of the Provision and Use of Work Equipment Regulations 1998/2005) and Polish (Rozporzadzenie Ministra Pracy i Polityki Socjalnej, 26/09/1997, no. 169, poz. 1650) law is clear on this matter. To prevent falls and the risk of death, no worker is allowed to work at height without harness training. Workers use a safety harness when working above 1 metre. This PPE must not only be visually inspected before and after each use but also checked annually by a competent company or third-party organisation; this inspection should result in a mark or certificate of fitness for use, without which no harness should be used.

Mobile scaffolds must be erected, used and maintained according to their design and the manufacturer’s instructions to prevent displacement and collapse (ILO R175, art. 17). Work on a scaffold is allowed only after an inspection by a certified engineer. Workers should always check for themselves that the scaffold is level, the wheels are locked, and the scaffold has been authorised for use by an inspector.

Lack of anchoring, safety harnesses worn too loose or damaged, and missing scaffold parts are among the most common problems found during the inspection. Management must attend to these matters every day: enforcing daily equipment inspections, placing barriers where applicable, and ensuring safety harnesses are worn according to the manufacturer’s instructions. The indirect costs of a work-at-height accident can be crucial for an organisation; moreover, the potential loss of a human life, and living with responsibility for somebody’s death, cost more than any money.

Fire

Fire is one of the main dangers on a construction site. Along with work at height, it can cost lives, time (delays), property, materials and equipment. It is much cheaper and easier to prevent fire than to restore the workplace after a fire occurs.

When hot works are performed, a permit-to-work system must be in place. Ongoing supervision is also a must, in order to execute all the actions and preventive measures included in the hot work permit, method statement, risk assessment, safety plan and the legislation of the country of work.

Before work starts, all flammable materials must be removed and the area should be cordoned off and marked as a hot works zone/workshop. Fire blankets and extinguishers must always be at the workplace, and gas cylinders must be kept properly: stored vertically, at least 10 metres from the flame, and secured so they cannot fall. Work may begin only after the supervisor has checked the workplace against the measures and information in the hot work permit.

Electricity

Every activity involving electricity must be supervised. Labels, LOTO procedures and work permits are among the ways to avoid shock, burns and death. Only certified electricians and engineers should have access to live electricity. All cables and electrical tools must be checked once a month by an authorised inspector and labelled according to a ‘sticker system’: each month has its own colour, and any equipment missing the current label is removed from the workplace immediately. To avoid damage and tripping, all cables lying on the floor must be protected.

Welfare

The temperature inside the building was below 10 °C. Hand function drops very fast in the cold. Workers are forced to perform activities wearing safety gloves that are not warm enough and tend to restrict hand movement, and they are also at risk of cold stress injuries. According to the International Labour Organisation (Ambient Factors in the Workplace, paragraph 8.4), the employer has a responsibility to lower the risks connected with a cold workplace. The temperature maintained in the work area is unacceptable; workers must be provided with more heating.

Moreover, the lighting went on and off during the inspection, making it very difficult for people to work and creating a very dangerous environment.

With that amount of traffic and work at height, the lighting problem is a huge risk: tripping, falling and collisions are more likely to happen, especially during late or night shifts.

A lack of a breakfast break was also noted; workers are only allowed a lunch break. This is inadequate, especially considering the low temperatures in the workplace.

Housekeeping

Housekeeping standards are not bad, but could be higher. Tripping and slipping hazards should be identified and removed.

Chemical Substances

Chemical substances must be stored properly, marked with safety signs, labelled and kept as the manufacturer recommends. Material Safety Data Sheet (MSDS) should be attached in the area and available for everyone involved to read.

Conclusions

The number of supervisors on site would be sufficient were it not for the poor safety culture among engineers. According to many people on site, health and safety matters are solely the HSE department’s problem; safety should be everyone’s responsibility and priority.

Working at height must be taken seriously at all times. Ongoing inspection of MEWPs, scaffolds, ladders and fall-arrest PPE is very important, and awareness should be spread among contractors through training.

Fire is a great cost in many dimensions. Hot works must therefore be done as carefully as possible, and the area must be checked before, during and after the works. This takes less time than the delays and losses caused by a fire.

Welfare is a big issue: the organisation must improve the lighting and raise the temperature in order to avoid sick-leave payments or legal threats.

All in all, the safety culture and supervision must be improved: hot works and electrical works must be authorised by a permit-to-work, and work at height must be supervised. Each activity on site is a potential risk of harm for the people working around it.

Pesticides

Pesticides are designed to kill insects, supposedly, but not humans. Numerous pesticides are sprayed on crops to keep them glamorous enough to reach grocery stores and last until consumption. Even though assessing detailed health effects is still a scientific challenge, the accumulated health risks and detriments cannot be completely ignored. These toxins were developed to eradicate insects, but they cause side effects to our health too. The pesticide residues remaining on what we consume each day have proven to be harmful and can be blamed for many untraceable and unexpected health problems in the long run.

While leading a more natural and healthy lifestyle has been under the spotlight in this age of artificialisation, pesticide usage could be targeted as a root cause of many chronic diseases and prolonged health effects. The chemical composition of certain pesticides has been discovered to hinder the biochemical processes of vital organs.

As soon as a pesticide enters the passageways of the body and comes in contact with any cell, it chemically reacts with its newfound surroundings, lowering the toxicity. These counter effects occur to make it more water soluble and easier to excrete out of the system. The body reacts to each pesticide depending on its chemical structure and properties including its shape, size, electronic charge, and how stable it is, defining how soluble the pesticide will be in different solutions and surroundings. These unique characteristics can determine if the substrate, or the reacting pesticide molecule, can bind onto its complementary receptor site on cells. As electron distribution changes and energy transition states are lowered, chemical reactions via enzymes instigate the formation of products, in this case the catalysis of a biological process.

Enzymes are the proteins present in our body that speed up the reactions taking place within us at a desired rate for life to sustain. The presence of pesticides interrupts the rate of habitual routines that the body performs. Any interruptions or interferences in the midst of these enzymatic processes can trigger a toxic response, acting as an inhibitor to the natural reactions. Homeostasis is disrupted as a result, creating numerous imbalances that cause organs to respond in unpredictable manners.

Sodium bisulfite is one of the common pesticides used to prevent the browning and blackening of fresh fruits and vegetables. Also known as sodium hydrogen sulfite, it has the chemical formula NaHSO3 (see its structural formula in Figure 1). This compound reacts readily with dissolved oxygen. In addition, in the presence of water it releases sulfur dioxide as a byproduct, which inhibits the bacterial and fungal growth caused by common chemical reactions (3).

Generally, any foreign substance that enters the lungs elicits reactions that can set off asthma attacks as the airways contract and restrict airflow. Pesticides in the respiratory system provoke an even more hyperactive and aggressive response: the constriction of the airways causes bronchiole contractions, leading to wheezing and breathlessness. The highly toxic pesticide residues that reach the lungs through food put the respiratory system under enormous stress, which is especially detrimental to asthma sufferers.

Organophosphates are esters of phosphoric acid containing phosphorus, carbon, and hydrogen; refer to the structural formula in Figure 2. Found especially on the fruits we eat every day, organophosphates are repeatedly sprayed on pounds and pounds of produce to keep it insect-free enough to catch your eye at the grocery store, even though the EPA classifies them as moderately to highly toxic. Tests conducted by the USDA in 2008 found that '95% of celery tested contained pesticides, and 85% contained multiple pesticides, 93.6% of apples, 96% of peaches, a single blueberry has residue from 13 different chemicals' (HuffPo).

This organic molecule, designed to function as a neurotoxin against insects, has now been discovered to inhibit multiple human processes. Since cells are not physically connected to one another, the synapses between neurons relay messages to the brain. These 'bridges', which form the network running throughout the body, are broken when hit by organophosphates. In the nervous system, organophosphates inhibit acetylcholinesterase (AChE), the enzyme that breaks down the neurotransmitter acetylcholine, which carries nerve impulses between cells. The breakdown of this neurotransmitter signals when one transmission is completed and another is ready to begin. Figure 3 displays how the toxin enters the neuron and obstructs its path.
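The slowing of an enzyme by an inhibitor, as described above, can be sketched with a simple rate model. The short calculation below uses the standard Michaelis-Menten equation with competitive inhibition; all parameter values are invented for illustration and are not measured constants for AChE or any real organophosphate (which in fact bind AChE largely irreversibly, so this is only a schematic sketch of the general principle that an inhibitor lowers the reaction rate).

```python
# Schematic sketch: Michaelis-Menten kinetics with competitive inhibition.
# All parameters (vmax, km, ki) are illustrative assumptions, not real data.

def reaction_rate(substrate, vmax=100.0, km=5.0, inhibitor=0.0, ki=2.0):
    """Rate of an enzyme-catalysed reaction.

    With a competitive inhibitor present, the apparent Km rises by a
    factor of (1 + [I]/Ki), so more substrate is needed to reach the
    same rate: v = Vmax * [S] / (Km * (1 + [I]/Ki) + [S]).
    """
    effective_km = km * (1.0 + inhibitor / ki)
    return vmax * substrate / (effective_km + substrate)

# Same substrate concentration, with and without the inhibitor:
uninhibited = reaction_rate(substrate=10.0)                 # ~66.7
inhibited = reaction_rate(substrate=10.0, inhibitor=4.0)    # 40.0
print(uninhibited, inhibited)
```

The point of the sketch is only the comparison: with the inhibitor present the computed rate drops, which is the quantitative face of the "interrupted routine" the essay describes.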

Synthetically composed pesticides in today's industry are causing health deterioration that hits children hardest at a young age. Children are roughly twice as vulnerable to pesticide intoxication: because they eat and drink more relative to body weight, they take in more pesticides and toxic chemicals. Their undeveloped organs are unable to filter out hazardous chemicals, which remain in their systems and over time produce negative effects through several pathways.

The excess buildup of acetylcholine that follows AChE inhibition makes young children, whose bodies are still growing, extremely susceptible to neurological damage and dysfunction, leading to problems in behavior and cognition. An over-stimulated nervous system of this kind has been linked to ADHD, which affects about 5.9 million children in the United States (CDC 2015).

Given the high percentages of residues present in the foods we eat daily, the vast use of these human-made toxins is plainly unsafe. Their precise chemical arrangements have been shown to interfere with the cellular functions that sustain life. Organic foods are considered healthier but are also more expensive, and currently less than 1% of US farmland has adopted organic farming systems. As we wait for organic farming to expand and become more affordable, an alternative resolution is the development of biopesticides. Recently, the federal government proposed increased research into alternative solutions using biotechnology under the National Bioeconomy Blueprint, authorizing the search for microbiological substances and microorganisms that can synthesize chemicals to replace toxic pesticides.

Benefits And Disadvantages Of Different Research Methodologies As Tools For Conducting Independent Legal Research At Postgrad Level

Discuss and critically assess the benefits and disadvantages of at least two different research methodologies as tools for conducting independent legal research at postgraduate level

Library based or doctrinal research and socio-legal methods

Legal research methodology denotes the exposition, description or explanation, and justification of the methods used in conducting research in the discipline of law. At the postgraduate level, legal research may be carried out by utilising one or more of a number of different techniques or methodologies. These include, inter alia, doctrinal or library-based research, comparative law methods, socio-legal methods and philosophical legal methods. In this discussion we focus on two methodologies that are often employed by legal researchers: doctrinal or library-based research and socio-legal methods.

Doctrinal or library-based methodology: Some key aspects

Doctrinal or library-based research is the most common methodology employed by those undertaking research in law. In a nutshell, library-based research is predicated upon finding the 'one right answer' to a particular legal question or set of questions. Thus, the methodology is aimed at specific enquiries in order to locate particular pieces of information. For example, an investigation may be conducted into the legislation encompassing child abuse in a particular jurisdiction. It may also be sought to find out what specific section within the said legislation is actually applicable. All these questions have definite answers that can be found and verified. Such questions are the domain of doctrinal or library-based research.

The key steps in library-based research often overlap. They begin with analysing and unpacking the legal issues in order to identify the issue or issues which need further research. This stage will often involve a significant amount of background reading in order for the researcher to orient himself or herself within the area of law being studied. Background reading will often include sources such as dictionaries for definitions of terms (and possibly a list of cases or legislation where they have been used), encyclopaedias for a summary of the legal principles accompanied by footnoted sources, major textbooks and treatises on the subject, and journals.

Secondly, having established the issue requiring further investigation, the researcher must determine the relevant rule or rules of law applicable to the identified issues. This stage involves locating and analysing the relevant primary material. Depending on whether the research is based on international law or domestic law, the primary material will include treaties, declarations, statutes and delegated legislation, and case law. Although primary sources are adequate by themselves, it may also be useful to have regard to secondary sources. This observation is made in light of the fact that, oftentimes, the concepts and standards embodied in international conventions, legislation and cases will have been investigated, analysed and elucidated by many different authors in a variety of contexts and from wide-ranging perspectives. These writings constitute an important resource for understanding and elaborating the principles in the primary legislation. Consequently, use must be made of relevant books and journal articles on that particular area of law.

Thirdly, having established the relevant rules, the researcher must then set out to analyse the facts in terms of the law. This is perhaps the most critical stage of the doctrinal or library-based methodology, as it seeks to marry the issues that were identified with the applicable rules. All the issues that are sought to be investigated must be synthesised in the context of the applicable legal rules.

Lastly, having conducted the analysis, the researcher must then come to a probable conclusion which is based on the facts established and the law considered.

Evaluating the doctrinal or library-based methodology

Merits

There are several advantages associated with library based research methodology. Firstly, it is the traditional method for conducting legal research and is often taught during the early stages of legal training. Consequently, most legal researchers will be familiar with the techniques involved by the time they embark upon postgraduate research. Additionally, there will be no shortage of experts who are able to offer doctrinal research training to new postgraduates.

Secondly, because of its omnipresence in law schools and law offices, research carried out under this design is likely to be more readily accepted as having the character of legal research. Doctrinal research still represents the 'norm' within legal circles, and most operational, undergraduate and even higher degree work will be based on the doctrinal framework. For practical purposes, and for resolving day-to-day client matters, doctrinal research is the expected and required methodology. The busy practitioner (and the standard product of law schools) tends to be concerned with the law 'as it is' and rarely has the time to consider research that does not fit within that paradigm and timeframe.

Furthermore, because of its focus on established sources, doctrinal research is more manageable and its outcomes more predictable. For a postgraduate researcher this may help with meeting deadlines as surprises may be contained.

Demerits

Several criticisms may be levelled against the doctrinal or library-based methodology. For example, it has been described as too theoretical, too technical, uncritical, conservative and trivial, and as proceeding without due consideration of the social, economic and political significance of the legal process.

Secondly, it must be observed that doctrinal research is too restricting and narrow in its choice and range of subjects. The legal profession is increasingly being pulled into the larger social context. This context encompasses legal and social theory, and it encompasses other methodologies based in the natural and social sciences. In studying the context in which the law operates, and how the law relates to and affects that context, doctrinal methodology does not offer an adequate framework for addressing the issues that arise, because it assumes that the law exists in an objective doctrinal vacuum rather than within a social framework or context.

Thirdly, doctrinal research is sometimes described as trivial because it is often conducted without due consideration of the social, economic and political significance of the legal process. As noted above, the law does not operate in a vacuum; it operates within society and affects that society. There is, therefore, scope for adopting and adapting methodologies utilised in other subjects in order to gain a more illuminating view of the law and its functions. For example, there is scope for further research into the workings of legal institutions, such as the courts, in order to increase their efficiency. As Julius Getman has commented, 'empirical study has the potential to illuminate the workings of the legal system, to reveal its shortcomings, problems, successes, and illusions, in a way that no amount of library research or subtle thinking can match.'

It is obvious from the above criticisms that lawyers may need more than doctrinal or library-based research skills in order to make their research relevant for the wider world. One of the methodologies that may be employed in this regard is the socio-legal method.

Socio-legal methodology: Some key aspects

The law is a critical part of our social world. Commenting on this observation, Leslie Scarman has emphatically stated that:

There is no cosy little world of lawyers’ law in which learned men may frolic without raising socially controversial issues-I challenge anyone to identify an issue of law reform so technical that it raises no social, political or economic issue. If there is such a thing, I doubt if it would be worth doing anything about it.

Thus, the recognition that the law operates in a wider social context has led to the development of socio-legal methodology as a framework for conducting legal research. In a nutshell, socio-legal methodology embraces disciplines and subjects concerned with law as a social institution, with the effect of law, legal processes, institutions and services, and with the influence of social, political and economic  factors on the law and legal institutions. Consequently, because of its association with so many dynamics, the socio-legal method is diverse and encompasses a wide range of theoretical perspectives.

The first step in socio-legal inquiry is the selection of a topic after a review of the relevant literature and preliminary discussions with those with practical experience of the issues. Once the topic is selected, the researcher must then come up with a general problem statement and a possible hypothesis for dealing with the stated problem. This step is then followed by concentrated exploration and literature review aimed at further refining the problem statement and hypothesis.

Once this preliminary stage is completed, the researcher selects and designs his or her research methodology. This is a very important step, as it ultimately determines the validity and quality of the research findings that will be produced at the conclusion of the project. Postgraduate socio-legal scholars may adopt quantitative or qualitative research techniques, or both, depending on the subject matter under investigation.

Quantitative research may be construed as a research strategy that emphasises quantification in the collection and analysis of data. It entails a deductive approach to the relationship between theory and research, in which the emphasis is placed on the testing of theories, and it incorporates the practices of the natural scientific model. In other words, quantitative research methods insist on control of the research to limit the number of variables affecting the outcomes; exact measurement and precision; the ability to repeat the experiment with similar outcomes; and the testing of the hypothesis through statistical means. By contrast, qualitative research may be construed as a strategy that emphasises words rather than quantification in the collection and analysis of data. It predominantly stresses an inductive approach to the relationship between theory and research, in which the emphasis is placed on the generation of theories. Significantly, qualitative research rejects the practices and norms of the natural scientific model in preference for an emphasis on the ways in which individuals interpret their social world, and it embodies a view of social reality as a constantly shifting, emergent product of individuals' creation. Thus, qualitative research methodology acknowledges that there is not one reality but rather that reality is situational and personal, and may therefore vary between individuals and between situations.

Once the postgraduate researcher adopts the research technique suitable for the inquiry in question, he or she must then proceed to collect his or her data using the research design. At this stage qualitative and quantitative techniques will differ. Qualitative research interviews, for example, are less structured than their quantitative equivalent, and consist ideally of an exchange of ideas between the researcher and the interviewee on a particular theme. The process is not directed towards quantifying the  issues being researched but rather towards providing new insights and awareness of the issue under discussion.

This difference is apparent when one compares the tools utilised by the two techniques. Quantitative research will often employ devices such as surveys and questionnaires to collect the required data. These may include closed questions, which lend themselves to easy statistical summaries, or open questions, which allow for a lengthier, more qualitative and individual response. There are advantages and disadvantages to relying on surveys in socio-legal research. On the positive side, surveys and questionnaires are relatively easy to draw up and administer, and they provide a bulk of straightforward information that is easy to analyse. They are a good method for gathering opinion information and, further, the anonymous nature of questionnaires may lead to candid responses. On the other hand, because questionnaires are usually anonymous, it is impossible to find out additional information once the instrument is returned. Furthermore, since questionnaires and surveys result in a bank of data, they do not provide the richness and depth of information available with other methods, and if something is missing from the form it is very expensive in time and money to fix the errors.

Qualitative techniques, on the other hand, rely on devices such as ethnography, biography or case studies in the process of data collection. These methods allow the researcher to get the insider's viewpoint on the matter in issue, not necessarily the objective truth. They allow the researcher to conduct in-depth studies of a specific group or individual chosen to represent social phenomena and to 'access the reality behind appearances'. These techniques have the obvious advantage that they provide opportunities to verify responses by comparing a number of different approaches to resolving an issue. They allow the complexities of social, legal and political interaction to be seen, and the relationships between these, and the effects of one on the others, to become more obvious. Furthermore, qualitative techniques allow the researcher to delve deeper into inconsistent responses and to analyse significant situations in greater depth. The drawbacks of these techniques include the absence of the statistical validity of a proper sample and of objective quantitative proof. Furthermore, there is the omnipresent risk of people changing their positions, or acting up, because they know they are being studied. The data may also be more reflective of the researcher's views than of the subjects', because there is more latitude for researcher bias in the actual choice of the individual or case to be examined.

Despite the shortfalls associated with quantitative and qualitative research techniques, they offer the socio-legal scholar important tools for analysing the law within its operational context. As was noted earlier, these techniques do not exist in isolation from each other and may be combined so that the strengths of one offset the shortcomings of the other.

Having collected the relevant data from the field, the researcher must then analyse and interpret the data and come up with his or her conclusions. This is an important stage of the research as the researcher will be able to comment on the state of the law, whether it is effective or changes are needed, and obviously if there is need for more socio-legal work.

Evaluating socio-legal methodology

Merits

There are a number of benefits associated with adopting socio-legal research methods. First of all, it allows legal practitioners and academics to experience the law in action. This is hardly possible within the realm of doctrinal research.

Further to that, socio-legal research avoids too much attention on rules of law and instead affords systematic and regular reference to the context of the problems which laws were supposed to resolve, the purpose they were to serve and the effect they in fact have. This serves to counter the charge that law is conservative and aloof from the social context within which it operates.

Socio-legal research is significant because, in linking the law to society, it functionalises law, rendering it an effective instrument for the achievement of social, political and economic objectives. Socio-legal research is important for, and impacts upon, government policy-makers, regulators, industry representatives and other actors concerned with the administration of justice and the legal system.

More importantly, socio-legal methodology is by nature inter-disciplinary and therefore allows the building of bridges between the law and other disciplines such as economics, history, sociology and politics. This is beneficial because it adds relevance to the law and presents the law appropriately, that is, as a small part of a larger social world.

Demerits

Socio-legal methods have disadvantages of their own. For example, social science findings are perceived as malleable and unstable, and the perception seems to be that the outcomes of socio-legal research depend on the way in which the results have been interpreted. Consequently, confronting lawyers (most of whom possess an almost instinctive doctrinal mindset) with the results of socio-legal research is as hard a task as any.

It is possible to arrive at different conclusions on the same question when employing socio-legal methods, because of differences in specification within the research design, because of different methods of collecting data, or perhaps simply because the questions being researched are marginally different and this goes unrecognised. This uncertainty in outcome only serves to reinforce lawyers' bias against socio-legal methods. However, it must also be noted in this regard that the objectivity and neutrality of the law, long assumed by most lawyers, has itself come under attack. Thus, the same anxieties that exercise lawyers' minds over the objectivity of doctrinal methods also apply to the socio-legal tradition.

Further to that, socio-legal enquiry is perceived as being unsuitable for the work that practising lawyers do. Such lawyers are used to dealing with specific cases, as opposed to investigating broader questions about the world and the events affecting society generally. Lawyers and their clients generally want definite answers to particular questions, not generalised responses. Socio-legal methods usually state results in terms of generalities, which is obviously unsuitable for practical legal application.

Socio-legal research is quite difficult to carry out because the majority of lawyers simply do not receive adequate, if any, instruction in the intricacies of this methodological regime.  Most lawyers commencing postgraduate work will be products of ‘straight’ law degrees and will often have no appreciation of the existence of other methodologies, apart from doctrinal legal research.

Lastly, socio-legal methodologies require a large amount of time for locating the issues and carrying out research. To compound this, there is always the likelihood of failure especially if there are problems with the research design. As was noted above, there are an extensive number of techniques available. Time must, therefore, be spent choosing the most suitable method for collecting data. Even a small survey can require a lengthy preparation period for drafting and redrafting the survey questionnaire, testing the survey, producing and printing the questionnaire, selecting a valid sample, applying the questionnaire, collecting the data and then organising it according to category. Only after this has been done can the researcher reflect on the results in relation to the original hypothesis. This laborious process makes socio-legal methods less attractive.

Concluding remarks

It is easy to target a particular methodology and outline its strengths or weaknesses. However, it must be noted that all legal research methodologies are ultimately means of arriving at answers to questions raised in the course of attempts to understand issues arising within the law. There is no hierarchy amongst methodologies, as all of them are equally important for the development and understanding of law. What is crucial is that researchers should equip themselves with the skills necessary to enable them to meet their research requirements comfortably. A well-versed researcher will without doubt be alive to the merits and demerits of any particular methodology and will work to counter the negatives, resulting in better-quality work. Often, the combination of methodologies such as doctrinal and socio-legal research (or even of techniques within a particular methodology, such as quantitative and qualitative methods) serves to bring about a better understanding of the law, and postgraduate scholars would do well to equip themselves with alternative research methodologies.

Bibliography

Books

Amin, SH (1992) Research methods in law (Glasgow: Ryston Publishers).
Bryman, A (2000) Social research methods (Oxford, New York: Oxford University Press).
Burgess, RG Analyzing qualitative data (London: Routledge).
Burns, R (1996) Introduction to research methods (Melbourne: Longman Cheshire).
Channels, N (1985) Social science methods in the legal process (New Jersey: Rowman & Allenheld).
Ewick, P; Kagan, R & Sarat, A (eds) (1999) Social science, social policy and the law (New York: Russell Sage).
Fink, A (1995) How to ask a survey question (Thousand Oaks: Sage).
Kvale, S (1996) Interviews: An introduction to qualitative research interviewing (Thousand Oaks, California: Sage).
Loh, W (1984) Social research in the judicial process: Cases, readings, and text (New York: Russell Sage Foundation).
Rossi, P; Wright, J & Anderson, A (1983) A handbook of survey research (New York: Academic Press).
Scarman, L (1968) Law reform: The new pattern (London: Routledge & Keegan Paul).

Articles

Cownie, F (2004) ‘Researching (socio) legal academics’ 42 Socio Legal Newsletter p. 1
Getman, J (1985) ‘Contributions of empirical data to legal research’ 35 Journal of Legal Education 489.
Holmes, OW (1896-1897) ‘The path of the law’ 10 Harvard Law Review p. 469.
Mason, J (1994) ‘Linking qualitative and quantitative data analysis’ in A Bryman and
Mullane, G (1998) ‘Evidence of social science research: Law, practice, and options in the Family Court of Australia’ 72 Australian Law Journal 434.
Sjoberg, G; Williams, N; Vaughan, T & Sjoberg, A (1991) ‘The case study approach in social research’ in JR Feagin, A Orum & G Sjoberg (eds) A case for the case study (Chapel Hill,
London: The University of North Carolina Press) p. 39.

Internet resources

‘Blair tries to move on from Iraq’ available at <http://news.bbc.co.uk/1/hi/uk_politics/ vote_2005/frontpage/4496029.stm> accessed 28 April 2005.
Socio-Legal Studies Association (1999) ‘Response of the Socio-Legal Studies Association to the National Centre for Research Methods consultation on the shape of the research and training programme’ available at <http://www.kent.ac.uk/ slsa/download/ncrm.doc> accessed on 28 April 2005.

Is the prime minister too powerful?

There are a lot of political issues in Great Britain today. The United Kingdom is a large, industrialized democratic society, and as such it has politics and therefore political issues. One of those issues is how the executive branch should work, and whether the Prime Minister has too much power. Right now in Great Britain there is a great debate on this issue, and I am going to examine it in detail. The facts I have used here are from different writings on British politics, all listed in my bibliography, but the opinions are my own, and so are the arguments I have used to support my views.

First let me explain the process through which a person becomes Prime Minister. The PM is selected by the sovereign, who chooses a man (or woman) who can command the support of a majority of the members of the House of Commons. Such a person is normally the leader of the largest party in the House. Where two parties are rivals in a three-party contest, such as those which occurred in the 1920s, he is usually selected from the party which wins the greatest number of seats. The Prime Minister is assumed to be the choice of his party, and nowadays, so far as can be ascertained, the participation of the monarch is a pure formality. Anyone suggested for this highest political office obviously has to be a very smart and willing individual; in fact it has been suggested that he be an "uncommon man of common opinions" (Douglas V. Verney). Not all Prime Ministers have fitted this bill exactly, but every one of them had to pass one important test: day-to-day scrutiny of their motives and behavior by fellow members of Parliament before they were ultimately elected to the leadership of their party. Unlike Presidents of the United States, all Prime Ministers have served a long apprenticeship in the legislature and have been ministers in previous Cabinets. Many Presidents of our country have been elected without ever having met some of their future co-workers, as in the case of Kissinger and Nixon, who had never met prior to the appointment.

Let us now examine the statutory duties and responsibilities of the Prime Minister. Unlike the United States, where the President's duties are specifically written out in the Constitution, the powers of the Prime Minister are almost nowhere spelled out in statute. Unlike his fellow ministers, he does not receive the seals of office: he merely kisses the hands of the monarch, like an ambassador.

The Prime Minister has four areas of responsibility. He is head of the Government; he speaks for the Government in the House of Commons; he is the link between the Government and the sovereign; and he is the leader of the nation. He is chief executive, chief legislator and chief ambassador. As we can see, the PM has a wide range of powers, maybe too wide. As head of the Government the Prime Minister has the power to recommend the appointment and dismissal of all other ministers. Far from being merely first among equals, he is the dominant figure. Ministers wait in the hall of the PM's office at No. 10 Downing Street before being called into the Cabinet room. He may himself hold other portfolios, such as that of Foreign Secretary (as did Lord Salisbury) or Minister of Defense (as did Mr. Churchill). He has general supervision over all departments and appoints both the Permanent Secretary and the Parliamentary Secretary. The Cabinet Office keeps a record of Cabinet decisions to make sure that the PM has up-to-date information, and he controls the agenda which the office prepares for Cabinet meetings. There is a smaller Prime Minister's Private Office, which consists of a principal private secretary and half a dozen other staff drawn from the civil service. Perhaps owing to American influence, the two offices are becoming increasingly popular, and there are signs that the Prime Minister is no longer content to be aided by non-political civil servants. There is little doubt that, if he chooses, the PM can be in complete command of his Cabinet.

The PM must also give leadership in the House of Commons, though he usually appoints a colleague as Leader of the House. He speaks for the Government on important matters (increasingly, questions are directed to him personally) and controls the business of the House through the Future Legislation Committee of the Cabinet, which he appoints mainly from the senior non-departmental ministers. Since the success of his legislative program depends mainly on the support of his party, he must as party leader attend to his duties and ensure that the machinery of his party is working properly and is in the hands of men he can trust. Basically the PM controls his party, and in essence he controls Parliament, but that is not all. The PM alone can request the sovereign to dissolve Parliament and call a new election, and it is open to debate whether it is this power that allows him to control the party and the Parliament.

I agree with this argument completely, because if the PM doesn't like the way things are going with his party he can always call a new election, so Parliament pretty much backs up whatever the PM proposes. This is my main argument in this paper. In the United Kingdom there is no system of checks and balances like there is in the United States. In the UK the PM and the Cabinet make a decision which is then almost blindly supported by Parliament. A real democracy cannot function this way, with one person holding power while the rest can hardly do anything about it. Members of the majority party will not go against the will of the PM, because it means going against the will of their own party, and that is unheard of in England; members of the opposing party cannot do anything because they are a minority. The Queen herself is a figurehead and does not have any real power. The PM is the link between the monarch and the Government: he keeps the Queen aware of what goes on in the Cabinet, the Government and the world at large. Although the Queen is a ceremonial figure with no real power, she can damage the reputation of the Government and the entire country with one careless word, and it is the Prime Minister's responsibility to keep the monarch well informed. Other ministers can see the monarch only with the PM's permission (the monarch, however, can see whomever she chooses). Here is another illustration of the PM having too much power: he has an exclusive relationship with the monarch and controls who can see the Queen and who cannot. In the US this is unthinkable; any congressman can request an audience with the President, and if, let's say, the Chief of Staff wanted to limit that in any way he would run into some serious problems.

Finally, the PM is the leader of the nation. In times of crisis the people expect him to make an announcement and to appear on television. Increasingly he must be a man who can secure the confidence not only of the House of Commons but of the man in the street, or rather the man in the armchair in front of the television. Elections are ostensibly fought between two individual parliamentary candidates, but in practice they are contests between national parties which offer their own political and economic programs. The parties convey an “image” to the nation through the voice and appearance of their leaders. The Prime Minister must outshine his rival, the Leader of the Opposition. In the 1964 election, when the Liberals doubled their vote, much importance was attached to the TV performance of the Liberal leader, Jo Grimond.

The Head of State and traditional “symbol of the Nation” may be the Queen and the Royals, but the chief executive is in reality the PM. It is to his desk that all difficult problems ultimately come, whether these involve participation in NATO, a balance-of-payments crisis, the budget, or even the royals’ love affairs (as in 1936 and again in the 80s and 90s). It is the PM who has to symbolize his country’s policies abroad, and it is he who must personally convince political leaders in other countries that his Government can be relied upon.

The Prime Minister is also chief legislator. Through the Future Legislation Committee he determines which bills the House of Commons will discuss during the session, and he can attach whatever importance he chooses to an Immigration Bill or a Steel Nationalization Bill. With few exceptions, bills are introduced in the House by the Government, and if they are important they require the backing of the Premier.

He is also the chief administrator. Not only does he supervise the departments and chair Cabinet meetings, but he directs the Cabinet Office and the Office of the Prime Minister. In economic affairs he decides governmental strategy in conjunction with his Chancellor of the Exchequer and Minister of Economic Affairs, if there is one, and leaves these ministers to implement his policies. In defense policy he chairs the Defense Committee of the Cabinet, leaving the details to the Secretary of Defense (Army, Navy, and Air Force) and the Chiefs of Staff. Foreign affairs, normally the responsibility of the Foreign Secretary, require the intervention of the PM when really important decisions have to be made.

As we can see, the PM is potentially a very powerful figure. Everything depends on how he chooses to use this power and on how successfully he delegates some of his responsibilities.

All PMs have had an inner circle of ministers to which they turn when quick decisions have to be taken. The more important departmental ministers tend to be the Foreign Secretary, the Home Secretary, and the Chancellor of the Exchequer, but these may not compose the inner circle of a given PM. Senior ministers do not have to be members of the inner circle; they usually are, but not always. The Cabinet is usually composed as follows: the PM, three to six inner-circle members, and the remainder of the Cabinet, numbering about fifteen. It is easy to see why the PM needs an inner circle. In the United States, for example, the President can approve the appointment of a person to a high political position without ever having met him or her. In Britain this would sound ridiculous; all major political figures have known each other for years, having probably gone to the same schools together. The British believe that good friends make good decision makers, which to me sounds very reasonable. This fact can be viewed from two different perspectives. Some people say that when a new PM is elected he usually appoints all his friends to high positions, creating an inner clique with which he governs as an absolute ruler; the opposing view says that you need to know your colleagues for years in order to work with them successfully. Both views have a point, and this is a very hot topic in British politics right now.

Human Resource Management, Planning and Development, and Performance of McDonald’s restaurant

Introduction

The objectives of this report are to look at the Human Resource Management (HRM), Human Resource Planning and Development (HRP & D), and Performance of McDonald’s restaurant. Furthermore, it explains the human resource management activities, models of human resource management, effectiveness of organisational objectives, and performance monitoring of McDonald’s restaurant. The human resources of any organisation are its most important resource, since there is a direct connection between the quality of the workforce and the achievement of its ultimate goals. The human resource department of McDonald’s is responsible for having the right people, in the right numbers, in the right jobs, at the right time and cost, with the right knowledge, skills, and experience, in the right place; it is also responsible for training all staff and keeping records on them. This report also indicates how human resource management is used and how it works in the company.

McDonald’s fast food restaurant began in America in 1954. It is the leading food service retailer around the world, with more than 30,000 restaurants in 119 countries serving 47 million customers each day. Moreover, McDonald’s is one of the world’s most well-known and valuable brands, and its share of the global market is increasing. It is now recognised worldwide and is a leading restaurant chain in the UK. Today more than 2.6 million people in the UK trust McDonald’s and go there to eat because it provides good food to a high standard, quick service, and value for money.

Task-1
Human Resource Management (HRM)

Human resource management is the people management function of an organisation, and it focuses on issues related to people: for example, compensation, performance management, organisation development, safety, benefits, employee motivation, communication, administration, and training.

As defined by Storey in 1995, “HRM is a distinctive approach to employment management which seeks to achieve competitive advantage through the strategic development of a highly committed and capable workforce, using an integrated array of cultural, structural and personnel techniques.”

Human Resource Management (HRM) Activities

Human Resource Management (HRM) deals with the ‘human’ side of an organisation. An organisation carries out many HRM activities to achieve its ultimate goals: for example, recruitment and selection, training and development, human resource planning, provision of contracts, provision of fair treatment, provision of equal opportunities, assessing the performance of employees, employee counselling, employee welfare, payment and reward of employees, health and safety, and dismissals and redundancy. Three of these are explained below.

Recruitment and Selection

In an organisation, staffing levels change over time, and the HR department maintains the supply of personnel to meet demand. The recruitment process can be costly and takes a great deal of time to set up. It includes defining what the job needs, advertising, handling applications, identifying who best meets the criteria, interviewing candidates, and finally selecting the best candidate for the post. McDonald’s advertises its job vacancies on its own websites and uses its own personnel department to recruit staff.

Objectives of the Recruitment and Selection

The objectives of the recruitment and selection process of human resource management are as follows:

To identify the most appropriate candidate to fill each post
To keep the cost of selection down
To make sure that the necessary skills and qualities have been specified, and to develop a process for identifying them in candidates
To make sure that the candidate selected will want the job and will stay with McDonald’s
To maximise the effectiveness of McDonald’s recruitment and selection practices

Achievement of the Recruitment and Selection

At McDonald’s, the recruiting process runs throughout the year. Like other organisations, McDonald’s recruits both internally and externally, and for the most part it recruits its managers and assistant managers internally rather than externally, because this is easier and less training is needed, as the candidate already knows the job well. About 50% of McDonald’s salaried managers are promoted from within McDonald’s.

In addition, when preparing a job description, McDonald’s specifies the job title, department, location, responsibilities, purpose, and duties.

McDonald’s uses an application form with typical questions to find out what qualifications the applicant has, such as knowledge, skills, and experience. Filling in the application form and answering its questions is the first step for an applicant at crew member level.

The interview is the most crucial part of the recruitment process for a potential McDonald’s employee. Shortlisted candidates are called for an interview with an area manager or store manager at their nearest branch. From the face-to-face interview, interviewers can learn about applicants’ behaviour, confidence, and knowledge, and basically how the applicants come across as people. At crew member level, the interview is the second step for an applicant.

At this step, shortlisting involves selecting a small number of applicants for the next stage. This selection process continues until the right number of candidates with the wanted qualities has been found, so the recruiting team can easily distinguish the strong applicants from the hundreds of weaker ones. McDonald’s informs successful candidates by phone or by email within one week. McDonald’s then arranges an induction for the new employee, which may turn him or her into a long-term, loyal member of staff; at McDonald’s, the induction process begins even before the candidate is offered the job.

Training and Development

From the initial training, called skill training, employees learn the basic job knowledge of each position and can develop. Moreover, the ongoing training programme provides a more advanced level of job knowledge and makes employees more productive. An ongoing programme of training evaluation enables employees to keep their training up to date and aligned with the demands of the business. McDonald’s training and development programme is an important part of the 100% customer satisfaction that the company aims to achieve as its ultimate goal.

Objectives of the Training and Development

McDonald’s arranges training and development programmes for many reasons. For example, a training and development programme may be introduced to:

Increase job satisfaction and motivate employees, thereby reducing absenteeism and labour turnover
Reduce wastage and accident rates by achieving excellent performance across the workforce
Develop the skills of existing employees to cope with labour shortages
Establish the most effective and efficient working methods in order to maximise productivity and remain competitive
Introduce the use of new equipment and the application of new technology

Achievement of the Training and Development

Training is the foundation of any success, and McDonald’s believes so too. At McDonald’s, training begins immediately with a one-hour orientation. Each branch of McDonald’s has its own video player and training room, and step-by-step manuals and video tapes cover every detail of the operation. McDonald’s is dedicated to the training and development of all its employees, providing career opportunities; it is an ongoing process for all McDonald’s employees, everyone’s job, every day. According to their position, all employees receive induction training followed by a structured development programme. After completing the initial training, they must successfully pass an Observation Checklist (OCL) in the particular area.

McDonald’s has a 21-day probationary period of employment. During this period, the employee’s performance is evaluated on, for example, standard of work, personal attitude, teamwork, customer focus, and hygiene. By the end of the probationary period, they must have achieved a competency rating of satisfactory. If they fail to meet the required standards of performance, they can be terminated at any time during their probationary period.

Human Resource Planning

To achieve McDonald’s goals, human resource planning is concerned with getting the right people, using them effectively, and training and developing them. In order to meet McDonald’s aims and objectives successfully, the use of people must be planned in an effective way, any problems likely to occur (such as difficulty recruiting the best candidates) must be identified, and proper solutions found.

“Human Resource Planning (HRP) is the process of ensuring an organisation has the correct staff at the right time, with the right skills and abilities in the right place.”

Objectives of the Human Resource Planning

The aims of Human Resource Planning (HRP) are to:

make the best use of human resources
anticipate problems with surplus staff
build up a well-trained and flexible workforce
decrease the organisation’s dependence on outside recruitment agencies

Achievement of the Human Resource Planning

Like all other businesses, McDonald’s needs the assistance of staff to carry out its daily activities. Every member of staff at McDonald’s fulfils a key role in its operation. Without sophisticated technology McDonald’s would not be successful, but human beings are responsible for setting that technology up properly.

If McDonald’s managers do not select potential employees carefully and do not match their abilities against the demands of the post, hiring unsuitable people can create a number of problems, for example:

poor productivity levels
poor morale among staff
job dissatisfaction
high absenteeism levels
customer complaints
dismissals
replacements

To forecast the demand for labour, McDonald’s analyses its future plans and estimates the levels of activity within the business. As a result, it can ensure that the organisation has the right number of potential employees of the right quality.

The external labour market is very important for any organisation because it is made up of potential employees, locally and regionally, who have the necessary skills and qualifications at any given time. For McDonald’s, local unemployment figures are very important because they indicate the general availability of labour at that time.

McDonald’s human resource planning also includes examining how labour is organised within the business.

Theoretical Models of Human Resource Management

Generally, human resource strategy is performance- or behaviour-based. In addition, employees are a key resource for any service organisation.

“Organisations which successfully manage change are those which have integrated their policies with their strategies and strategic change process.”

Johnson & Scholes (1992), Exploring Corporate Strategy

There are many models of HRM associated with an organisation, such as:

The Fombrun, Tichy and Devanna model
The Harvard model
The Warwick model
Guest’s model

Two of these models are explained below:

The Harvard model

According to the Harvard model, “HRM policies need to derive from critical analysis of: the demands of the various stakeholders in a business and a number of situational factors”.

Hannagan, Tim (1995)

Employees are both variable and valuable for any organisation. Moreover, organisations are owned and operated by various people (stakeholders), and the task of management is to balance the returns to every person involved. The Harvard model emphasises the importance of integrating HR policies with business objectives. It is known as the ‘soft’ approach to HRM, treating employees as stakeholders of the company. The model discusses four policy areas: reward systems, employee influence, human resource flow, and work systems. It also includes situational factors such as the influence of trade unions, the labour market, and laws, which are likewise relevant to the theory. In this theory, the effectiveness of HRM is measured by the four ‘C’s’: commitment, competence, congruence, and cost-effectiveness. According to the Harvard theory, employees are an asset rather than a cost.

At McDonald’s, line managers are responsible for people and store managers are responsible for the day-to-day running, a mixture of both the hard and soft approaches. McDonald’s believes that staff are an asset, which is shown by its training provision, and that they represent a long-term investment for the company.

Guest’s Model

David Guest developed his model based on the Harvard model and included four outcomes, which he developed into four policy goals:

Strategic integration
Commitment
Flexibility
Quality

Guest’s (1987) theory is also reflected in McDonald’s policy. Guest believes the organisation should “aim for high level of commitment from staff, obtain high quality output, continually improve standards, flexibility from staff, no fixed job definitions, working practices and conditions and seek strategic integration through HR policies.”

At McDonald’s, standards are improved continuously and flexible working times are offered to staff. McDonald’s line managers accept HRM policies and combine them into strategic plans, and staff are allowed to change roles within the organisation.

Every organisation has its own culture: different values, ideas, and beliefs that affect the way it operates.

According to Handy, there are four types of culture: power, role, task, and person. McDonald’s culture combines two of these. Top management of McDonald’s reflects the ‘power’ culture: it makes the overall decisions, allowing a rapid response. The other is the ‘task’ culture: the overall aim of the organisation is task-orientated, focusing on team culture and strong communication between all levels of staff.

McDonald’s has integrated the contingency approach by considering the cultural environment. The contingency approach suggests that “different problems and situations require different solutions”; to reach a sound solution, both internal and external influences need to ‘fit’ together. This approach also influences staff promotions from within the company, McDonald’s restaurant itself. It is very helpful for staff, who are offered appraisals, which means increasing job opportunities for them.

Task-3
Performance management

Performance management is a holistic process that brings together many elements of successful people management. In particular, it includes learning and development, and gives an overview of employee status.

Performance management establishes a culture in which individuals and groups take responsibility for the continuous development of business processes and of their own contributions, skills, and behaviour. It is about interrelationships: improving the quality of the relationships between manager and individual, between manager and teams, and between members of teams. McDonald’s believes it is a joint process, not a one-off event, and that it applies to all employees, not just managers. McDonald’s business managers can therefore make clear what they expect individuals and teams to do: for example, how they should be managed and what they need to do their jobs.

Human Resource performance monitoring

McDonald’s follows ongoing performance management for employees: for example, setting goals, monitoring the employee’s accomplishment of those goals, giving the employee feedback, evaluating the employee’s performance, and rewarding performance or dismissing the employee. Performance management includes frequent activities to establish organisational aims and to achieve those goals more effectively and efficiently. McDonald’s believes that the best approach to achieving ‘value for money’ is to monitor the performance levels of staff in order to reduce wasteful actions.

Effective Human Resource Performance

By monitoring improvement, depa