Psychophysical Methodologies

Psychophysical methods are the tools for measuring perception and performance. These tools are used to reveal basic perceptual processes, to analyze observer performance, and to specify the required characteristics of a display. Psychophysical measurement is often defined as the measurement of behaviour in order to illuminate internal processes; the experimenter’s interest typically lies not in the behaviour itself, which may be as simple as pressing a button, but in what that behaviour reveals about perception.

Psychophysics was defined by Fechner (1860) as “an exact science of the fundamental relations of dependency between body and mind”.

Psychophysics sets out a range of methods for measuring sensory thresholds. The absolute threshold is the smallest stimulus intensity that can be perceived; the difference threshold is the smallest difference between two stimuli that can be perceived. Scaling methods, by contrast, ask the individual to judge how intense, beautiful or likable an object is. Together, these methods aim to relate the physical properties of the external world to a person’s experience of them: they are the link between the external world and inner experience, and they are directed at producing an accurate description of an observer’s sensory capabilities. Green (1993b) developed a maximum-likelihood technique by which a stimulus dimension can be sampled with high efficiency, yielding valid threshold estimates in as few as a dozen trials using a yes-no (single-interval) procedure. This method appears to hold promise for studies of large stimulus domains.
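
To make the logic of such an adaptive procedure concrete, here is a minimal sketch of maximum-likelihood threshold estimation in a yes-no task, in the spirit of Green’s (1993b) approach: after every response, the threshold that best explains all responses so far is computed, and the next stimulus is placed at that estimate. The logistic psychometric function, its slope, the lapse rate and the simulated observer are illustrative assumptions, not details taken from Green (1993b).

```python
# Minimal sketch of maximum-likelihood threshold estimation in a yes-no
# (single-interval) task. The psychometric function and all parameter values
# below are illustrative assumptions, not details from Green (1993b).
import math
import random

def p_yes(intensity, threshold, slope=10.0, lapse=0.02):
    """Probability of a 'yes' response under an assumed logistic function."""
    core = 1.0 / (1.0 + math.exp(-slope * (intensity - threshold)))
    return (1.0 - lapse) * core

def log_likelihood(threshold, trials):
    """Log-likelihood of the observed (intensity, response) pairs."""
    ll = 0.0
    for intensity, said_yes in trials:
        p = p_yes(intensity, threshold)
        ll += math.log(p if said_yes else 1.0 - p)
    return ll

def run_session(true_threshold=0.5, n_trials=12):
    candidates = [i / 100.0 for i in range(1, 100)]  # candidate thresholds
    trials, stimulus = [], 0.5                       # arbitrary first stimulus
    for _ in range(n_trials):
        said_yes = random.random() < p_yes(stimulus, true_threshold)
        trials.append((stimulus, said_yes))
        # Next stimulus is placed at the current maximum-likelihood estimate:
        stimulus = max(candidates, key=lambda t: log_likelihood(t, trials))
    return stimulus  # final ML threshold estimate

if __name__ == "__main__":
    random.seed(1)
    print("ML threshold estimate after 12 trials:", run_session())
```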

Preferential Looking/Vision

Placing psychophysical techniques in the context of a forced-choice procedure allows behaviour to be examined from a quantified perspective (Teller, Morse, Borton, & Regal, 1974; Peeples & Teller, 1975; Regal, Boothe, Teller, & Sackett, 1976). This technique follows Fantz’s (1965, 1967) preferential looking (PL) technique, combined with a forced-choice approach to data collection (Blackwell, 1953; Bush, Galanter, & Luce, 1963); the variant may be called forced-choice preferential looking (FPL). An infant is held facing a stimulus display (Teller et al., 1974). A visual stimulus, such as an acuity grating, is presented in one of two possible positions, left or right, on each of a series of trials. The surrounding visual field is a medium grey, chosen to match the acuity grating in average luminance. Thus, if the stripes are fine enough to be invisible to the infant, the acuity grating will match the screen in brightness and the infant will have nothing to see. On the other hand, if the stimulus is visible and attractive, the infant will usually stare in the direction of the stimulus. As in Fantz’s technique, an adult observer is located behind a peephole at the center of the screen. The observer’s task is to use the infant’s eye and head movements, and staring patterns, to infer or guess the location of the stimulus.
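
As a toy illustration of how FPL turns looking behaviour into a psychophysical measure, the sketch below simulates the adult observer’s percent correct at several spatial frequencies; performance falls to chance (50%) once the grating exceeds the acuity limit. The visibility model, the 3 c/deg acuity limit and the tested frequencies are illustrative assumptions, not values from the studies cited above.

```python
# Toy simulation of forced-choice preferential looking (FPL). On each trial a
# grating appears left or right; an adult observer judges its location from
# the infant's behaviour. Percent correct per spatial frequency is the datum.
# The visibility model and all numbers are illustrative assumptions.
import random

def observer_correct_prob(spatial_freq, acuity_limit=3.0):
    """Assumed probability that the observer judges the side correctly."""
    if spatial_freq >= acuity_limit:
        return 0.5                        # invisible grating: pure guessing
    visibility = 1.0 - spatial_freq / acuity_limit
    return 0.5 + 0.45 * visibility        # graded performance above chance

def percent_correct(spatial_freq, n_trials=100):
    p = observer_correct_prob(spatial_freq)
    hits = sum(random.random() < p for _ in range(n_trials))
    return hits / n_trials

if __name__ == "__main__":
    random.seed(0)
    for freq in (0.5, 1.0, 2.0, 4.0):     # cycles/degree, illustrative
        print(f"{freq} c/deg: {percent_correct(freq):.0%} correct")
```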

Auditory/Speech Perception

Auditory psychophysics or psychoacoustics has been concerned with measures of absolute sensitivity, masking, and discrimination between sounds. These can differ in frequency content, intensity, duration, and spatial location. Other studies have compared different identification tasks, such as binary classification, numerical rating scales, absolute identification, and perceptual distance scaling (e.g., Ganong & Zatorre, 1980; Massaro & Cohen, 1983b; Vinegrad, 1972). The psychophysical methods applied in the study of speech perception are essentially the same as those applied in research on auditory, visual, or tactile perception of non-speech stimuli. Indeed, the generality across different stimulus domains and modalities of Weber’s law or the law of temporal summation has been an important discovery. Such laws are in accordance with behaviourist and information-processing orientations in psychology, which assume that perception and cognition are governed by general-purpose, domain-independent processes. The description of such processes is an important part of psychological research. Measurement of the absolute hearing threshold provides some basic information about our auditory system.
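
As a worked example of the Weber’s law mentioned above: the just-noticeable difference (JND) grows in proportion to the baseline intensity, ΔI = k · I, so the ratio ΔI/I stays constant. The Weber fraction k = 0.1 below is an assumed illustrative value, not one taken from the text.

```python
# Worked example of Weber's law: delta_I = k * I, so the JND scales with the
# baseline intensity while the ratio delta_I / I stays constant.
# The Weber fraction k = 0.1 is an illustrative assumption.
def jnd(intensity, weber_fraction=0.1):
    """Just-noticeable difference under Weber's law."""
    return weber_fraction * intensity

if __name__ == "__main__":
    for base in (10.0, 100.0, 1000.0):
        delta = jnd(base)
        print(f"I = {base:6.0f}: JND = {delta:6.1f}, ratio = {delta / base:.2f}")
```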

Auditory perception skills in infancy have been shown to be predictive of language outcome at 3 years of age (Benasich & Tallal, 2002). However, it is essential to investigate concurrent associations between auditory processing skills and language in the early school years. A study was designed to compare frequency discrimination abilities in forward and backward masking tasks in children aged 5-7 years and to investigate the relationship between auditory processing skills and language. Based on Sutcliffe and Bishop (2005), it was hypothesised that children’s frequency discrimination thresholds would be lower for forward masking than for backward masking. It was also hypothesised that children with better (lower) frequency discrimination thresholds would score higher on a language assessment than those with poorer (higher) thresholds. The role of nonverbal ability in auditory processing has been investigated in only a few studies, with inconsistent findings. McArthur and Bishop (2004b), for example, found non-significant differences in nonverbal IQ between groups with poor and good frequency discrimination. In contrast, Deary (1994) found a small but significant relationship between frequency discrimination and both verbal and nonverbal performance scores.
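
Frequency-discrimination thresholds of the kind discussed above are commonly estimated with adaptive staircases. The sketch below implements a generic 2-down/1-up staircase, which converges on roughly the 70.7%-correct point; the simulated listener, step factor and reversal counts are illustrative assumptions, not details of the studies cited.

```python
# Generic 2-down/1-up adaptive staircase for a frequency-discrimination task:
# two consecutive correct responses shrink the frequency difference, one
# error enlarges it. The simulated listener and all numbers are assumptions.
import random

def listener_correct(delta_hz, threshold_hz=20.0):
    """Toy listener: more likely to be correct as the difference grows."""
    p = 0.5 + 0.5 * min(delta_hz / (2.0 * threshold_hz), 1.0)
    return random.random() < p

def staircase(start_delta=80.0, factor=0.7071, n_reversals=8):
    delta, correct_run = start_delta, 0
    last_move, reversals = None, []
    while len(reversals) < n_reversals:
        if listener_correct(delta):
            correct_run += 1
            if correct_run < 2:
                continue                       # wait for the second correct
            correct_run, move = 0, "down"      # two correct: make it harder
            next_delta = delta * factor
        else:
            correct_run, move = 0, "up"        # one error: make it easier
            next_delta = delta / factor
        if last_move is not None and move != last_move:
            reversals.append(delta)            # direction change = reversal
        last_move, delta = move, next_delta
    tail = reversals[-6:]                      # average the late reversals
    return sum(tail) / len(tail)

if __name__ == "__main__":
    random.seed(2)
    print(f"Estimated frequency-difference threshold: {staircase():.1f} Hz")
```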

Human and monkey infants

Infants have also been shown to have excellent brightness discrimination capabilities and at least dichromatic color vision (Peeples & Teller, 1975), although their color vision may well be more limited than that of adults (Teller, Peeples, & Sekel, 1978). Infants respond to spatially sinusoidal striped patterns of various frequencies and contrasts, so that infant contrast sensitivity functions (CSFs) can be explored (Atkinson, Braddick, & Braddick, 1974). Infant macaque monkeys can be tested with PL (Fantz, 1967) and FPL techniques. With FPL, newborn pigtail macaque monkeys demonstrate visual acuity closely resembling that of human newborns, and improve over the first few postnatal weeks in a fashion entirely comparable to the improvement shown by human infants over the first few postnatal months (Teller, Regal, Videen, & Pulos, 1978). These studies of normal development also provide an interesting context for visual deprivation studies (e.g., von Noorden, 1973; Wiesel & Hubel, 1974; Regal, Boothe, Teller, & Sackett, 1976), and suggest that the critical or sensitive period during which the presence of visual stimulation is necessary for normal visual development coincides with the time during which acuity is emerging in the normally reared animal.

TG/Travel UK organisational structure

“The term organizational structure refers to the formal configuration between individuals and groups regarding the allocation of tasks, responsibilities, and authority within the organization” (Galbraith, 1987; Greenberg, 2011).

TG has a divisional organisational structure that is further split into functional structures. This is based on the assumption that Travel UK is representative of the remaining parts of TG in terms of structure. Travel UK has divisions (Airline, Commercial and Customer Operations) which have individual functional departments assigned to them. Divisional structures are organised by products or locations rather than functions (sales, finance, etc.). Divisional structures are decentralised, giving authority to the managers of the individual divisions so that well-informed decisions can be made by the specialised manager overseeing each division (Fouraker and Stopford, 1968). Large organisations face complex issues due to global markets, multiple interdependent business activities and cooperation with other organisations, and these factors require complex decisions to be taken quickly (Mihm, Loch, Wilkinson and Huberman, 2010). This, combined with the rules and processes in place at Travel UK to guide managers in making such decisions objectively, forms a relatively secure foundation for the volatile environments in which TG operates. The divisional structure allows quick adjustment to factors impacting the operation. “Many corporations have developed an organisational structure consisting of relatively autonomous business units to achieve clear focus of skills and effort towards different markets, plus clear accountability of managers” (Hewitt, 2003), providing further evidence that the divisional structure is the most appropriate for the globally operating TG. Another advantage is that the divisional structure’s autonomous nature tests and trains the division heads’ capabilities, which enables the development of general managers (Fouraker and Stopford, 1968). A disadvantage of this structure is that knowledge can be contained within each unit, limiting the sharing of expertise across the organisation (Steiger, Hammou and Galib, 2014). Each operational division having its own sphere of competence makes it likely that this is the case within TG.

Each division within TG/Travel UK has its own business support units divided by function (sales, HR, etc.). Functional structures give each local business unit direct access to areas of expertise, but this type of structure can foster a ‘silo mentality’ in which departments work for themselves and do not communicate with each other (Connor, McFadden and McLean, 2012). TG has made the decision to merge some functions where duplication existed. According to Mintzberg, many other organisations have decided instead to accept functional duplication in order to make the divisions less dependent on one another (Steiger, Hammou and Galib, 2014), which gives greater market robustness compared with the shared services model (merged functional departments).

TG’s structure is also impacted by employee relations (ER) aspects. “ER is the process of managing both the individual and the group in terms of contracts, regulations and collective behaviour…” (Purcell, 2012). This includes the differences in conditions of employment among and within TG’s business units due to mergers with other organisations. One of the main reasons for unsuccessful mergers is poor integration. One of the motivations to harmonise conditions of employment following mergers (vertical integration) is that work of the same value needs to be compensated in a similar way in accordance with the Equality Act 2010. Not harmonising terms and conditions of employment may increase the risk of legal claims under equal pay, discrimination and employment protection laws (Suff and Reilly, 2007). Towers Perrin (2003) explain that “disparate benefits and compensation policies need to be integrated to align the company’s employees with the senior management team’s business objectives”. Delaying the alignment of terms and conditions of employment may result in increased staff turnover (Suff and Reilly, 2007). Mergers provide an opportunity to revise practices and policies; not revising them can have a negative impact on organisational capability (PWC, 2016). If, following a merger, ways of working are different, it can create frustration and anxiety leading to additional turnover (Stafford and Miles, 2013). Further complexity is added by the fact that different conditions of employment and working practices exist not only in job descriptions but also in agreements negotiated by the trade unions with TG. Such differences are described as “monumentally difficult problems”, as they are covered by the UK’s Transfer of Undertakings (Protection of Employment) Regulations 2006 (TUPE) and “involve some forceful negotiating from any unions” (Levinson, 2014). TUPE is designed to protect the employment rights of workers who are being taken over by a different organisation. Not following the process can incur compensation and legal fees, as in the case of the Ministry of Defence, which paid £5,000,000 in an out-of-court settlement to 1,600 Unite (union) members (Stevens, 2014).

Changing such practices would therefore be complicated due to the seemingly inadequate cooperation between TG and the trade unions. Whilst the organisation holds monthly joint consultative meetings with the unions, the relationship between them appears strained. This is evident in Travel UK’s fear of strikes when making changes to cabin crew hours and working practices. An example of the impact that a general lack of trust between unions and organisations can have is that of British Airways and the union Unite in 2010/2011: in 18 months the cabin crew went on strike four times over issues such as working conditions, redundancies and benefits, resulting in revenue loss and customer complaints (BBC, 2011).

TG’s current divisional structure is only appropriate for the future if TG eliminates all factors that could hinder it from reaching its strategic goals. Some of these are:

Customer satisfaction: The strained relationship with the trade unions can result in recurring industrial action, which can delay or stop services, causing customer complaints.
Profit: The underlying fear of strikes and the differences in terms and conditions of employment create an environment in which TG cannot react quickly to the changes needed to remain profitable.
Staff engagement: The fractured conditions of employment and working practices among its business units can have a negative impact on engagement levels.
Sustainability: Different working practices across the organisation make it difficult to reach a common goal.

If these barriers are not removed, the structure of TG is not appropriate for the future. This statement is supported by the CIPD’s 18 key points for high performance and high commitment in workplaces. Two of these are commitment to single status for all employees and holiday harmonisation (Tamkin, 2004), both of which are currently not met within TG.

Importance of Human Rights approach to care

The Lunacy Commission was set up following the 1845 Lunacy Act. This government-appointed group of lawyers and doctors oversaw the conditions of the asylums in England and Wales. Appointed commissioners would visit the hospitals twice a year; their main objective was to ensure that the hospitals ran safely and efficiently, particularly in regard to treatments. The reports raised concerns, particularly about patients being certified insane, suicide prevention and the excessive force used when restraining or subduing those in the asylum. The reports also highlighted that a good amount of furniture had been provided for the use of inmates.

The government took over the building of asylums, eliminating private enterprise.

The government passed legislation to regulate activities in 1845 and 1853, and hospitals were also registered for the first time. In 1890 the Lunacy Act was passed in response to public concerns that some patients were being wrongfully detained, particularly women, who had very few rights. A wealthy woman could fall victim to financial abuse through private arrangements: her husband could have her certified insane and locked away, clearing the way for himself to inherit any financial benefits. Although great strides were made in protecting the rights of inmates, privacy was minimal; there were usually 50 inmates to any one ward, and each ward or wing was placed under lock and key.

1961 is considered the year in which attitudes towards institutionalized mental healthcare began to change. Enoch Powell had been appointed health secretary in 1960 and was given the task of reforming the nation’s crumbling health services, including the mental hospitals. During the Conservative Party conference in March 1961, Powell criticized the asylums.

He spoke of the transition to community-based care and of the horror that asylums inflicted on patients. Powell proposed a radical vision of community health care, not only reducing costs to the state (he envisioned 15,000 fewer psychiatrists) but also reducing psychiatric beds by 50%. His next point concerned those he described as the ‘sub-normal’ and the requirement to assess their needs and develop a clearer understanding of the issues faced in managing and caring for those individuals. Powell also advocated community services and better co-operation between local authorities and medical staff. He believed that services should be more flexible in what they offered and more person-centered, fitting the services to suit individual needs and rights.

Under the 1990 National Health Service and Community Care Act, any adult aged 18 or over who is eligible for and requires services from their local authority has the right to have their needs assessed and to be fully involved in the arrangement of services. Advocates can also play a vital role in ensuring that the individual’s opinion is respected and that any plan is implemented to enable them to live as independently as possible. The Act emphasized the importance of enabling and of tailoring support and services around the needs of individuals. Assessments are reviewed every year, unless there is a change of circumstances or the individual or local authority feels another review would be beneficial.

Importance of Human Rights approach to care

The human rights approach within care allows support workers to “realize the potential” of service users, highlighting the importance of holistic care: taking into account the emotional, physical and spiritual needs of an individual. Although a support worker may care for the physical needs of a patient with a physical disability, such as assisting with washing or preparing meals, a human rights approach advocates taking into consideration dietary requirements, such as the individual being a vegetarian, or religious observance, such as a Muslim or Jewish service user not consuming pork. The service provider would also not discriminate against any service user because of their religion, sexual orientation or any criminal convictions. This allows service users to feel comfortable with the services available and to raise complaints or seek support without fear of reprisals or intimidation, which promotes not only the individual’s dignity but also their human rights.

Underlying principles of Human Rights

The underlying principles of human rights particularly in relation to care can be broken down into the acronym PANEL.

Participation: Service users should, as far as possible, participate in the review of their care, especially when their needs are being assessed and services are being allocated, so that the appropriate support the individual requires can be offered.
Accountability: Services are held accountable by government-appointed agencies such as the Care Inspectorate in Scotland, which was set up by the Scottish Government, is accountable to ministers, and assures and protects everyone who uses these services. Using the person-centered approach and taking a holistic view of care allows service users to understand what is expected of them when they receive support and services.
Non-Discrimination: The service provider would not discriminate against any service user because of their religion, sexual orientation or any criminal convictions.
Empowerment: Services should empower the individual to make their own choices, such as how they dress and planning activities that suit their individual tastes, emphasizing individuality and choice.
Legality: Service providers and employees follow the law, ensure that recommendations protecting the safety of service users and employees are strictly adhered to, and report any cases of assault, abuse or other offences.

My Practice’s Human Rights approach

Enablement: I assist service users to set and achieve their goals, but I do not do this for them; I encourage them to fulfil their goals themselves, with the support they need. It is important that the care worker works in partnership with a range of integrated services, such as occupational therapists, to assist in meeting the service user’s needs. I try to uphold the service user’s independence, ensuring that I encourage the service user whilst supporting them rather than doing things for them.

Non-Discrimination: Service users have the right to live in an environment free from harassment and discrimination, so it is important that the care worker considers all factors so that their spiritual, cultural and religious needs are met. Service users also have the right to complain, without being discriminated against, if they have not received the care that they are entitled to or if they have been discriminated against. I aim to treat service users with equality and to embrace diversity. I ensure that I do not discriminate against any service user, and I embrace the diversity of service users’ disabilities, sexual orientations and religious beliefs, focusing on their individual needs rather than their lifestyle.

How did Watergate deepen the mistrust in the office of the President?

On August 9, 1974, Richard Nixon, 37th President of the United States of America, resigned from his executive post. Nixon was, and still is, the only US President ever to resign. The Watergate scandal had brought Nixon’s second term to an abrupt end and diminished his chances of retaining some form of respectability and honor, not only among Americans but among citizens of the world. Yet questions about Watergate still remain. One in particular: had the Watergate scandal exposed a systemic problem requiring structural resolution, or was it the unfortunate combination of a poor president and his unethical advisors? Essentially, how did Watergate deepen the mistrust established in the office of the President, and in what ways did this affect America?

Statistically, Americans are profoundly unhappy with their government. While the majority of Americans feel proud to be American, in the 1990s never more than 40% of Americans said that they trusted their government most of the time or just about always (McKay, Houghton, & Wroe, 2002, p.20). An evident majority think that politicians do not act in the best interest of the people, and believe that government is controlled by corporate money. During the Watergate scandal, Americans were shocked by the crimes of the Nixon presidency: investigations by the press and Congress exposed previously unimaginable levels of corruption and conspiracy in the executive branch. Following Watergate, the public’s faith in government was shaken; indeed, trust in government had been in decline since the assassination of President John F. Kennedy. The assassination had stolen the remainder of Kennedy’s life and deprived him of an impartial, balanced historical judgement. Watergate did the same to Nixon, denying him the same opportunity for a fair assessment, although Nixon himself had brought it about. In order to fully assess how Watergate damaged the trust placed in President Nixon, his whole presidency needs to be evaluated: domestic policy, foreign policy, and whether Watergate was really to blame for this mistrust, or whether the mistrust was already there and Watergate merely agitated it.

In Monica Crowley’s 1996 book Nixon off the Record, President Nixon brings up some points for consideration which not only challenge Watergate but question the scandal’s actual impact: ‘As President, until Watergate, my approval polls were never really below 50%. Neither were Eisenhower’s’ (Crowley, 1996, p.115). The significance of this is that Nixon compares himself to Eisenhower, one of the most highly regarded presidents in modern history. Nixon’s domestic policy involves not only his own policies but also those of the presidents who came before him. President Lyndon B. Johnson’s Great Society was a war on poverty, racial injustice and gender inequality, and some of its policies were carried on by Johnson as part of President Kennedy’s New Frontier legacy. The Civil Rights Bill that JFK had promised to sign was passed into law: the Civil Rights Act banned discrimination based on race and gender in employment and ended segregation in all public facilities. Yet African Americans all over the country were still denied protection from law enforcement, access to public facilities, and fair financial prospects. Nixon saw this as an unjust abuse of the system, calling it both unfair to African Americans and a waste of human resources that would otherwise benefit America’s development. Johnson also signed the Economic Opportunity Act of 1964, the law that created the Office of Economic Opportunity, aimed at attacking the roots of American poverty; this was dismantled under Nixon and Ford, who allocated its poverty programs to other government departments. Johnson’s popularity dropped because of Vietnam; members of his own party were seeking the nomination for president, and in March 1968 he announced to the people of the United States that he would not seek another term. Despite criticism, under LBJ the Great Society did impact many of the poorer Americans the program was aimed at: the share of Americans living in poverty fell from 26 percent in 1967 to 16 percent in 2012, and government action is literally the only reason we have less poverty in 2012 than we did in 1967 (Matthews, 2016). The Great Society was, however, deemed ahead of its time; the combination of it and the Vietnam War created massive budget deficits, and thus, as Howard Zinn neatly puts it, Johnson’s war on poverty in the 60s became a victim of the war in Vietnam (Zinn, 2005, p.601). Nixon’s major economic objective was to decrease inflation, and to do so he had to effectively end the Vietnam War. This he did not do; in fact, he expanded it, despite announcing on December 8th 1969 that the war would soon come to ‘a conclusion as a result of the plan that we have instituted’ (History.com, 2009). While ending the war was not something Nixon could do instantly, the US economy continued to fluctuate helplessly during 1970, which in turn resulted in a very poor performance from the Republican Party in the midterm elections; the Democrats held major seats and were heavily in control throughout Nixon’s presidency.

His presidency was not completely overshadowed by Watergate, although it has stained his legacy. Looking beneath the surface of Nixon’s administration, his domestic policy clearly impacted America’s poorest: total domestic spending by the federal government rose from 10.3% of the gross national product to 13.7% in the six years he was president. Granted, a portion of the increased domestic spending under Nixon was due to the delayed start of Great Society initiatives, but much of it was due to Nixon’s own plans. The New Federalism agenda essentially pointed out that all those before Nixon had failed to impact, let alone solve, social and growing urban problems. His New Federalism has been credited as a highlight of his presidency: “Nixon’s New Federalism provided incentives for the poor to work” (Nathan, 1996). Despite his efforts, Nixon could not take away the feeling among the American people that the American dream was failing following the assassinations of all the major civil rights figures: John F. Kennedy, Martin Luther King, Malcolm X and Robert F. Kennedy, all within the space of five years. In addition, the process of desegregation was taking place in many southern states, which created an immense amount of tension between minority groups and whites. Although Nixon was for desegregation, many traditional right-wing Republicans in the southern states felt very differently about the matter and were thus alienated by the Nixon administration.

Some hold the opinion that it was not so much Nixon who created or perpetuated this aura of mistrust in his office as the government agencies that served him. It is believed that a number of federal services contributed to this mistrust. The CIA was secretive and faceless in a sense, but the FBI took on a more public role, taking credit for its actions and influencing the press on numerous occasions. FBI Director J. Edgar Hoover morphed the FBI into what Richard Gid Powers called ‘one of the greatest publicity generating machines the country has ever seen’ (Powers, 1983, p.95). The share of Americans holding a favourable opinion of the FBI fell from 84% in 1965 to 52% in 1973, and fell again to 37% in 1975. On top of this, the FBI’s credibility was also damaged by Watergate: L. Patrick Gray, Nixon’s nominee after Hoover died, destroyed critical Watergate evidence, and the Watergate investigation revealed that all too often Nixon had used the FBI for political purposes. Kathryn S. Olmsted narrates how federal agencies abused their privilege: Watergate did what the Bay of Pigs had not; ‘it had undermined the consensus of trust in Washington which was a truer source of the agency’s strength than its legal charter’ (Olmsted, 1996, p.15). It showed that ‘national security’ claims could and would be used to cover up activities which were nothing but illegal. In brief, Nixon’s New Federalism was not new; throughout his political career he had opposed big-government programmes and fought to restore more power to state and local establishments. President Nixon did achieve a number of things; the restoration of power to lower levels of government, away from federal jurisdiction, is one example. A number of critics argue that although his domestic policy benefited minorities, the poor and women, his New Federalism failed to outlast his administration as he fought a losing battle to preserve his presidency following Watergate.

Foreign policy is where Nixon’s presidency becomes more plausible as a cause of mistrust in his office. During his time in office, he and certain federal agencies covered up a number of major mistakes made by the government. The Tonkin incident is essentially where it began: on 2 August 1964, the United States claimed that North Vietnamese forces had twice attacked American destroyers in the Gulf of Tonkin. Known today as the Gulf of Tonkin incident, this led to open war between North Vietnam and the United States, and it furthermore foreshadowed the major escalation of the Vietnam War in South Vietnam. The incident brought congressional support for the Gulf of Tonkin Resolution, passed unanimously in the House and with only two opposing votes in the Senate, which gave Johnson the power to take military action as he saw fit in Southeast Asia. By 1968 there were more than 500,000 American troops in South Vietnam (Zinn, 2005, p.477). This resolution still applied when Nixon was sworn into office. Nixon soon introduced US troop withdrawals but also authorized invasions of Laos and Cambodia, announcing the ground invasion to the American public on April 30, 1970. He expanded the Vietnam War at a time that called for its end; this led to widespread protests across America, and his popularity among younger Americans plummeted. Not only did this cause disturbances, it was considered a military failure, and Congress resolved that Nixon could not, and should not, use American troops to extend the war without congressional approval. Historian Harry Howe Ransom states that ‘[nothing in public hearings] suggests that Congress intended to create, or knew it was creating, an agency for paramilitary operations’ when accepting the Gulf of Tonkin Resolution (Howe Ransom, 1975, p.155-156), suggesting that it was Nixon’s own doing that created this mistrust where the Vietnam War was concerned. Nixon was not to blame for the entry into the Vietnam War, however; LBJ had taken advantage of a compliant Congress to quietly increase American involvement in Vietnam without telling the people what he was doing. LBJ’s time in office, then, saw the emergence of ‘presidential imperialism’.

Nixon also introduced new trends in American diplomacy. He argued that the communist world had two rival powers, the Soviet Union and China, and he and his close advisor Henry Kissinger exploited the relationship between the two to benefit America. During the 1970s, Soviet Premier Leonid Brezhnev agreed to import American wheat into the Soviet Union, creating trade and improving the economy. Nixon surprised the nation when he announced that he would travel to communist China in February 1972 and meet with Mao Zedong. Following this visit, the United States dropped its opposition to Chinese entry into the United Nations, and the groundwork was laid for diplomatic relations. Just as anticipated, this caused concern in the Soviet Union. Nixon hoped to establish détente, and in May 1972 he made an equally significant visit to Moscow to support a nuclear arms agreement in which the United States and the Soviet Union pledged to constrain the number of intercontinental ballistic missiles each would manufacture. It does seem that Nixon and Kissinger were playing with fire, simultaneously establishing relationships with both China and the USSR, but ultimately it was a tactical move from the duo. From a foreign policy perspective, it was wise to establish foundations for a diplomatic relationship. In terms of domestic politics, however, many Americans were mortified: Nixon had built his reputation as an anti-communist, so this could easily be seen as nothing more than horrible irony, and it was believed that Nixon was inspiring left-wing enthusiasts to form and act on these international relations. Furthermore, President Nixon was responsible for the My Lai massacre cover-up. On March 16th 1968, a squad of US soldiers mercilessly killed between 200 and 500 unarmed civilians at My Lai, a small village near the north coast of South Vietnam. My Lai was successfully covered up by US commanding officials in Vietnam for well over a year. Even prior to Watergate, then, Nixon was the main culprit in yet another crime, in this case a crime against humanity, one that could have led to his impeachment. In hindsight it is apparent that the President initiated the corruption of the trials of those found guilty at My Lai so that no US soldier would be convicted of war crimes (History.com, 2010).

Finally, we reach the crown jewel of Nixon’s presidency: Watergate. Just as Clinton is associated with Lewinsky, Kennedy with Oswald, Lincoln with slavery and Obama with Bin Laden, Nixon is associated with Watergate. In his second term Nixon became ruthless with his domestic opponents: he withheld grants and funding appropriated by Congress and often sought to withhold information from Congress; he was denied an injunction to prevent the publication of the Pentagon Papers and later, during the Watergate crisis, was forced to release tapes of recordings from the White House (Mervin, 1992, p.99). On top of this, he allowed secret missions to spy on his political opponents, including tapping phones and harassing the liberal Brookings Institution. This is how the Watergate scandal occurred: initiated by a break-in at the Democratic Party’s headquarters and followed by a presidential cover-up, eventually bringing Nixon to resign in 1974, before he could be prosecuted. The severity of Watergate has been played down in its aftermath; Nixon himself justified it in the worst possible way, arguing that no one in government made financial profit from Watergate (Crowley, 1996, p.215), and he compared his behaviour to that of previous presidents such as JFK and even of presidents after him, like Clinton. He was very critical of both executives, as he felt Kennedy was just as corrupt during the Bay of Pigs affair; principally, JFK had simply not been in office long enough for anything to take place. The Cuban Missile Crisis was corrupted by Kennedy’s administration, and the released transcripts were sanitized and passages removed, very similar to what Nixon had done with the Watergate tapes. Clinton was also a sore topic for Nixon, as Clinton had been able to get away with Whitewater. In later years, Nixon felt that he was unfairly penalised for Watergate while Clinton was able to evade the repercussions of Whitewater: ‘Watergate was wrong; Whitewater is wrong. I paid the price; Clinton should pay the price. Our people shouldn’t let this issue go down. They shouldn’t let it sink.’ (Crowley, 1996, p.219). This was a reference to those who would not let Nixon forget Watergate and what he had caused. Nixon’s final comment on Clinton was that Whitewater should be pursued and Clinton held responsible to whatever extent was necessary; it is easy to see how Nixon resented Clinton for indiscretions whose consequences he was largely able to evade. Watergate had shattered the liberal consensus. Americans had learned of the covert operations and dirty tricks that their secret warriors had carried out at the height of the Cold War, and of the murderous plots, drug testing and harassment of dissidents that had been carried out in their name; they had been taught a very diluted version of the world. The intelligence investigations forced Americans to face difficult questions about the competence of their intelligence agencies and the executive office of government, and about the tensions between secrecy and democracy. The many inquiries asked them to doubt the decency of Americans they had believed to be heroes, such as J. Edgar Hoover and John F. Kennedy, and to ask whether their nation truly adhered to its professed ideals. It can ultimately be determined that the failures of the American political system, real or perceived, have undermined the trust of the American people.

In conclusion, although events in every single presidency are likely to have added to suspicion of the office, Watergate had a significant impact on American trust in government. Most Americans are likely to cite factors like Vietnam and Watergate when considering Nixon, as both fit well into the decline of trust and the increasingly negative perceptions of American political leaders. However, it would be unfair to put too much emphasis on the incompetence and dishonesty of various presidents and members of Congress. Many believed that Ford would restore faith in the office of President and trust in the government. Ford was everything Nixon was not, honest and open, and he received a 71% approval rating shortly after he was sworn into office. In his inaugural address, incoming President Gerald R. Ford declared, “Our long national nightmare is over.” A month later, he granted Richard Nixon a full pardon; by doing this Ford damaged American optimism and showed that he had more loyalty to Nixon and his party than to the American people. This deepened the growing trend of cynicism about the office of the President even after Nixon.

Sexual harassment of women at the workplace

Introduction

Sexual harassment of women at the workplace is a form of violence against women on the basis of their gender. It violates a woman’s self-esteem, self-respect and dignity and takes away her basic human as well as constitutional rights. Sexual harassment is not a new phenomenon, and rapidly changing workplace equations have brought this hidden reality to the surface. Sexual harassment at the workplace has become ubiquitous in every part of the world, and India is no exception.

Like any other sex-based crime, sexual harassment of women is about power relationships, domination and control. It is not limited to what most people commonly think of, such as verbal comments, inappropriate touching or sexual assault; it takes myriad forms, and new forms or variants are being introduced every other day in this dynamic technological era. It may include derogatory looks, gestures, indecent proposals, writings or displays of sexually graphic pictures, SMS or MMS messages, comments about one’s dress, body or behaviour, and any other unwelcome or objectionable remark or inappropriate conduct.

The ultimate aim of the makers of the Constitution of India was a welfare state and an egalitarian society reflecting the aims and aspirations of the people of India. The Preamble, Articles 14, 15, 16, 19 and 21, the Directive Principles of State Policy and many other provisions secure social justice for women and thus help prevent sexual crimes.

Before the Sexual Harassment of Women at Workplace (Prevention, Prohibition and Redressal) Act 2013 came into force, legislation such as the Indian Penal Code, 1860, the Code of Criminal Procedure, 1973, and the Indian Evidence Act, 1872 provided protection to women. Various international conventions which India has signed and ratified also filled the gap until 2013.

The recent Sexual Harassment Act has its roots in the ghastly rape of a community worker, Bhanwari Devi, in rural Rajasthan. This incident and the humiliation that followed made the apathy of the system evident. Several women’s groups filed a Public Interest Litigation in the Supreme Court, on the basis of which the Vishakha Guidelines were formulated to prohibit the sexual harassment of women at the workplace. Various other judicial pronouncements also paved the way for the legislation of 2013.

Sexual harassment is often about the supposed inferiority of women. The victim is often confused, embarrassed or scared; she may be clueless about whom to share the experience with and whom to confide in. Sexual harassment at the workplace may have serious consequences for the physical and mental well-being of women, and it may also degenerate into its gravest form, rape.

There should be proper grievance mechanisms at workplaces to deal with this issue, and the accused should be punished without regard to their status or position at the workplace. There should be committees comprising especially women members, to make victims feel comfortable. Reporting of incidents should be encouraged, and those who dare to speak up must be protected from the wrath of employers. Employed women cannot be left at the whim and fancy of their male employers. Incidents of sexual harassment at the workplace are a stigma on our Constitution; if they are not prevented, our constitutional ideals of gender equality and social justice will never be realized.

Rationale

Sexual harassment can have a number of serious consequences for both the victim and his or her co-workers. The effects of sexual harassment vary from person to person and are often dependent on the severity and duration of the harassment. For many victims of sexual harassment, the aftermath may be more damaging than the original harassment. Effects can vary from external effects, such as retaliation, backlash, or victim blaming to internal effects, such as depression, anxiety, or feelings of shame and/or betrayal. Depending on the victim’s experience, these effects can vary from mild to severe.

The rationale behind taking this topic for the dissertation is to throw light on the various aspects of the law relating to sexual harassment, thereby helping women to better achieve their rights. Another reason for choosing this topic is to make the employer aware of his liability. Lastly, our Constitution has granted us certain fundamental rights, including gender equality and social justice. There is a strong relationship between these fundamental rights and the prohibition of sexual harassment at the workplace, as sexual harassment is a form of power relationship which treats women as inferior.

Scope

Sexual harassment at the workplace results from the misuse of power, not from sexual attraction. Legal scholars and jurists have emphasized that such conduct is objectionable because it not only interferes with the personal life of the victim but also throws a pall over the victim’s abilities.

The victims of sexual harassment may be men as well as women; this study particularly aims at the sexual harassment of women at the workplace.

The scope of this study is to pave the way for the prevention of sexual harassment at the workplace and to make women aware of their rights and of complaint mechanisms. Many women are ignorant of the laws which protect them from this kind of harassment, and many employers shrug off their responsibility to help fight it. This study aims to discuss the constitutional provisions, the legislation, and the employer’s liability in eradicating sexual harassment at the workplace.

Background

The law to check sexual harassment at the workplace, which prescribes strict punishment for the guilty, including termination of service, and similar penalties in the case of a frivolous complaint, came into effect on Monday.

The Women and Child Development (WCD) Ministry had come under attack for the delay in implementing the Sexual Harassment of Women at Workplace (Prevention, Prohibition and Redressal) Act, 2013, which was brought in after the outrage over the December 16 gang rape case, despite the fact that it had received Presidential assent on April 22, 2013. Before this Act came into force there was no special law on sexual harassment at the workplace; legislation such as the Indian Penal Code and the Criminal Procedure Code dealt with the problem.

After the horrifying gang rape of Bhanwari Devi, the Vishakha Guidelines were formulated to fill the gap, and in 2013 the Act came into force. This study will thus help trace how the laws have evolved.

Hypothesis

Like all other historical manifestations of violence, sexual harassment is embedded in the socio-economic and political context of power relations. It is produced within class, caste and social relations in which male power dominates. It is sex-based discrimination which endangers the well-being of women and imposes less favourable conditions upon them. This research tries to establish the link that “every incident of sexual harassment of women at the workplace results in a violation of the fundamental rights of women, and it is the employer’s liability to protect these fundamental rights”.

Research Methodology

Research is a systematic attempt to push back the bounds of comprehension and to seek, beyond the horizons of our knowledge, some truth or reality. Since the scope of the study is to establish a link between fundamental rights and the right against sexual harassment at the workplace, the research methodology chosen is doctrinal, and it will seek to elaborate all aspects of the subject and gain deep knowledge of it.

The study material will be collected through library visits: various books, periodicals, published articles, etc. Technology such as computer CDs will also be used to obtain and maintain information, and reliable internet resources will be used to a limited extent.

Survey of existing literature

The researcher will analyze and survey the various books available on sexual harassment at the workplace, on the Indian Constitution, and on the rights of women and their protection.

Aims and Objectives

The main aims and objectives of this research can be listed as follows:

● To outline the relationship between fundamental rights and the right against sexual harassment at the workplace.

● To make women aware that the right against sexual harassment at the workplace is their fundamental and constitutional right.

● To make women aware of the laws and policies concerning sexual harassment at the workplace.

● To highlight the liability of the employer to keep sexual harassment at the workplace in check, as the protection of women against sexual harassment is a constitutional and fundamental right.

● To search for solutions to the persisting problem of sexual harassment at the workplace.

● To understand the evolution of laws against sexual harassment at the workplace.

● To study the legal facets of the protection of the rights of women.

● To study the theme of the legislation and laws enacted to prevent sexual harassment at the workplace.

Scope

Sexual harassment at the workplace is a serious and ever-increasing problem in India. India already has one of the lowest ratios of working women in the world. It would be disastrous if companies, unclear about what constitutes sexual harassment, took the easy way out by simply rejecting women in favour of men.

It is the liability of the employer to make use of the constitutional articles and the new legislation to protect women against sexual harassment at the workplace. Sexual harassment in the workplace is one of the most complicated areas of employment law, and one of the areas that has recently received the most press. It often goes hand in hand with other illegal acts, such as gender discrimination.

CHAPTERISATION

The research project is divided into the following thirteen chapters for better understanding. The chapters are further divided into subpoints so that the material collected and the study done can be organised into chapters and sub-chapters. This chapterisation will give a better idea of, and a better insight into, the project. The chapters are systematically numbered and placed one after the other.

Chapter – I – Introduction

Though the constitutional commitments of the nation to women have been translated through various planning processes, legislations, policies and programs over the last six decades, a situational analysis of the social and economic status of women does not reflect satisfactory achievement in any of the important human development indicators. This chapter will highlight this vulnerable group and show how sexual harassment at the workplace speaks more to power relationships and victimization than it does to sex itself, and how sexual harassment is a form of sexual discrimination and subordination.

Chapter – II – Extent and Types of Sexual harassment

This chapter will enumerate the extent of sexual harassment at the workplace, especially in India. It will also describe the types of sexual harassment at the workplace, which include 1) quid pro quo, i.e. “this for that”, where the employer or a superior at work makes tangible job-related consequences, such as promises of promotion or higher pay, contingent upon obtaining sexual favours from an employee, and 2) hostile work environment, meaning an abusive working environment.

Chapter – III – Analysis of Statistical Data

In this chapter, statistical data will be collected from reliable sources. The data will be analyzed and proper conclusions arrived at. This chapter will show the numbers and may reveal the gravity of the problem.

Chapter – IV – Vishakha Guidelines

Till the Vishakha Guidelines there were no civil or penal laws in India to protect women from sexual harassment at the workplace. The brutal gang rape of Bhanwari Devi gave rise to the Vishakha Guidelines, which filled the vacuum. This chapter will cover the historical background of the Vishakha Guidelines and their important features. The Vishakha Guidelines began a new era in legislation for the protection of working women.

Chapter – V – Judicial pronouncements

The issue of sexual harassment at the workplace is such a complex one that even a simple understanding of it is a tedious and tardy process. Therefore the best way to understand it is to examine trends in the history of court precedents. The famous cases of Vishakha, Rupan Deol Bajaj, Shehnaz Mudhbalkhal and Medha Kotwal Lele will be covered in this chapter, and the recent cases of Tarun Tejpal and Justice AK Ganguly will be studied in detail. This chapter will trace the judicial inclination of the decisions.

Chapter – VI – Legal Framework in India – The Constitution

The Constitution of India gives equal protection to men and women; gender equality is one of the ideals enshrined in it, and the Constitution has even positively discriminated in favor of women. In this chapter various articles of the Constitution will be discussed, including Articles 14, 15 and 21 and many others which ensure protection for women. The Constitution is the mother of all laws, and all other legislation emanates from it. This chapter will therefore be important, as it will cover how the constitutional ideals and the fundamental rights enshrined in the Constitution have given rise to various laws protecting women.

Chapter – VII – Legal Framework in India – Criminal, Labour and Other Laws

The Sexual Harassment of Women at Workplace (Prevention, Prohibition and Redressal) Act, 2013 came into force in 2013. Before this, various criminal and other laws protected women from sexual harassment at the workplace.

Thus in this chapter all the other laws which fed into the formation of the Act of 2013 will be discussed in detail, along with how the Constitution and these laws aided its formation.

Chapter – VIII – The Liability of the Employer

This is one of the most important chapters, as it will discuss how the employer must take care to prevent incidents of sexual harassment at the workplace in his institution. It will also depict how the employer can use the laws and the Act of 2013 to ensure that such incidents do not occur, and, if they do occur, how to tackle them, legally and otherwise.

The employer should create a healthy environment at the workplace, and the accused should be made subject to the laws irrespective of their position in the institution.

Chapter – IX – An Analysis of the Act

In this chapter, the Sexual Harassment of Women at Workplace (Prevention, Prohibition and Redressal) Act, 2013 will be discussed in detail. Its objectives, complaint procedures, inquiry, compensation, punishments and other such features will be enumerated. The Act is the most important tool for battling sexual harassment at the workplace, and this chapter will show how it can best be used to the benefit of women at the workplace.

This Act is the result of much struggle and a long wait. It should be used in such a manner that it both prevents and eradicates sexual harassment at the workplace.

Chapter – X – Sexual Harassment at the Workplace – International Scenario

India has signed and ratified many international conventions which give special rights and protection to women, and it is obligatory on India’s part to ensure that women are protected equally. In this chapter various international conventions, such as CEDAW, will be discussed in detail.

This chapter will also discuss how these international enactments have acted as a source for legislation in India.

Chapter – XI – Sexual Harassment at the Workplace – Prevention Policies

This Chapter will enumerate how the prevention policies must be formulated and how the policies must be best used to prevent sexual harassment at the workplace.

Chapter – XII – Conclusion

This chapter will contain conclusions drawn from the findings of the research. The conclusion is the most important part of the research, as it sums up the whole of the research and gives a good insight into it.

The protection of the rights of women in India has always been upheld by the Indian Constitution and lawmakers, and women are given a place of dignity in all legislation. Since women in India have been suppressed since ancient times, legislators took special care to involve and protect women in the mainstream world.

The conclusion at this stage can be that women in India are protected by constitutional and fundamental rights, and that other legislation has its source in the Constitution. Thus it is a constitutional mandate to protect women from sexual harassment at the workplace.

It is the duty and responsibility of the employer to uphold the rights of women in his or her institution, and the recent Act of 2013 must be implemented properly.

Chapter – XIII – Suggestions

This chapter will contain suggestions for victims as well as for the employer, including suggestions on prevention policies and the duties of the employer. The suggestions will cover how one can strike the balance between constitutional rights and the rights of women.


Structure and issues of race within the international system/relations

“The problem of the twentieth century is the problem of the color line.” (Du Bois, n.d.)

Race has been at the epicenter of human affairs and has propagated throughout the centuries in several forms: economics, geography, education, health and socio-politics. This essay discusses and explains the structure and issues of race within the international system/relations; its evolution and development, how it impacts nations and their populations, and the elements of race and colonialism in structures of power; followed by the formation of the successful and long-lasting Eurocentric modern capitalism, which is still present in society and acts as a pattern of global hegemony (LeMelle, 1972).

Whether defined as a grouping of humans by their characteristics, either physical or ethnic (En.wikipedia.org, 2018), or as a contribution to and a product of stratification (LeMelle, 1972), race has conditioned and influenced many people across the globe, including their governance and leadership. From a socially constructed conception (En.wikipedia.org, 2018) to a major and predominant constraint on the global order and politics, the denotation of 'race' itself evolved, building a hierarchy between civilians and nations across the world: White versus Black, Asian and other ethnic groups (BAME). This division led to a different approach to how human beings perceived themselves, strengthening aspects such as levels of development, civilization values, history, religion, culture and traditions, physical features, garments and, mainly, color (Jacques, 2003).

Race portrayed, and still portrays, a significant role in the world order. With its hierarchy more than solidified, claiming whites as the dominant class and non-whites as the subordinates, it easily breeds racism, discrimination, inequality and conflict, perpetuating the ideology of a 'White Man's World' (LeMelle, 1972).

This expression was implemented and widely spread by Europeans with the intent to classify and divide populations according to their ethnicity and backgrounds. With voyages of discovery, colonialism, slavery and imperialism perceived as great sources of income and prosperity, it became easier for Europeans to act on their sense of white supremacy and go beyond borders to achieve these hateful and money-driven causes (LeMelle, 1972). Even though colonialism and slavery are over per se, the disparity amongst people in modern society is overwhelmingly large (Nkrumah, 1965); race is a structure that conditions and influences the power and actions of actors in the international realm and, in fact, remains beneficial for the Transatlantic couple, the EU and the US. It is then obvious that South American, Asian and African civilizations were reduced solely to their post-colonial and inferior identities, and that the conception of a modern and civilized Europe still lingers, powerful and wealthy, alienating others from recognition of their historical, cultural and financial contributions to the international system (Shilliam, 2011).

So how can this term have a credible structure of correlation and such an effect on the international system as we know it? The three antecedents mentioned earlier generated a meaningful advantage for the Transatlantic couple over the rest of the globe: big decision makers in IGOs such as the UN, WTO, IMF, NATO and World Bank; leaders of renowned institutions such as banks, universities and hospitals; predominant winners of warfare in events like WWI, WWII and the Cold War; huge influencers in culture, law, democracy, science, technology, engineering, religion and immigration; and responsible for the rise of capitalism and its role in globalization too. The Transatlantic couple is then easily seen as the global hegemon and the face of the international system, without any accountability for the fact that its strength and mighty development was built on the back and discreditation of the BAME population.

What also widely perpetuates the state of devaluation in which non-white nations currently find themselves is the lack of international opportunity and mobility, poverty and debt crises, uneven development rates combined with rapid population growth and poor wealth distribution, disproportionate citizenship status, high migration flows (LeMelle, 1972), neo-colonialism and dependency on Western states, unbalanced life chances and success by race and by state/region, and a white-privileged global society.


Effects of the murder of Stephen Lawrence on policing procedures

This essay will analyse the effects that the murder of Stephen Lawrence, which led to the Macpherson report, had on changes in police procedures and policy, especially concerning ethnic communities. The report itself includes seventy recommendations to tackle racism in the police force, with the race relations legislation being an important policy for improving procedures, as well as an investigation into the Metropolitan police force for institutional racism and the procedural failures surrounding the Stephen Lawrence case (The Guardian, 1999).

The murder of eighteen-year-old Stephen Lawrence occurred on 22nd April 1993, when the young man was stabbed, resulting in his death; however, it was not until January 2012 that two individuals were found guilty of his murder (BBC, 2018). The Macpherson report that followed the murder outlined changes in practice, including the "abolishment of the double jeopardy rule". Before this rule was abolished, an individual could not be tried again for a crime of which they had previously been found not guilty; this was a vital change in policy, as it led to the conviction of the individuals found guilty of Stephen Lawrence's murder (The Guardian, 2013). This suggests a positive effect on policing procedures, as cases succeeding the Stephen Lawrence case may not have achieved a conviction had the double jeopardy rule not been abolished. This is evident in the 'Babes in the Wood' murders, where the change in policy had a positive effect on policing procedures: the police were able to use new forensic evidence thirty-two years later to convict the murderer of the two young victims (BBC, 2018).

Bowling and Phillips (2002, cited in Newburn, 2017, p.854) suggested that the recommendations outlined in the Macpherson report led "to the most extensive programme of reform in the history of the relationship between the police and ethnic minority communities." This suggests a positive effect of the Macpherson report: some of the changes in policy and police procedure actioned by its recommendations have meant that the police have begun to regain the trust of ethnic minority communities, which will support police practice in the future.

The 2009 Home Affairs Committee report, written ten years after the Macpherson publication, highlights if and how the seventy improvement proposals outlined in the report had been met at the time of publication. The Home Affairs report highlighted that Dwayne Brooks suggested an important area for progression was the "introduction of appropriately trained family liaison officers in critical incidents" (Parliament, 2009). The report highlights that this key improvement in police procedure, surrounding appropriate training for family liaison officers to deal with critical incidents, has improved family liaison officers' ability to 'maintain relationships with families' whilst obtaining necessary evidence, and has improved confidence in the police within the black community (Parliament, 2009). This suggests that this change in policing procedure and policy, due to the Macpherson report, has had a positive effect, especially within the ethnic community. The report also highlights that this change has positively affected homicide detection rates, which the report put at 90%, "the highest of any large city in the world" (Parliament, 2009).

However, there are still issues surrounding police procedures, especially within ethnic minority communities, where the Macpherson report improvements may not have been actioned positively. This can be seen in the stop and search rates: policing statistics published by the government for 2016/2017 suggest that white individuals were stopped and searched at a rate of 4 per 1,000, whereas the rate for black individuals was 29 per 1,000 (Gov, 2018), over seven times higher. This suggests that police are stopping more black individuals than white, which may still indicate an element of institutional racism in the way police conduct this procedure.

The Prison Reform Trust also highlights an over-representation of black and minority ethnic (BME) groups in prisons. With the supporting evidence of the Lammy Review, the trust suggests that there is a clear correlation between the ethnicity of an individual and custodial sentences being issued (Prison Reform Trust, 2019), suggesting discrimination in police procedures and the court system. Therefore, the Macpherson report's improvements may not have been actioned positively in some elements of the criminal justice system.

In conclusion, where the recommendations have been put into place and are actively being worked upon, the Macpherson report has had positive effects on police procedures and policy. However, evidence such as the stop and search statistics shows that there are still issues in policing procedures and policy that need to be addressed.


Boohoo marketing and communication (PESTEL, SWOT)

This report will focus on a three-year Marketing Strategy Plan for Boohoo and a one-year Communication Plan, exploring what improvements Boohoo can make across its shopping experience, its site and its social media to drive sales in the UK market.

Methodology

Primary research

For my primary research I created a questionnaire on SurveyMonkey to find out about customer experience with the Boohoo brand. The sample of people used in the primary research ranged from ages 16 to 25, the main consumers Boohoo targets. Secondary research was carried out through websites such as WGSN.

Brand History

Boohoo is a UK online fashion retailer, founded in 2006 by Mahmud Kamani and Carol Kane. The brand specialises in its own-brand fashion clothing, selling over 36,000 products, including accessories, clothing, footwear and health and beauty. Boohoo also runs BoohooMAN, NastyGal and PrettyLittleThing, all targeted at 16-24 year olds.

Mission statement

Here at boohoo we are very proud of our brand and what we have achieved. Day to day we live by four key values that help us to continue to succeed and are at the heart of everything we do. This is our PACT, the values that seal the deal for boohoo.

The key issue boohoo faces is that there are many very similar retailers, including Missguided and Pretty Little Thing.

Macro/micro trends

Macro Trends

Political factors

Wage legislation – minimum wage and overtime
Work week regulations in retail
Product labelling
Taxation – tax rates and incentives
Mandatory employee benefits

Economic factors

Exchange rates
Labour costs
Economic growth rate
Unemployment rate
Interest rates
Inflation rates
Education level in the economy

Social factors

Class structure/hierarchy
Power structure in the society
Leisure interests
Attitudes (health, environmental consciousness)
Demographics and skill level of the population

Technological factors

Recent technological developments by Boohoo competitors
Impact on cost structure in Retail industry
Rate of technological diffusion
Technology’s impact on product offering

Environmental factors

Climate change
Weather
Recycling
Air and water pollution regulation in Retail Industry
Waste management in consumer services sector

Legal factors

Copyright
Data protection
Employment law
Health and safety law
Discrimination law

SWOT

Strengths

Strong distribution network – Boohoo has built a trustworthy distribution network which is able to reach most of its potential market
New markets – Boohoo has been entering new markets and making a success of them, such as BoohooMAN and PrettyLittleThing. This development has helped Boohoo build new revenue streams.
Good returns on capital expenditure – Boohoo has made good returns on capital expenditure by creating new revenue streams.
Reliable suppliers – Boohoo has strong, reliable suppliers of raw materials, enabling the company to overcome any supply chain holdups

Weaknesses

Investments in new technologies – given the scale of expansion and the geographies Boohoo is planning to expand into, it will need to put more money into technology, as the investment in technologies is not in balance with the vision of the company.
The profitability ratio and net contribution % of Boohoo are below the industry average.
Global competition such as Missguided, Topshop, Asos and H&M
No flagship stores

Opportunities

Opening up of new markets because of government agreements – the approval of new technology standards and government free trade agreements has provided Boohoo an opportunity to enter new emerging markets
Lower inflation rate – a low inflation rate brings more stability in the market and enables credit at lower interest for Boohoo customers
New technology gives Boohoo an opportunity to maintain its loyal customers with great service and to lure new customers through other value-positioned plans.
Continue using celebrity endorsements
Creating an online chat on their website that allows customers to receive 24-hour help.

Threats

Poor quality products compared to Boohoo's competitors
Increased competition within the industry
Technological developments by competitors – new technological developments by competitors pose a threat to Boohoo, as customers attracted to the new technology may be lost to competitors, decreasing Boohoo's overall market share.

Competitor analysis

The retail industry faces strong competition, and Boohoo has many competitors, such as ASOS, Missguided, New Look and H&M. One way that many consumers experiment with different trends across brands is to choose cheaper retailers, which in this case would be Boohoo.


Why do employees leave organisations? / Can a business force an employee to retire?

Some of the reasons employees leave organisations are poor culture, poor work-life balance, the need for greater flexibility, a poor management-employee relationship, lack of communication, poor pay, no room for growth and poor working conditions. However, employees will stay with an organisation if there is a good work-life balance, a sense of reward, good benefits packages, competitive salaries, a fun environment to work in, recognition and a financial need.

Peter Cheese, chief executive of the Chartered Institute of Personnel and Development, said: “It definitely takes time to get a new employee up to speed. It depends on the nature of the job; on one end of the spectrum, somewhere like McDonald’s can get new employees up to speed very quickly. On the other hand, there is a business development person in a professional development organisation where you’ve got to spend quite some time understanding the network and building connections to the client base and so forth, then three to six months is probably fairly typical” (Replacing Staff Cost)

It is important to understand the reasons why employees wish to leave an organisation, as there are costs associated with dysfunctional employee turnover; these costs may not only be financial but can also be intrinsic and reputational. Intrinsic knowledge loss is difficult to measure, but it is a loss either way: if an employee brought clients to the business or built fantastic relationships with clients whilst employed, that employee leaving the business would be detrimental. Reputational damage can cost the business immensely; if an organisation does not treat its employees or ex-employees well and this becomes known, it can be hard for the organisation to attract good talent and clients. According to an article in The Telegraph (Financial), replacing staff can cost up to £4 billion a year, an average of £30k per person.

One method for retaining talent in an organisation is to ensure there is an open and inclusive culture which promotes communication. One way of doing this could be to ensure the language used by the senior team, including HR, is, as Lucy Adams says, "human" (Human). Adams surmises that when we use jargon, the company can end up creating a distance between itself and the employee; whereas if we converse in a human way, using everyday language, we have the opportunity to create a more cohesive working team, where employees feel involved in dialogue, which could lead to greater engagement by encouraging a more human approach.

Of course, "jargon" has come about partly from a cultural need for some departments, such as HR, to retain a professional and non-committal distance. For example, if HR as an advisory agent apologised directly for offence caused to an employee, there could be ramifications later through allegations of admittance in sensitive situations.

Therefore, it is vital that when encouraging engagement through treating employees as humans and not numbers, due consideration is given to the wider ramifications of a change in language, and that the method of communication, such as platform, surveys and daily briefings, is also considered.

Another method employed by businesses is the approach of ensuring employees receive a greater work life balance. This can be done through several mediums:

Flexible Working (statutory and company culture)
Seeing the individual as a whole
Introduction of TOIL – for additional hours worked
Implementing training for time management; according to HR Review (Poor TM), "One of the biggest causes of stress in the workplace is poor time management"
Increased leave benefits (holiday, paternity, maternity); for maternity, for example, Glassdoor (Women) reports that Accenture pays nine months at full pay

Such benefits can clearly be open to abuse, and this is a possible downside.

—-

Under the Equality Act 2010 [Equ Act2010], The Advisory, Conciliation and Arbitration Service (Acas) stated that when managing retirement [Retirement], older workers can voluntarily retire at a time they choose and draw any occupational pension they are entitled to. However, employers cannot force employees to retire or set a retirement age unless it can be objectively justified as what the law terms 'a proportionate means of achieving a legitimate aim' [Please see appendix 8]. Acas have said that a direct question such as 'when are you retiring?' should be avoided; instead, open-ended questions should be used, such as where employees see themselves in a few years and what their contribution to the organisation will be. This could be done during a performance development review. An employee can change their mind at any time about retiring until they have handed in their formal notice.

An employer cannot compulsorily retire an employee, as this would leave the employer open to a complaint of unfair dismissal. When managing a dismissal, Acas states [Dismissal 2019] that it is always best to try to resolve any issues informally first.

According to the Employment Rights Act 1996 [ERA 1996], employees have the right not to be unfairly dismissed. Companies need to set out clear rules and procedures, act consistently when handling disciplinary procedures, and ensure employees and managers understand the procedures and rules.

Provided a fair procedure is followed, an employee can be fairly dismissed for one of the following reasons: capability (including the inability to perform competently), redundancy, conduct or behaviour, breach of a statutory restriction (such as employing someone illegally) or some other substantial reason (such as a restructure that is not a redundancy).

Before holding a disciplinary hearing, an investigation should be carried out and the employee given any evidence in time to prepare for the meeting. The employee should also be given the opportunity to bring a trade union representative or a colleague; although this companion cannot answer questions on the employee's behalf, they can ask them.

The employee should be given the opportunity to share their side of the situation and to challenge evidence.

If the disciplinary is based on performance, the employee should be given support, training and an opportunity to improve. Companies should not sack employees for a first offence unless it is gross misconduct, and a penalty should reflect the seriousness of the act. Staff can usually appeal against verbal, written, first and final warnings.

If an employee has been with the company for less than two years, they do not have unfair dismissal rights, with exceptions around discrimination and equality.

CIPD tells us that [redundancy CIPD] redundancy is a special form of dismissal which happens when an employer needs to reduce the size of its workforce. An employee is dismissed for redundancy if the following conditions are satisfied:

the employer has ceased, or intends to cease, continuing the business, or
the requirements for employees to perform work of a specific type, or to conduct it at the location in which they are employed, have ceased or diminished, or are expected to do so.

If there is a genuine redundancy, employers that follow the correct procedure will be liable for:

a redundancy payment, and
notice period payment.

Employers that don't follow the correct procedure may be liable for unfair dismissal claims or protective awards. Redundancy legislation is complex and is covered by statute and case law, with both determining employers' obligations and employees' rights.


Should we fight against tort reform?

The controversy around tort reform has turned into a two-sided debate between citizens and corporations. The examination of various cases in recent years makes clear that the effects of tort reform have proven negative for both sides. This issue continues to exist today, as public relations campaigns and legislatures show a clear difference of opinion. In the event that tort reform occurs, victims and plaintiffs will be prevented from being fully compensated for the harm they suffered, making this process of the civil justice system unfair.

In the justice system, there are two forms of law: criminal law and civil law. The best known form is probably criminal law, in which the government (the prosecutor) proceeds against a defendant regarding a crime that may or may not have been committed. In contrast, civil law has a plaintiff and a defendant who contest a tort. As stated in the dictionary, a tort is "a wrongful act or an infringement of a right (other than under contract) leading to civil legal liability". In essence, a tort in a civil case corresponds to a crime in a criminal case.

Tort reform refers to the passing of legislation, or the issuing of a court ruling, that limits in some way the rights of an injured person to seek compensation from the person who caused the accident ("The Problems…Reform"). Tort reform also includes subtopics such as public relations campaigns, caps on damages, judicial elections and mandatory arbitration. Lawmakers across the United States have been heavily involved with tort reform since the 1950s, and it has only grown in popularity since then. Former president George W. Bush urged Congress to make reform in 2005 and brought tort reform to the table like no other president.

The damages often referred to in civil lawsuits are economic damages and non-economic damages. An economic damage is any cost that results from the defendant's actions, for example medical bills or money to repair things. Non-economic damages refer to emotional stress, post-traumatic stress disorder and other impacts not related to money. A cap on damages "limits the amount of non-economic damage compensation that can be awarded to a plaintiff" (US Legal Inc).

Caps on damages are the most common practice of tort reform. In New Mexico, Susan Seibert says that she was hospitalized for more than nine months because a doctor erred during her gynecological procedure. After suing, she was supposed to receive $2.6 million in damages, which was then reduced to $600,000 because of a cap on damages. Seibert still suffers from excessive debt as a result of not being given the amount of money she deserved. Caps on damages greatly impact the plaintiffs in a case: as mentioned earlier, plaintiffs sue because they need money in order to fully recover from the hardship they endured as a result of the defendant's actions.

A type of tort reform that is not as well known is specialized medical courts. Currently, all medical malpractice courts have juries that have little to no background in medical matters. This has worked well because it means that an unbiased verdict is reached. However, the organization Common Good is trying to pass the creation of special medical courts, in which the judge and jury would be trained medical professionals who deeply evaluate the case. Advocates for these courts feel that people will be better compensated for what they really deserve. However, the majority of opinion is against the idea. The most common view among those who oppose this new system is that it would put patients at a disadvantage: trained medical judges and juries would be more likely to side with the doctor, surgeon or defendant than with the plaintiff. Opponents believe that the fairest and most efficient way to judge medical malpractice cases is to use the existing civil justice system. One of the most famous medical malpractice cases, involving Dana Carvey, ended in a settlement, but could have turned out much worse for Carvey if the judge and jury had been medical professionals. Carvey was receiving a double bypass and had a surgeon who operated on the wrong artery. Had this case gone to a medical court, it is easily predictable that the verdict would have been that the doctor made a "just" mistake; the jury would have said that this mistake was not easily preventable and was a risk that could have been assumed going into the surgery. However, this case did not go to court; rather, it ended in a $7.5 million settlement.

Another form of tort reform is mandatory arbitration. Mandatory arbitration, as described in the article "Mandatory Arbitration Agreements in Employment Contracts", is "a contract clause that prevents a conflict from going to a judicial court". This has affected many employees who have experienced sexual harassment, stolen wages, racial discrimination and more. Often, "employees signed so-called mandatory arbitration agreements that are the new normal in American workplaces" (Campbell). These agreements are buried among the stacks of papers that have to be signed throughout the hiring process, and the manager will force the new employee to sign them. Most of the time, these documents will not be called a "Mandatory Arbitration Agreement"; rather, they may carry legalese names like "Alternative Dispute Resolution Agreement" (Campbell). "Between employee and employer, this means that any conflict must be solved through arbitration" ("Mandatory Arbitration Agreements in Employment Contracts"). When a conflict is solved through arbitration, "neutral arbiters" go through the evidence that the company and client present, and those arbiters decide what they think the just outcome should be, whether that is money, loss of a job, or something else. This decision is called the arbitration award.

A place where the effects of mandatory arbitration can be seen is the #MeToo movement. With the rise of this movement, more and more women have been coming out about their experiences with sexual harassment in the workplace. These women are then encouraged to fight against their harasser. Ultimately, many of them find out that they are not allowed to sue because of the mandatory arbitration agreements they signed during the hiring process. In fact, Debra S. Katz wrote an article for The Washington Post called "30 million women can't sue their employer over harassment", showing how widespread the issue is. Evidently, this form of tort reform harms the lives of over 30 million people. These women could be suffering from post-traumatic stress disorder, trauma and more from their experiences of sexual harassment. If this form of tort reform is not abolished, more and more women will suffer from mandatory arbitration.

By limiting the amount of money and reparations that a defendant will have to pay a plaintiff, tort reforms benefit major corporations. On the opposite side, the plaintiff suffers greatly from these limitations. In many cases, a plaintiff sues because they need the money to recover fully from the event that took place. For example, the documentary "Hot Coffee" discusses many tort cases, including occurrences in which the plaintiff suffered under the current regulations regarding caps, mandatory arbitration and more. Tort reform would further exacerbate the negatives of modern-day civil court cases.

Groups such as the American Tort Reform Association (ATRA) and Citizens Against Lawsuit Abuse (CALA) have also been active in fighting for tort reform. Alongside these campaigns, other issues with tort reform, such as the fairness of caps on damages, have exposed inequity in the civil justice system. Supporters of tort reform have been rallying for a common goal: to limit the ability of citizens to use the litigation process, in order to protect businesses and companies.

Victims and plaintiffs will be prevented from receiving the reparations they deserve for the hardship and suffering caused by the defendant's actions in the event that tort reform occurs. Caps on damages, special medical malpractice courts and mandatory arbitration are just a few of the negative impacts that tort reform would allow. Victims and plaintiffs sue the defendant to receive the full compensation they deserve; it is hard enough as it is to fight against major corporations, and tort reform would make it harder. Americans have the right to a fair trial, and the implementation of tort reform would take away that constitutionally given right. It is essential that Americans continue to fight against tort reform, as you never know if you may become the next victim.


Chinese suppression of Hong Kong

Would you fight for democracy? Its core principles are the beating heart of our society: providing us with representation, civil rights and freedom — empowering our nation to be just and egalitarian. However, whilst we cherish our flourishing democracy, we have blatantly ignored one of the most portentous democratic crises of our time. The protests in Hong Kong. Sparked by a proposed bill allowing extradition to mainland China, the protests have ignited the city’s desire for freedom, democracy and autonomy; and they have blazed into a broad pro-democracy movement, opposing Beijing’s callous and covert campaign to suppress legal rights in Hong Kong. But the spontaneity fueling these protests is fizzling out, as minor concessions fracture the leaderless movement. Without external assistance, this revolutionary campaign could come to nothing. Now, we, the West, must support protesters to fulfill our legal and moral obligations, and to safeguard other societies from the oppression Hong Kongers are suffering. The Chinese suppression of Hong Kong must be stopped.

Of all China's crimes, its flagrant disregard for Hong Kong's constitution is the most alarming. When Hong Kong was returned to China in 1997, the British and Chinese governments signed the Sino-British Joint Declaration, allowing Hong Kong "a high degree of autonomy, except in foreign and defence affairs" until 2047. This is allegedly achieved through the "one country, two systems" model currently implemented in Hong Kong. Nevertheless, the Chinese government, especially since Xi Jinping seized power in 2013, is relentlessly continuing to erode legal rights in our former colony. For instance, in 2016, four pro-democracy lawmakers, despite being democratically elected, were disqualified from office. Amid the controversy surrounding the ruling lurked Beijing, using its invisible hand to crush the opposition posed by the lawmakers. However, it is China's perversion of Hong Kong's constitution, the Basic Law, that has the most pronounced and crippling effect upon the city. The Basic Law requires Hong Kong's leader to be chosen "by universal suffrage upon nomination by a broadly representative nominating committee"; but this is strikingly disparate from reality. Less than seven percent of the electoral register are allowed to vote for representatives on the Election Committee, who actually choose Hong Kong's leader, and no elections are held for vast swathes of seats, which are thus dominated by pro-Beijing officials. Is this really "universal suffrage"? Or a "broadly representative" committee? This "pseudo-democracy" is unquestionably a blatant violation of our agreement with China. If we continue to ignore the subversion of the fundamental constitution holding Hong Kong together, China's grasp over a supposedly "autonomous" city will only strengthen. It is our legal duty to hold Beijing to account for these heinous contraventions of both Hong Kong's constitution and the Joint Declaration, which China purports to uphold. Such despicable and brazen actions, whatever the pretence, cannot be allowed to continue.

The encroachment on Hong Kongers' fundamental human rights is yet another travesty. Over the past few years, the Chinese government has been furtively extending its control over Hong Kong. Once, Hong Kongers enjoyed numerous freedoms and rights; now, they silently suffer. Beijing has an increasingly pervasive presence in Hong Kong and, emboldened by a lack of opposition, it is beginning to repress anti-Chinese views. For example, five booksellers, associated with one Hong Kong publishing house, disappeared in late 2015. The reason? The publishing house was printing a book, which is legal in Hong Kong, regarding the love-life of the Chinese president Xi Jinping. None of the five men were guilty; all five later appeared in custody in mainland China. One man even confessed on state television, obviously under duress, to an obscure crime he "committed" over a decade ago. This has cast a climate of paranoia over the city, which is already forcing artists to self-censor for fear of Chinese retaliation; if left unchecked, this erosion of free speech and expression will only worsen. Hong Kongers now live with uncertainty as to whether their views are "right" or "wrong"; is this morally acceptable to us? Such obvious infringements of the right to free speech are clear contraventions of the core human rights of the people of Hong Kong. Furthermore, this crisis has escalated with the protests, entangling violence in the political confrontations. Police have indiscriminately used force to suppress both peaceful and violent protesters, with Amnesty International reporting that "Hongkongers' human rights situation has violations on almost every front". The Chinese government is certainly behind the police's ruthless response to protesters, manipulating its pawns in Hong Kong to quell dissent. This use of force cannot be tolerated; it is a barefaced oppression of a people who simply desire freedom, rights and democracy, and it contradicts every principle that our society is founded upon. If we continue abdicating responsibility for holding Beijing to account, who knows how far this crisis will deteriorate? Beijing's oppression of Hong Kongers' human rights will not disappear. Britain, as a UN member, former sovereign of Hong Kong and advocate for human rights, must make a stand with the protesters, who embody the principles of our country in its former colony.

Moreover, if we do not respond to these atrocities, tyrants elsewhere will only be emboldened to further strengthen their regimes. Oligarchs, autocrats and dictators are prevalent in our world today, with millions of people oppressed by totalitarian states. For instance, in India, the Hindu nationalist government, headed by Narendra Modi, unequivocally tyrannizes the people of Kashmir: severing connections to the internet, unlawfully detaining thousands of people and reportedly torturing dissidents. The sheer depravity of these atrocities is abhorrent. And the West's reaction to these barbarities? We have lauded and extolled Modi as, in the words of then-president Barack Obama, "India's reformer in chief", apathetic to the outrages enacted by his government. This exemplifies our seeming lack of concern for other authoritarian regimes around the world: from our passivity towards the Saudi Arabian royal family's oppressive oligarchy to our unconcern about the devilish dictatorship of President Erdoğan in Turkey. Our hypocrisy is irrefutable; this needs to change. The struggle in Hong Kong is a critical turning point in our battle against such totalitarian states. If we remain complacent, China will thwart the pro-democracy movement and Beijing will continue to subjugate Hong Kong unabashed. Consequently, tyrants worldwide will be emboldened to tighten their iron fists, furthering the repression of their peoples. But, if we support the protesters, we can institute a true democracy in Hong Kong. Thus, we will set a precedent for future democracies facing such turbulent struggles in totalitarian states, establishing an enduring stance for Western democracies to defend. But to achieve this, we must act decisively and immediately to politically pressure Beijing to make concessions, in order to create a truly autonomous Hong Kong.

Of course, the Chinese government is trying to excuse its actions. It claims to be merely maintaining order in a city of its own country, while Western powers fuel protests in Hong Kong. Such fabrications from Chinese spin-doctors are obviously propaganda; there is absolutely no evidence to corroborate the claim of "foreign agents" sparking violence in Hong Kong. And, whilst some protesters are employing aggressive tactics, their actions are justified: peaceful protests in the past, such as the Umbrella Movement of 2014, yielded no meaningful change. Protesters are being forced to violence by Beijing, which stubbornly refuses to propose any meaningful reforms.

Now, we face a decision, one which will have profound and far-reaching repercussions for all of humanity. Do we ignore the egregious crimes of the Chinese government, and in our complacency embolden tyrants worldwide? Or do we fight? Hong Kongers are enduring restricted freedoms, persecution and a perversion of their constitution; we must oppose this oppression resolutely. Is it our duty to support the protesters? Or, is democracy not worth fighting for?


Occurrence and prevalence of zoonoses in urban wildlife

A zoonosis is a disease that can be transmitted from animals to humans. Zoonoses in companion animals are known and described extensively. Much research has already been done: Rijks et al (2015), for example, list the 15 diseases of prime public health relevance, economic importance or both (Rijks(1)). Sterneberg-van der Maaten et al (2015) composed a list of the 15 priority zoonotic pathogens, which includes the rabies virus, Echinococcus granulosus, Toxocara canis/cati and Bartonella henselae (Sterneberg-van der Maaten(2)).

Although the research is extensive, knowledge about zoonoses and hygiene instruction among owners, health professionals and related professions, such as pet shop employees, is low. According to Van Dam et al (2016)(3), 77% of pet shop employees do not know what a zoonosis is, and just 40% of pet shops have a protocol for hygiene and disease prevention. Only 27% of pet shops and animal shelters give instruction to their clients about zoonoses. It may therefore be assumed that the majority of the public is unaware of the health risks involving companion animals like cats and dogs. Veterinarians give information about responsible pet ownership and the risks when the pet owner visits the clinic (Van Dam(3), Overgaauw(4)). In other words, the dissemination of knowledge obtained from research has not occurred effectively.

However, urban areas are not only populated by domestic animals. There is also a variety of non-domesticated animals living in close vicinity to domesticated animals and the human population: the so-called urban wildlife. Urban wildlife is defined as any animal that has not been domesticated or tamed and lives or thrives in an urban environment (freedictionary(5)). Just like companion animals, urban wildlife carries pathogens that are zoonotic, for example Echinococcus multilocularis, a parasite that can be transmitted from foxes to humans. Another example is the rabies virus, which is transmitted by hedgehogs and bats. Some zoonotic diseases can be transmitted to humans by different animals: Q-fever occurs in mice, foxes, rabbits and sometimes even in companion animals.

There is little knowledge about the risk factors that influence the transmission of zoonoses in urban areas (Mackenstedt(6)). This is mostly due to the lack of active surveillance of carrier animals. Such surveillance requires fieldwork, which is expensive and time-consuming, and often yields no immediate result for public-health authorities; this is why surveillance is often only initiated during or after an epidemic (Heyman(7)). Meredith et al (2015) mention that, due to the unavailability of a reliable serological test, it is not yet known for many species what their contribution is to transmission to humans (Meredith(8)).

The general public living in urban areas is largely unaware of the diseases transmitted by the urban wildlife present in their living area (Himsworth(9), Heyman(7), Dobay(10), Meredith(8)). Since all these diseases can also be a risk to public health, the public may need to be informed of these risks.

The aim of this study is to determine the occurrence and prevalence of zoonoses in urban wildlife. To do this, the ecological structure of a European city will be investigated first, to determine which wildlife lives in urban areas. Secondly, an overview of the most common and important zoonoses in companion animals will be discussed, followed by zoonoses in urban wildlife.

2. Literature review

2.1 Ecological structure of the city

Humans and animals live closely together in cities; both companion animals and urban wildlife share the environment with humans. Companion animals are important to human society: they perform working roles (e.g. dogs for hearing- or visually-impaired people) and they play a role in human health and childhood development (Day(11)).

A distinction can be made between animals that live in the inner city and animals that live in the outskirts. The animals that live in the majority of European inner cities are brown rats, house mice, bats, rabbits and different species of birds. Those living outside the stone inner city are other species of mice, hedgehogs, foxes and moles (Auke Brouwer(12)). In order to create safe passage for this particular group of animals, ecological structures are created. These structures include wet passageways for amphibians and snakes, and dry passageways such as underground tunnels, special bridges and cattle grids (Spier M(13)).

A disadvantage of humans and animals living in close vicinity to each other is the possibility of transmitting diseases (Auke Brouwer(12)). Diseases can be transmitted from animals to humans in different ways, for example through eating infected food, inhalation of aerosols, via vectors or via fecal-oral contact (WUR(14)). The most relevant routes of transmission for this review are indirect physical contact (e.g. contact with a contaminated surface), direct physical contact (touching an infected person or animal), transmission through skin lesions, fecal-oral transmission and airborne transmission (aerosols). In the following section an overview of significant zoonoses of companion animals will be given. This information will enable a comparison with urban wildlife zoonoses later in this review.

2.2 Zoonoses of cats and dogs

There are many animals living in European cities, both companion animals and urban wildlife. 55-59% of Dutch households have one or more companion animals (van Dam(3)), including approximately 2 million dogs and 3 million cats (RIVM(15)). Across Europe there are approximately 61 million dogs and 66 million cats. Owning a pet has many advantages, but companion animals are also able to transmit diseases to humans (Day(11)). Significant zoonoses of cats and dogs are described below.

A. Bartonellosis (cat scratch disease)

Bartonellosis is an infection with Bartonella henselae or B. clarridgeiae. Most infections in cats are thought to be subclinical. If disease does occur, the symptoms are mild and self-limiting, characterized by lethargy, fever, gingivitis, uveitis and nonspecific neurological signs (Weese JS(16)). The seroprevalence in cats is 81% (Barmettler(17)).

Humans get infected by scratches or bites and sometimes by infected fleas and ticks. In the vast majority of cases, the infection is also mild and self-limiting. The clinical signs in humans include development of a papule at the site of inoculation, followed by regional lymphadenopathy and mild fever, generalized myalgia and malaise. This usually resolves spontaneously over a period of weeks to months (Weese JS(16)).

Few cases of human bartonellosis occur in The Netherlands. Based on laboratory diagnoses done by the RIVM, the bacterium causes 2 cases per 100.000 humans each year. However, the true number could be ten times higher: since the disease is mild and self-limiting most of the time, most people do not visit a health care professional (RIVM(18)).

B. Leptospirosis

This disease is caused by the bacterium Leptospira interrogans. According to Weese et al (2002), leptospirosis is the most widespread zoonotic disease in the world. The bacterium can infect a wide range of animals (Weese(16)).

In dogs and cats, leptospirosis is a relatively minor zoonosis. It is not known exactly how many dogs are infected subclinically or asymptomatically each year, but according to Houwers et al (2009), around 10 canine cases occur annually in The Netherlands (Houwers(19)). RIVM states that each year 0,2 cases per 100.000 humans occur (RIVM(20)).

Infection in dogs is called Weil's disease. Clinical signs can be peracute, acute, subacute or chronic. A peracute infection usually results in sudden death with few clinical signs. Dogs with an acute infection are icteric, have diarrhea, vomit and may experience peripheral vascular collapse. The subacute form is generally manifested as fever, vomiting, anorexia, polydipsia and dehydration, and in some cases severe renal disease can develop. Symptoms of a chronic infection are fever of unknown origin, unexplained renal failure or hepatic disease, and anterior uveitis. The majority of infections in dogs are subclinical or chronic. In cats, clinical disease is infrequent (Weese(16)).

According to Barmettler et al (2011), the risk of transmission of Leptospira from dogs to humans is merely theoretical: all tested humans had been exposed to infected dogs, but all were seronegative for the bacterium (Barmettler(17)).

The same bacterium that causes leptospirosis in dogs, Leptospira interrogans, is responsible for the disease in rats. This bacterium is considered the most widespread zoonotic pathogen in the world, and rats are the most common source of human infection, especially in urban areas (Himsworth(21)). According to the author, the bacterium asymptomatically colonizes the rat kidney, and rats shed it via the urine (Himsworth(9)). The bacterium can survive outside the rat for some time, especially in a warm and humid environment (RIVM(20)).

People become infected through contact with urine, or through contact with contaminated soil or water (Himsworth(21)). The Leptospira bacterium can enter the body via the mucous membranes or open wounds (Oomen(22)). The symptoms and severity of disease can be highly variable, ranging from asymptomatic to sepsis and death. Common complaints are headache, nausea, myalgia and vomiting. Moreover, neurologic, cardiac, respiratory, ocular and gastrointestinal manifestations can occur (Weese JS(16)).

The prevalence in rats differs between cities, and even between locations in the same city. Himsworth (2013) states that in Vancouver 11% of the tested rats were positive for Leptospira (Himsworth(9)). Another study, by Easterbrook (2007), found 65,3% of all tested rats in Baltimore to be positive for the bacterium (Easterbrook(23)). Krojgaard (2009) found a prevalence between 48% and 89% at different locations in Copenhagen (Krojgaard(24)).

C. Dermatophytosis (ringworm)

Dermatophytosis is a fungal dermatologic disease, caused by Microsporum spp. or Trichophyton spp., which causes disease in a variety of animals (Weese(16)). According to Kraemer (2012), the dermatophytes that occur in rabbits are Trichophyton mentagrophytes and Microsporum canis, although the former is more common (Kraemer(25)).

Dermatophytes live in the keratin layers of the skin and cause ringworm. They depend on human or animal infection for survival. Infection occurs through direct contact between dermatophyte arthrospores and keratinocytes or hairs. Transmission through indirect contact also occurs, for example via toiletries, furniture or clothes (Donnelly(26), RIVM(18)). Animals (especially cats) can transmit M. canis infection while remaining asymptomatic (Weese JS(16)).

The symptoms in both animals and humans can vary from mild or subclinical to severe lesions similar to pemphigus foliaceus (itching, alopecia and blistering). The skin lesions develop 1-3 weeks after infection (Weese JS(16)). Healthy, intact skin cannot be infected, but only mild damage is required to make the skin susceptible. No living tissue is invaded; only the keratinized stratum corneum is colonized. However, the fungus does induce an allergic and inflammatory eczematous response in the host (Donnelly(26), RIVM(18)).

Dermatophytosis does not occur commonly in humans: RIVM states that each year 3000 per 100.000 humans get infected. Children between the ages of 4 and 7 are the most susceptible to the fungal infection. In cats and dogs, the prevalence of M. canis is much higher: 23,3% according to Seebacher(27). The prevalence in rabbits is 3,3% (d'Ovidio(28)).

D. Echinococcosis

Echinococcus granulosus can be transmitted from dogs to humans. Dogs are the definitive hosts, while herbivores or humans are the intermediate hosts. Dogs can become infected by eating infected organs, for example from sheep, pigs and cattle (RIVM(29)). The intermediate hosts develop a hydatid cyst with protoscoleces after ingesting eggs produced and excreted by definitive hosts. The protoscoleces evaginate in the small intestine and attach there (MacPherson(30)).

In most parts of Europe, Echinococcus granulosus occurs only occasionally. However, in Spain, Italy, Greece, Romania and Bulgaria the parasite is highly endemic.

Animals, either as definitive or as intermediate hosts, rarely show symptoms. Humans, on the other hand, can show symptoms, depending on the size and site of the cyst and its growth rate. The disease can become life-threatening if a cyst in the lungs or liver bursts; in that case a possible complication is anaphylactic shock (RIVM(29)).

In the Netherlands, echinococcosis rarely occurs in humans. Between 1978 and 1991, 191 patients were diagnosed, but it is not known how many of these were new cases. The risk of infection is higher in the case of bad hygiene and living closely together with dogs (RIVM(29)). In a study by Fotiou et al (2012), the prevalence of Echinococcus granulosus in humans was 1,1% (Fotiou(31)). The prevalence in dogs is much higher: 10,6% according to Barmettler et al(17).

E. Toxocariasis

Toxocariasis is caused by Toxocara canis or Toxocara cati. Toxocara is present in the intestine of 32% of all tested dogs, 39% of tested cats and 16%-26% of tested red foxes (Luty(32), Letková(33)). In dogs younger than 6 weeks the prevalence can be up to 80% (Kantere(34)), and in kittens of 4-6 months old it can be 64% (Luty(32)). The host becomes infected by swallowing the parasite's embryonated eggs (Kantere(34)).

Dogs and red foxes are the definitive hosts of T. canis, cats of T. cati (Luty(32)); humans are paratenic hosts. After ingestion, the larvae hatch in the intestine and migrate throughout the body via the blood vessels (visceral larva migrans). In young animals the migration occurs via the lungs and trachea; after being swallowed, the larvae mature in the intestinal tract.

In paratenic hosts and adult dogs that have some degree of acquired immunity, the larvae undergo somatic migration and remain as somatic larvae in the tissues. If a dog eats a Toxocara-infected paratenic host, the larvae are released and develop into adult worms in the intestinal tract (MacPherson(30)).

Humans can be infected by oral ingestion of infective eggs from contaminated soil, from unwashed hands or consumption of raw vegetables (MacPherson(30)).

The clinical symptoms in animals depend on the age of the animal and the number, location and developmental stage of the worms. After birth, puppies can suffer from pneumonia because of tracheal migration and die within 2-3 days. From 2-3 weeks after birth, puppies can show emaciation and digestive disturbance because of mature worms in the intestine and stomach; clinical signs are diarrhea, constipation, coughing, nasal discharge and vomiting. Clinical symptoms in adult dogs are rare (MacPherson(30)).

In most human cases following infection by small numbers of larvae, the disease occurs without symptoms. It is mostly children who get infected; VLM is mainly diagnosed in children of 1-7 years old. The symptoms can be general malaise, fever, abdominal complaints, wheezing or coughing. Severe clinical symptoms are mainly found in children of 1-3 years old.

Most of the larvae seem to be distributed to the brain and can cause neurological disease. Larvae do not migrate continuously. They rest periodically, and during such periods they induce an immunologically mediated inflammatory response (MacPherson(30)).

The prevalence in children is much lower than in adults, at 7% and 20% respectively. The risk of infection with Toxocara spp. increases with bad hygiene (Overgaauw(36)). In the external environment the eggs survive for months, and consequently toxocariasis represents a significant public health risk (Kantere(34)). High rates of soil contamination with Toxocara eggs have been demonstrated in parks, playgrounds, sandpits and other public places. Direct contact with infected dogs is not considered a potential risk for human infection, because embryonation to the stage of infectivity requires a minimum of 3 weeks (MacPherson(30)).

F. Toxoplasmosis

Toxoplasmosis is caused by the protozoan Toxoplasma gondii. Cats are the definitive hosts; other animals and humans act as intermediate hosts. Infected cats excrete oocysts in the feces. These oocysts end up in the environment, where they are ingested by intermediate hosts (directly, or indirectly via food or water). In the intermediate host the protozoan migrates until it lodges in tissue, where it is encapsulated and remains. Cats become infected by eating infected intermediate hosts.

Animals rarely show symptoms, although some young cats get diarrhea, encephalitis, hepatitis and pneumonia.

In most humans, infection is asymptomatic. Pregnant women can transmit the protozoan through the placenta and infect the unborn child. The symptoms in the child depend on the stage of pregnancy: an infection in the early stages leads to severe abnormalities and in many cases to abortion, while infection at a later stage can lead to premature birth and symptoms of an infectious disease (fever, rash, icterus, anemia and an enlarged spleen or liver). In most cases, however, the symptoms start after birth; most damage is done to the eyes (RIVM(37)).

Based on data from the RIVM and Overgaauw (1996), the disease most commonly transmitted to humans is toxoplasmosis. The prevalence was 40,5% in 1996. This number has declined over the last few decades: Jones (2009) states that in 2009 the prevalence was 24,6% (Jones(38)). The prevalence rises with age, being 17,5% in humans younger than 20 years and 70% in humans of 65 years and older. There is no increased risk of infection for humans who keep a cat as a pet (RIVM(37)). Birgisdottir et al (2006) studied the prevalence in cats in Sweden, Estonia and Iceland. They found a prevalence of 54,9%, 23% and 9,8% in Estonia, Sweden and Iceland, respectively (Birgisdottir(39)).

G. Q-fever

The aetiological agent of Q-fever is the bacterium Coxiella burnetii. The bacterium has a very wide host range, including ruminants, birds and mammals such as small rodents, dogs, cats and horses. Accordingly, there is a complex reservoir system (Meredith(8)).

The extracellular form of the bacterium is very resistant and can therefore persist in the environment for several weeks. It can also be spread by the wind, so direct contact with animals is not required for infection. Coxiella burnetii is found in both humans and animals in the blood, lungs, spleen and liver, and during pregnancy in large quantities in the placenta and mammary glands. It is shed in urine and feces, and during pregnancy in the milk (Meredith(8)).

Humans that live close to animals (as in the city) have a higher risk of infection, since the mode of transmission is aerogenic or through direct contact. The bacterium is excreted in the urine, feces, placenta or amniotic fluid; after drying, it is spread aerogenically (RIVM(40)). Acute infection is characterized by atypical pneumonia and hepatitis and, in some cases, transient bacteraemia. The bacterium then spreads haematogenously, resulting in infection of the liver, spleen, bone marrow, reproductive tract and other organs. This is followed by the formation of granulomatous lesions in the liver and bone marrow and the development of an endocarditis involving the aortic and mitral valves (Woldehiwet(41)).

By contrast, there is little information about the clinical signs of Q fever in animals, although variable degrees of granulomatous hepatitis, pneumonia, or bronchopneumonia have been reported in mice (Woldehiwet(41)). In pregnant animals, abortion or low foetal birth weight can occur (Meredith(8), Woldehiwet(41)).

The prevalence in the overall human population in Europe is not high (2,7%), but in risk groups such as veterinarians, the prevalence can be as high as 83% (RIVM(40)).

Meredith et al. developed a modified indirect ELISA kit adapted for use in multiple species. They tested the prevalence of C. burnetii in wild rodents (bank vole, field vole and wood mouse), red foxes and domestic cats in the United Kingdom. The overall prevalence in the rodents was 17,3%; in cats it was 61,5% and in foxes 41,2% (Meredith(8)). In rabbits, the prevalence was 32,3% (González-Barrio(42)).

H. Pasteurellosis

Pasteurellosis is caused by Pasteurella multocida, a coccobacillus found in the oral, nasal and respiratory cavities of many species of animals (dogs, cats, rabbits, etc.). It is one of the most prevalent commensal and opportunistic pathogens in domestic and wild animals (Wilson(43), Giordano(44)). Human infections are associated with animal exposure, usually after animal bites or scratches (Giordano(44)). Kissing, or the licking of skin abrasions or mucosal surfaces by animals, can also lead to infection. Transmission between animals is through direct contact with nasal secretions (Wilson(43)).

In both animals and humans, Pasteurella multocida causes chronic or acute infections that can lead to significant morbidity, with symptoms of pneumonia, atrophic rhinitis, cellulitis, abscesses, dermonecrosis, meningitis and/or hemorrhagic septicaemia. In animals the mortality is significant, but not in humans; this is probably due to the immediate prophylactic treatment of animal bite wounds with antibiotics (Wilson(43)).

Disease in animals appears as a chronic infection of the nasal cavity, paranasal sinuses, middle ears, lacrimal and thoracic ducts of the lymph system, and lungs. Primary infection with respiratory viruses or Mycoplasma species predisposes to a Pasteurella infection (Wilson(43)).

The incidence in humans is 0,19 cases per 100.000 inhabitants (Nseir(45)). The prevalence in dogs and cats is 25-42% (Mohan(46)). The only known prevalence in rabbits is 29,8%, found in laboratory animal facilities (Kawamoto(47)).

The majority of the human population lives in cities, and as a result, in some countries the urban landscape encompasses more than half of the land surface. This leaves little space for wildlife species living in the countryside. Some species are nowadays found more in urban areas than in their native environment; they have adapted to urban ecosystems. This is a positive aspect for biodiversity in cities. On the other hand, just like companion animals, urban wildlife can transmit disease to humans (Dearborn(49)). In the following section, significant zoonoses of urban wildlife are described.

A. Zoonoses of rats

The following zoonoses occur in urban rats: leptospirosis (see 2.2B) and rat bite fever.

Rat bite fever

Rat bite fever is caused by Streptobacillus moniliformis or Spirillum minus (Chafe(50)). These bacteria are part of the normal oropharyngeal flora of the rat and are thought to be present in rat populations worldwide.

Since the bacteria are part of the normal flora, rats are not susceptible to them. In people, on the other hand, the bacteria can cause rat bite fever. Transmission occurs through the bite of an infected rat and through ingestion of contaminated food; the latter causes Haverhill fever.

The clinical symptoms are fever, chills, headache, vomiting, polyarthritis and skin rash. In Haverhill fever, pharyngitis and vomiting may be more pronounced. If not treated, S. moniliformis infection can progress to septicemia with a mortality rate of 7-13% (Himsworth(21)).

The prevalence of Streptobacillus spp. in rats is 25% (Gaastra(51)). According to Trucksis et al (2016), rat bite fever is very rare in humans, with only a few cases occurring each year (Trucksis(52)).

B. Zoonoses of mice

The zoonotic diseases that occur in mice are: hantavirus infections, lymphocytic choriomeningitis, tularemia and Q-fever (see 2.2 G).

Hantaviruses

There are different types of hantaviruses, each carried by a specific rodent host species. In Europe, three types occur: Puumala virus (PUUV), carried by the bank vole; Dobrava virus (DOBV), carried by the yellow-necked mouse; and Saaremaa virus (SAAV), carried by the striped field mouse (Heyman(7)). SAAV has been found in Estonia, Russia, South-Eastern Finland, Germany, Denmark, Slovenia and Slovakia. PUUV is very common in Finland, Northern Sweden, Estonia, the Ardennes Forest Region, parts of Germany, Slovenia and parts of European Russia. DOBV has been found in the Balkans, Russia, Germany, Estonia and Slovakia (Heyman(7)).

Hantaviruses are transmitted via direct and indirect contact. Infective particles are secreted in feces, urine and saliva (Kallio(53)).

The disease is asymptomatic in mice (Himsworth(21)). Humans, on the other hand, do develop symptoms. All types of hantavirus cause hemorrhagic fever with renal syndrome (HFRS), but they differ in severity. HFRS is characterized by acute onset, fever, headache, abdominal pains, backache, temporary renal insufficiency and thrombocytopenia. In DOBV infection, the extent of hemorrhages, the requirement for dialysis treatment, hypotension and case-fatality rates are much higher than in PUUV or SAAV infection, for which mortality is very low (approximately 0,1%) (Heyman(7)).

Hantavirus infections are an endemic zoonosis in Europe; tens of thousands of people are infected each year (Heyman(7)). The prevalence in mice is 9,5% (Sadkowska(54)).

Lymphocytic choriomeningitis

Lymphocytic choriomeningitis is a viral disease caused by an arenavirus (Chafe(50)). The natural reservoirs of arenaviruses are rodent species, which are asymptomatically infected (Oldstone(55)).

In humans, the disease is characterized by varying signs, from inapparent infection to acute, fatal meningoencephalitis. Transmission occurs through mouse bites and through material contaminated with the excretions and secretions of infected mice (Chafe(50)).

The virus causes little or no toxicity to the infected cells. The disease, and the associated cell and tissue injury, is caused mostly by the activity of the host's immune system: the antiviral response produces factors that act against the infected cells and damage them. Another factor is the displacement, by viral proteins, of cellular molecules that are normally attached to cellular receptors. This can result in conformational changes, which make the cell membrane fragile and interfere with normal signalling events (Oldstone(55)).

The prevalence of lymphocytic choriomeningitis in humans is 1,1% (Lledó(56)). In mice, the prevalence is 2,4% (Forbes(57)).

Tularemia

Tularemia is caused by the bacterium Francisella tularensis. Only a few animal outbreaks have been reported, and so far only one outbreak in wildlife has been closely monitored (Dobay(10)). The bacterium can infect a large number of animal species. Outbreaks among mammals and humans are rare; however, they can occur when the source of infection is widely spread and/or many people or animals are exposed. Outbreaks are difficult to monitor and trace, because mostly wild rodents and lagomorphs are affected (Dobay(10)).

People become infected in five ways: ingestion, direct contact with a contaminated source, inhalation, arthropod intermediates and animal bites. In animals, the route of transmission is not yet known. The research of Dobay et al (2015) suggests that tularemia can cause severe outbreaks in small rodents such as house mice. Such an outbreak is self-limiting within approximately three months, so no treatment is needed (Dobay(10)).

Tularemia is a potentially lethal disease. There are different clinical manifestations, depending on the route of infection. The ulceroglandular form is the most common and occurs after handling contaminated sources. The oropharyngeal form can be caused by ingestion of contaminated food or water. The pulmonary, typhoidal, glandular and ocular forms occur less frequently (Dobay(10), Anda(58)).

In humans, the symptoms of the glandular and ulceroglandular forms are cervical, occipital, axillary or inguinal lymphadenopathy. The symptoms of pneumonic tularemia are fever, cough and shortness of breath (Weber(59)). Clinical manifestations of the oropharyngeal form include adenopathies at the elbow, the armpit or both, cutaneous lesions, fever, malaise, chills and shivering, a painful sore throat with swollen tonsils, and enlarged cervical lymph nodes (Sahn(60), Anda(58)).

The clinical features in animals are unspecific, and the pathological effects vary substantially between animal species and geographical locations. The disease can be very acute (for example in highly susceptible species like mice), with development of sepsis, liver and spleen enlargement and pinpoint white foci in the affected organs. The subacute form can be found in moderately susceptible species like hares; the symptoms are granulomatous lesions in the lungs, pericardium and kidneys.

Infected animals are usually easy to catch, moribund or even dead (Maurin(61)).

Rossow et al (2015) state that the prevalence in humans is 2% (Rossow(62)). The highest prevalence found in small mammals during an outbreak in Central Europe is 3,9% (Gurycová(63)).

C. Zoonoses of foxes

The zoonoses that can be transmitted from foxes to humans are Q-fever (see 2.2G), toxocariasis (see 2.2E) and infection with Echinococcus multilocularis.

Echinococcus multilocularis

This is considered one of the most serious parasitic zoonoses in Europe. The red fox is the main definitive host. The natural intermediate hosts are voles, but many animals can act as accidental hosts, for example monkeys, humans, pigs and dogs. The larval stage of Echinococcus multilocularis causes alveolar echinococcosis (AE). The infection is widely distributed in foxes, with a prevalence of 70% in some areas. The RIVM states that the prevalence in The Netherlands is 10-13%. The prevalence in humans differs throughout Europe and is related to the prevalence in foxes: if the prevalence in foxes is high, the prevalence in humans increases. However, no prevalence higher than 0,81 per 100.000 inhabitants has been reported (RIVM(29)). Foxes living in urban areas pose a threat to public health, and there is concern that this risk may rise due to the suspected geographical spread of the parasite (Conraths(64)).

In foxes, the helminth colonizes the intestines but does not cause disease. In intermediate and accidental hosts, cysts are formed after oral intake of eggs excreted by foxes, which causes AE. The size, site and growth rate of the larval stage determine the symptoms. Most of the time, infection starts in the liver, causing local lesions. The larvae grow invasively into other organs and blood vessels. It can take five to fifteen years before clear symptoms show (RIVM(29)). In humans, AE is a very rare disease, but its incidence has increased in recent years.

D. Zoonoses of rabbits

The zoonoses that can be transmitted from rabbits to humans are: pasteurellosis (see 2.2H), tularemia (see 2.3B), Q fever (see 2.2G), dermatophytosis (see 2.2C) and cryptosporidiosis.

Cryptosporidiosis

Cryptosporidium is a protozoan and is considered the most important zoonotic pathogen causing diarrhea in humans and animals. In rabbits, Cryptosporidium cuniculus (the rabbit genotype) is the most common genotype (Zhang(65)). Two large studies have been done in rabbits; they showed a prevalence between 0,0% and 0,9% (Robinson(66)).

The public health risks of cryptosporidiosis from wildlife are poorly understood. No studies of the host range and biological features of the Cryptosporidium rabbit genotype were identified. However, human-infectious Cryptosporidium species (including Cryptosporidium parvum) have caused experimental infections in rabbits, and there is some evidence that this occurs naturally (Robinson(66)).

In humans and neonatal animals, the pathogen causes gastroenteritis with chronic or even severe diarrhea (Zhang(65), Robinson(66)). In >98% of these cases, the disease is caused by C. hominis or C. parvum, but recently the rabbit genotype has emerged as a human pathogen. Little is known yet about this genotype, because only a few cases in humans have been reported (Robinson(66)). Since few isolates have been found in humans and little is known about human infection with the Cryptosporidium rabbit genotype, Robinson et al (2008) considered this genotype insignificant to public health, although further investigation is needed (Robinson(67)).

E. Zoonoses of hedgehogs

Hedgehogs pose a risk for a number of potential zoonotic diseases, for example microbial infections like Salmonella spp., Yersinia pseudotuberculosis and Mycobacterium marinum, and dermatophytosis.

Salmonellosis

Salmonellosis is the most important zoonotic disease in hedgehogs. The prevalence of Salmonella in hedgehogs is 18,9%. The infection can be either asymptomatic or symptomatic. Hedgehogs that do show symptoms can display anorexia, diarrhea and weight loss. Humans become infected through ingestion of the bacteria, after handling the hedgehog or after contact with feces (Riley(68)).

The Salmonella serotypes that are associated with hedgehogs are S. tilene and S. typhimurium (Woodward(69), Riley(68)).

Clinical manifestations in humans (mainly adults) of both serotypes involve self-limiting gastroenteritis (including headache, malaise, nausea, fever, vomiting, abdominal pain and diarrhea (Woodward(69))), but bacteraemia and localized and endovascular infections may also occur (Crum Cianflone(70)). Infection with S. typhimurium and S. tilene is rare in humans, at approximately 0,057 cases per 100.000 inhabitants (CDC(71)).

Yersinia pseudotuberculosis

No clinical symptoms of Yersinia pseudotuberculosis infection in hedgehogs are described in the literature. In humans, however, this bacterium causes gastroenteritis, characterized by a self-limiting mesenteric lymphadenitis that mimics appendicitis. Complications can occur, including erythema nodosum and reactive arthritis (Riley(68)). Since only Riley et al (2005) have reported a case concerning Y. pseudotuberculosis, no information is available yet about the prevalence in hedgehogs or humans, or about the route of transmission, although Riley et al (2005) claim that the zoonosis occurs commonly (Riley(68)).

Mycobacterium marinum

Mycobacterium marinum infection is not common in hedgehogs. The bacterium causes systemic mycobacteriosis. Its porte d'entrée is a wound or abrasion in the skin, from which it spreads systemically through the lymphatic system. This is also the way in which hedgehogs transmit the bacterium to humans: the spines of the hedgehog can cause wounds through which the bacterium can enter. Symptoms in humans consist of clusters of papules or superficial nodules, which can be painful (Riley(68)). No information is reported regarding the prevalence of the bacterium in hedgehogs or humans.

Dermatophytosis

Dermatophytosis has been seen in hedgehogs. The most frequently isolated dermatophyte is Trichophyton mentagrophytes var. erinacei; Microsporum spp. have also been reported. Lesions in the hedgehog are similar to those in other species: nonpruritic, dry, scaly skin with bald patches and spine loss. Hedgehogs can also be asymptomatic carriers, which poses a risk of zoonotic transmission (Riley(68)).

In humans, Trichophyton mentagrophytes var. erinacei causes a local rash with pustules at the edges and an intensely irritating, thickened area in the centre of the lesion. This usually resolves spontaneously after 2-3 weeks (Riley(68)).

A few cases of Trichophyton mentagrophytes var. erinacei infection have been reported (Pierard-Franchimont(72), Schauder(73), Keymer(74)), but no prevalence is known for humans or hedgehogs.

F. Zoonoses of bats

According to Calisher et al (2009), bat viruses that are proven to cause highly pathogenic disease in humans are the rabies virus and related lyssaviruses, Nipah and Hendra viruses, and SARS-CoV-like viruses (Calisher(75)). Only the first group is relevant for this review, since Nipah and Hendra do not occur in Europe (Munir(76)) and SARS is not directly transmitted to humans (Hu(77)).

Rabies virus and related lyssaviruses

The rabies virus is present in the saliva of infected animals. Accordingly, the virus is transmitted from mammals to humans through a bite (Calisher(75)).

Symptoms are similar in animals and humans. The disease starts with a prodromal stage, in which symptoms are non-specific and consist of fever, itching and pain near the site of the bite wound.

The furious stage follows. Clinical features are hydrophobia (violent inspiratory muscle spasms, hyperextension and anxiety after attempts to drink), hallucinations, fear, aggression, cardiac tachyarrhythmias, paralysis and coma.

The final stage is the paralytic stage. It is characterized by ascending paralysis and loss of tendon reflexes, sphincter dysfunction, bulbar/respiratory paralysis, sensory symptoms, fever, sweating, gooseflesh and fasciculation.

Untreated, the disease is fatal within approximately five days after the first symptoms appear (Warrell(78)).

Lyssaviruses from bats are related to the rabies virus. There are seven lyssavirus genotypes. Some of these cause disease in humans similar to rabies; others do not cause disease. Although it is still unclear, transmission is thought to be through bites (Calisher(75)).

Since 1977, four cases of human rabies resulting from a bat bite have been reported in The Netherlands. In bats living there, the prevalence is 7% (RIVM).


Sickle-cell conditions

NORMAL HEMOGLOBIN STRUCTURE:

Hemoglobin is present in erythrocytes and is important for normal oxygen delivery to tissues. Hemoglobinopathies are disorders affecting the structure, function or production of hemoglobin.

Different hemoglobins are produced during embryonic, fetal and adult life. Each consists of a tetramer of globin polypeptide chains: a pair of α-like chains 141 amino acids long and a pair of β-like chains 146 amino acids long. The major adult hemoglobin, HbA, has the structure α2β2. HbF (α2γ2) predominates during most of gestation, and HbA2 (α2δ2) is the minor adult hemoglobin.

Each globin chain surrounds a single heme moiety, consisting of a protoporphyrin IX ring complexed with a single iron atom in the ferrous state (Fe2+). Each heme moiety can bind a single oxygen molecule; a molecule of hemoglobin can transport up to four oxygen molecules as each hemoglobin contains four heme moieties.

The amino acid sequences of various globins are highly homologous to one another, and each has a highly helical secondary structure. Their globular tertiary structures cause the exterior surfaces to be rich in polar (hydrophilic) amino acids that enhance solubility, while the interior is lined with nonpolar groups, forming a hydrophobic pocket into which heme is inserted. Numerous tight interactions (i.e., α1β1 contacts) hold the α and β chains together. The complete tetramer is held together by interfaces (i.e., α1β2 contacts) between the α-like chain of one dimer and the non-α chain of the other dimer. The hemoglobin tetramer is highly soluble, but individual globin chains are insoluble. (Unpaired globin precipitates, forming inclusions that damage the cell and can trigger apoptosis. Normal globin chain synthesis is balanced so that each newly synthesized α or non-α globin chain will have an available partner with which to pair.)

FUNCTION OF HEMOGLOBIN:

Solubility and reversible oxygen binding are the two important functions that are deranged in hemoglobinopathies. Both depend mostly on the hydrophilic surface amino acids, the hydrophobic amino acids lining the heme pocket, a key histidine in the F helix, and the amino acids forming the α1β1 and α1β2 contact points. Mutations in these strategic regions alter oxygen affinity or solubility.

The principal function of Hb is the transport and delivery of oxygen to the tissues, which is represented most appropriately by the oxygen dissociation curve (ODC).

Fig: The well-known sigmoid shape of the oxygen dissociation curve (ODC), which reflects the allosteric properties of haemoglobin.

Hemoglobin binds O2 efficiently at the partial pressure of oxygen (Po2) of the alveolus, retains it in the circulation and releases it to the tissues at the Po2 of tissue capillary beds. The shape of the curve is due to co-operativity between the four haem groups: when one takes up oxygen, the affinity for oxygen of the remaining haems of the tetramer increases dramatically. This is because haemoglobin can exist in two configurations, deoxy (T) and oxy (R); the T form has a lower affinity than the R form for ligands such as oxygen.

Oxygen affinity is controlled by several factors. The Bohr effect (e.g. oxygen affinity is decreased with increasing CO2 tension) is the ability of hemoglobin to deliver more oxygen to tissues at low pH. The major small molecule that alters oxygen affinity in humans is 2,3-bisphosphoglycerate (2,3-BPG; formerly 2,3-DPG), which lowers oxygen affinity when bound to hemoglobin. HbA has a reasonably high affinity for 2,3-BPG. HbF does not bind 2,3-BPG, so it tends to have a higher oxygen affinity in vivo. Increased levels of 2,3-BPG, with an associated increase in P50 (the partial pressure at which haemoglobin is 50 per cent saturated), occur in anaemia, alkalosis, hyperphosphataemia, hypoxic states and in association with a number of red cell enzyme deficiencies.
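This behaviour can be summarized quantitatively by the Hill equation, a standard approximation of the ODC (the parameter values here are assumed typical figures for normal adult blood, not taken from this text): fractional saturation S = Po2^n / (Po2^n + P50^n), with Hill coefficient n ≈ 2.7 and P50 ≈ 26 mmHg. At an alveolar Po2 of about 100 mmHg this gives S ≈ 0.97, while at a tissue Po2 of about 40 mmHg it gives S ≈ 0.76, so roughly a fifth of the bound oxygen is unloaded across the physiological range. A rise in P50 (as with increased 2,3-BPG) shifts the curve to the right and unloads more oxygen at the same tissue Po2.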

Thus proper oxygen transport depends on the tetrameric structure of the proteins, the proper arrangement of hydrophilic and hydrophobic amino acids and interaction with protons or 2,3-BPG.

GENETICS OF HEMOGLOBIN:

The human hemoglobins are encoded in two tightly linked gene clusters; the α-like globin genes are clustered on chromosome 16, and the β-like genes on chromosome 11. The α-like cluster consists of two α-globin genes and a single copy of the ζ gene. The non-α gene cluster consists of a single ε gene, the Gγ and Aγ fetal globin genes, and the adult δ and β genes.

DEVELOPMENTAL BIOLOGY OF HUMAN HEMOGLOBINS:

Red cells first appearing at about 6 weeks after conception contain the embryonic hemoglobins Hb Portland (ζ2γ2), Hb Gower I (ζ2ε2) and Hb Gower II (α2ε2). At 10-11 weeks, fetal hemoglobin (HbF; α2γ2) becomes predominant, and synthesis of adult hemoglobin (HbA; α2β2) begins at about 38 weeks. Fetuses and newborns therefore require α-globin but not β-globin for normal gestation. Small amounts of HbF are produced during postnatal life. A few red cell clones called F cells are progeny of a small pool of immature committed erythroid precursors (BFU-e) that retain the ability to produce HbF. Profound erythroid stresses, such as severe hemolytic anemias, bone marrow transplantation, or cancer chemotherapy, cause more of the F-potent BFU-e to be recruited. HbF levels thus tend to rise in some patients with sickle cell anemia or thalassemia. This phenomenon probably explains the ability of hydroxyurea to increase levels of HbF in adults; agents such as butyrate and histone deacetylase inhibitors can also partially activate fetal globin genes after birth.

HEMOGLOBINOPATHIES:

Hemoglobinopathies are disorders affecting the structure, function or production of hemoglobin. These conditions are usually inherited and range in severity from asymptomatic laboratory abnormalities to death in utero. Different forms may present as hemolytic anemia, erythrocytosis, cyanosis or vaso-occlusive stigmata.

Structural hemoglobinopathies occur when mutations alter the amino acid sequence of a globin chain, altering the physiologic properties of the variant hemoglobins and producing the characteristic clinical abnormalities. The most clinically relevant variant hemoglobins polymerize abnormally as in sickle cell anemia or exhibit altered solubility or oxygen-binding affinity.

Thalassemia syndromes arise from mutations that impair the production or translation of globin mRNA, leading to deficient globin chain biosynthesis. Clinical abnormalities are attributable to the inadequate supply of hemoglobin and to imbalances in the production of the individual globin chains, leading to premature destruction of erythroblasts and RBCs. Thalassemic hemoglobin variants combine features of thalassemia (e.g., abnormal globin biosynthesis) and of structural hemoglobinopathies (e.g., an abnormal amino acid sequence).

Hereditary persistence of fetal hemoglobin (HPFH) is characterized by synthesis of high levels of fetal hemoglobin in adult life. Acquired hemoglobinopathies include modifications of the hemoglobin molecule by toxins (e.g., acquired methemoglobinemia) and clonal abnormalities of hemoglobin synthesis (e.g., high levels of HbF production in preleukemia and α thalassemia in myeloproliferative disorders).

There are five major classes of hemoglobinopathies.

Classification of hemoglobinopathies:

1. Structural hemoglobinopathies: hemoglobins with altered amino acid sequences that result in deranged function or altered physical or chemical properties

   A. Abnormal hemoglobin polymerization: HbS, hemoglobin sickling

   B. Altered O2 affinity

      1. High affinity: polycythemia

      2. Low affinity: cyanosis, pseudoanemia

   C. Hemoglobins that oxidize readily

      1. Unstable hemoglobins: hemolytic anemia, jaundice

      2. M hemoglobins: methemoglobinemia, cyanosis

2. Thalassemias: defective biosynthesis of globin chains

   A. α Thalassemias

   B. β Thalassemias

   C. δ, δβ and γδβ Thalassemias

3. Thalassemic hemoglobin variants: structurally abnormal Hb associated with a coinherited thalassemic phenotype

   A. HbE

   B. Hb Constant Spring

   C. Hb Lepore

4. Hereditary persistence of fetal hemoglobin: persistence of high levels of HbF into adult life

5. Acquired hemoglobinopathies

   A. Methemoglobin due to toxic exposures

   B. Sulfhemoglobin due to toxic exposures

   C. Carboxyhemoglobin

   D. HbH in erythroleukemia

   E. Elevated HbF in states of erythroid stress and bone marrow dysplasia


GENETICS OF SICKLE HEMOGLOBINOPATHY:

This genetic disorder is due to the mutation of a single nucleotide, from a GAG to a GTG codon on the coding strand, which is transcribed from the template strand into a GUG codon. In the genetic code, GAG translates to glutamic acid, whereas GUG translates to valine, so the mutation places valine at position 6 of the β chain. This is normally a benign mutation, causing no apparent effects on the secondary, tertiary, or quaternary structure of hemoglobin under conditions of normal oxygen concentration. Under conditions of low oxygen concentration, however, the deoxy form of hemoglobin exposes a hydrophobic patch on the protein between the E and F helices. The hydrophobic side chain of the valine residue at position 6 of the β chain is able to associate with this hydrophobic patch, causing hemoglobin S molecules to aggregate and form fibrous precipitates. HbS also exhibits changes in solubility and molecular stability.
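As a minimal illustration of this codon change, the following Python sketch (illustrative only; the two-entry codon table is an assumption sufficient for this example, not a full genetic code) traces the coding-strand DNA through mRNA to the amino acid:

    # beta-globin codon 6 on the DNA coding strand
    normal, sickle = "GAG", "GTG"

    def to_mrna(codon):
        # transcription yields an mRNA copy of the coding strand, with T replaced by U
        return codon.replace("T", "U")

    # partial codon table: only the two codons needed for this example
    codon_table = {"GAG": "Glu (glutamic acid, hydrophilic)",
                   "GUG": "Val (valine, hydrophobic)"}

    print(codon_table[to_mrna(normal)])  # Glu (glutamic acid, hydrophilic)
    print(codon_table[to_mrna(sickle)])  # Val (valine, hydrophobic)

The single A-to-T substitution thus swaps a hydrophilic surface residue for a hydrophobic one, which is what permits the aggregation described above.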

These properties are responsible for the profound clinical expressions of the sickling syndromes.

HbSS disease or sickle cell anemia (the most common form) – Homozygote for the S globin, usually with a severe or moderately severe phenotype and the shortest survival
HbS/β0 thalassemia – Double heterozygote for HbS and β0 thalassemia; clinically indistinguishable from sickle cell anemia (SCA)
HbS/β+ thalassemia – Mild-to-moderate severity with variability in different ethnicities
HbSC disease – Double heterozygote for HbS and HbC, characterized by moderate clinical severity
HbS/hereditary persistence of fetal Hb (S/HPFH) – Very mild or asymptomatic phenotype
HbS/HbE syndrome – Very rare, with a phenotype usually similar to HbS/β+ thalassemia
Rare combinations of HbS with other abnormal hemoglobins, such as HbD Los Angeles, G-Philadelphia and HbO Arab

Sickle-cell conditions have an autosomal recessive pattern of inheritance. The types of hemoglobin a person makes in the red blood cells depend on which hemoglobin genes are inherited from his or her parents. If one parent has sickle-cell anaemia and the other has sickle-cell trait, the child has a 50% chance of having sickle-cell disease and a 50% chance of having sickle-cell trait. When both parents have sickle-cell trait, a child has a 25% chance of sickle-cell disease, a 25% chance of carrying no sickle-cell allele, and a 50% chance of being a heterozygous carrier.
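The percentages above follow from enumerating the four equally likely allele combinations, as in this small Python sketch (illustrative; 'A' denotes the normal allele and 'S' the sickle allele):

    from itertools import product
    from collections import Counter

    # each parent with sickle-cell trait (genotype AS) passes on A or S with equal chance
    counts = Counter("".join(sorted(pair)) for pair in product("AS", "AS"))
    for genotype, n in sorted(counts.items()):
        print(genotype, n / 4)
    # prints: AA 0.25 (non-carrier), AS 0.5 (trait), SS 0.25 (disease)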

The allele responsible for sickle-cell anemia can be found on the short arm of chromosome 11, more specifically at 11p15.5. A person who receives the defective gene from both father and mother develops the disease; a person who receives one defective and one healthy allele remains healthy, but can pass on the disease and is known as a carrier or heterozygote. Several sickle syndromes occur as the result of inheritance of HbS from one parent and another hemoglobinopathy, such as β thalassemia or HbC (α2β2 6 Glu→Lys), from the other parent. The prototype disease, sickle cell anemia, is the homozygous state for HbS.

PATHOPHYSIOLOGY:

The sickle cell syndromes are caused by a mutation in the β-globin gene that changes the sixth amino acid from glutamic acid to valine. HbS (α2β2 6 Glu→Val) polymerizes reversibly when deoxygenated to form a gelatinous network of fibrous polymers that stiffen the RBC membrane, increase viscosity, and cause dehydration due to potassium leakage and calcium influx. These changes also produce the sickle shape. The loss of red blood cell elasticity is central to the pathophysiology of sickle-cell disease. Sickled cells lose the flexibility needed to traverse small capillaries. They possess altered 'sticky' membranes that are abnormally adherent to the endothelium of small venules.

Repeated episodes of sickling damage the cell membrane and decrease the cell’s elasticity. These cells fail to return to normal shape when normal oxygen tension is restored. As a consequence, these rigid blood cells are unable to deform as they pass through narrow capillaries, leading to vessel occlusion and ischaemia.

These abnormalities stimulate unpredictable episodes of microvascular vasoocclusion and premature RBC destruction (hemolytic anemia). The rigid, adherent cells clog small capillaries and venules, causing tissue ischemia, acute pain, and gradual end-organ damage. This vasoocclusive component usually dominates the clinical course.

The actual anaemia of the illness is caused by hemolysis: the spleen detects the altered shape of the red cells and destroys them. Although the bone marrow attempts to compensate by creating new red cells, it cannot match the rate of destruction. Healthy red blood cells typically function for 90-120 days, but sickled cells last only 10-20 days.
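A rough steady-state illustration (with assumed round numbers, not figures from this text): the circulating red cell mass is proportional to the production rate multiplied by the cell lifespan, so a fall in lifespan from about 120 days to about 15 days would require roughly an eight-fold increase in production just to keep the red cell mass constant. The marrow's compensatory capacity falls well short of this, and anemia is the result.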

Clinical Manifestations of Sickle Cell Anemia:

Patients with sickling syndromes suffer from hemolytic anemia, with hematocrits from 15 to 30%, and significant reticulocytosis. Anemia was once thought to exert protective effects against vasoocclusion by reducing blood viscosity. The role of adhesive reticulocytes in vasoocclusion might account for these paradoxical effects.

Granulocytosis is common. The white count can fluctuate substantially and unpredictably during and between painful crises, infectious episodes, and other intercurrent illnesses.

Vasoocclusion causes protean manifestations, including episodes of ischemic pain (i.e., painful crises) and ischemic malfunction or frank infarction in the spleen, central nervous system, bones, joints, liver, kidneys and lungs.

Syndromes caused by sickle hemoglobinopathy:

Painful crises: Intermittent episodes of vasoocclusion in connective and musculoskeletal structures produce ischemia manifested by acute pain and tenderness, fever, tachycardia and anxiety. These episodes are recurrent and are the most common clinical manifestation of sickle cell anemia. Their frequency and severity vary greatly. Pain can develop almost anywhere in the body and may last from a few hours to 2 weeks.

Repeated crises requiring hospitalization (>3 episodes per year) correlate with reduced survival in adult life, suggesting that these episodes are associated with accumulation of chronic end-organ damage. Provocative factors include infection, fever, excessive exercise, anxiety, abrupt changes in temperature, hypoxia, or hypertonic dyes.

Acute chest syndrome: A distinctive manifestation characterized by chest pain, tachypnea, fever, cough, and arterial oxygen desaturation. It can mimic pneumonia, pulmonary emboli, bone marrow infarction and embolism, myocardial ischemia, or lung infarction. Acute chest syndrome is thought to reflect in situ sickling within the lung, producing pain and temporary pulmonary dysfunction. Pulmonary infarction and pneumonia are the most common underlying or concomitant conditions in patients with this syndrome. Repeated episodes of acute chest pain correlate with reduced survival. Acutely, a reduction in arterial oxygen saturation is especially ominous because it promotes sickling on a massive scale. Recurrent acute or subacute pulmonary crises lead to pulmonary hypertension and cor pulmonale, an increasingly common cause of death in patients.

Aplastic crisis: A serious complication is the aplastic crisis, caused by infection with Parvovirus B-19 (B19V). This virus causes fifth disease, a normally benign childhood disorder associated with fever, malaise, and a mild rash. The virus infects RBC progenitors in bone marrow, resulting in impaired cell division for a few days. Healthy people experience, at most, a slight drop in hematocrit, since the half-life of normal erythrocytes in the circulation is 40-60 days. In people with SCD, however, the RBC lifespan is greatly shortened (usually 10-20 days), and a very rapid drop in Hb occurs. The condition is self-limited, with bone marrow recovery occurring in 7-10 days, followed by brisk reticulocytosis.
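A simple calculation shows why the same insult has such different effects (illustrative numbers): if the virus halts red cell production for about 5 days, the fraction of circulating cells lost is roughly those 5 days divided by the red cell survival time, i.e. about 5/100 ≈ 5% with normal survival, versus 5/15 ≈ 33% with a 10-20 day lifespan, hence the precipitous fall in Hb in SCD.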

CNS sickle vasculopathy: Chronic subacute central nervous system damage in the absence of an overt stroke is a distressingly common phenomenon beginning in early childhood. Stroke affects 30% of children and 11% of patients by 20 years of age; it is especially common in children, in whom it is usually ischemic and may recur, whereas in adults it is less common and more often hemorrhagic.

Modern functional imaging techniques have indicated circulatory dysfunction of the CNS; these changes correlate with cognitive and behavioral abnormalities in children and young adults. It is important to be aware of these changes because they can complicate clinical management or be misinterpreted as 'difficult patient' behaviors.

Splenic sequestration crisis: The spleen enlarges in the latter part of the first year of life in children with SCD. Occasionally, the spleen undergoes a sudden, very painful enlargement due to pooling of large numbers of sickled cells. This phenomenon is known as splenic sequestration crisis. Over time, the spleen becomes fibrotic and shrinks, causing autosplenectomy. In HbSC disease, splenomegaly may persist into adulthood due to ongoing hemolysis under the influence of persistent fetal hemoglobin.

Acute venous obstruction of the spleen, a rare occurrence in early childhood, may require emergency transfusion and/or splenectomy to prevent trapping of the entire arterial output in the obstructed spleen. Repeated microinfarction can destroy tissues having microvascular beds; thus, splenic function is frequently lost within the first 18-36 months of life, causing susceptibility to infection, particularly by pneumococci.

Infections: Life-threatening bacterial infections are a major cause of morbidity and mortality in patients with SCD. Recurrent vaso-occlusion induces splenic infarctions and consequent autosplenectomy, predisposing to severe infections with encapsulated organisms (e.g., Haemophilus influenzae, Streptococcus pneumoniae).

Cholelithiasis: Cholelithiasis is common in children with SCD, as chronic hemolysis with hyperbilirubinemia is associated with the formation of bile stones. Cholelithiasis may be asymptomatic or result in acute cholecystitis, requiring surgical intervention. The liver may also become involved, and cholecystitis or common bile duct obstruction can occur. A child with cholecystitis presents with right upper quadrant pain, especially after fatty food. Common bile duct blockage is suspected when a child presents with right upper quadrant pain and dramatically elevated conjugated hyperbilirubinemia.

Leg ulcers: Leg ulcers are a chronic painful problem. They result from minor injury to the area around the malleoli. Because of relatively poor circulation, compounded by sickling and microinfarcts, healing is delayed and infection occurs frequently.

Eye manifestation: Occlusion of retinal vessels can produce hemorrhage, neovascularization, and eventual retinal detachment.

Renal manifestation: Renal manifestations include impaired urinary concentrating ability, defects of urinary acidification, defects of potassium excretion, and a progressive decrease in glomerular filtration rate with advancing age. Recurrent hematuria, proteinuria, renal papillary necrosis and end-stage renal disease (ESRD) are all well recognized.

Renal papillary necrosis invariably produces isosthenuria. More widespread renal necrosis leads to renal failure in adults, a common late cause of death.

Bone manifestation: Bone and joint ischemia can lead to aseptic necrosis, common in the femoral or humeral heads; chronic arthropathy; and unusual susceptibility to osteomyelitis, which may be caused by organisms, such as Salmonella, rarely encountered in other settings.

The hand-foot syndrome (dactylitis) is caused by painful infarcts of the digits.

Pregnancy in SCD: Pregnancy represents a special area of concern. There is a high rate of fetal loss due to spontaneous abortion. Placenta previa and abruption are common due to hypoxia and placental infarction. At birth, the infant often is premature or has low birth weight.

Other features: A particularly painful complication in males is priapism, due to infarction of the penile venous outflow tracts; permanent impotence may also occur. Chronic lower leg ulcers probably arise from ischemia and superinfection in the distal circulation.

Sickle cell syndromes are remarkable for their clinical heterogeneity. Some patients remain virtually asymptomatic into or even through adult life, while others suffer repeated crises requiring hospitalization from early childhood. Patients with sickle thalassemia and sickle-HbE tend to have similar, slightly milder symptoms, perhaps because of the ameliorating effects of production of other hemoglobins within the RBC.

Clinical Manifestations of Sickle Cell Trait:

Sickle cell trait is often asymptomatic. Anemia and painful crises are rare. An uncommon but highly distinctive symptom is painless hematuria, often occurring in adolescent males, probably due to papillary necrosis. Isosthenuria is a more common manifestation of the same process. Sloughing of papillae with ureteral obstruction has also been seen, as have isolated cases of massive sickling or sudden death due to exposure to high altitudes or extremes of exercise and dehydration.

Pulmonary hypertension in sickle hemoglobinopathy:

In recent years, PAH, a proliferative vascular disease of the lung, has been recognized as a major complication and an independent correlate of death among adults with SCD. Pulmonary hypertension is defined as a mean pulmonary artery pressure >25 mmHg, and includes pulmonary artery hypertension, pulmonary venous hypertension or a combination of both. The etiology is multifactorial, including hemolysis, hypoxemia, thromboembolism, chronically elevated cardiac output, and chronic liver disease. The clinical presentation is characterized by symptoms of dyspnea, chest pain, and syncope. It is important to note that a high cardiac output can itself elevate pulmonary artery pressure, adding to the complex and multifactorial pathophysiology of PHT in sickle cell disease. If left untreated, the disease carries a high mortality rate, with the most common cause of death being decompensated right heart failure.

Prevalence and prognosis:

Echocardiographic screening studies have suggested that the prevalence of hemoglobinopathy-associated PAH is much higher than previously known. In SCD, approximately one-third of adult patients have an elevated tricuspid regurgitant jet velocity (TRV) of 2.5 m/s or higher, a threshold that corresponds in right heart catheterization studies to a pulmonary artery systolic pressure of at least 30 mm Hg. Even though this threshold represents quite mild pulmonary hypertension, SCD patients with a TRV above it have a 9- to 10-fold higher risk of early mortality than those with a lower TRV. It appears that the baseline compromised oxygen delivery and co-morbid organ dysfunction of SCD diminish the physiological reserve needed to tolerate even modestly elevated pulmonary arterial pressures.
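The conversion from TRV to pressure uses the modified Bernoulli equation, standard in Doppler echocardiography (the right atrial pressure figure below is an assumed typical value, not taken from this text): the pressure gradient across the tricuspid valve is approximately 4 x TRV^2, so a TRV of 2.5 m/s gives 4 x 2.5^2 = 25 mmHg; adding an assumed right atrial pressure of about 5 mmHg yields a pulmonary artery systolic pressure of approximately 30 mmHg, the figure quoted above.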

Pathogenesis:

Different hemolytic anemias seem to involve common mechanisms for development of PAH. These processes probably include hemolysis, causing endothelial dysfunction, oxidative and inflammatory stress, chronic hypoxemia, chronic thromboembolism, chronic liver disease, iron overload, and asplenia.

Hemolysis results in the release of hemoglobin into plasma, where it reacts with and consumes nitric oxide (NO), causing a state of resistance to NO-dependent vasodilatory effects. Hemolysis also causes the release of arginase into plasma, which decreases the concentration of arginine, the substrate for the synthesis of NO. Other effects associated with hemolysis that can contribute to the pathogenesis of pulmonary hypertension are increased cellular expression of endothelin, production of free radicals, platelet activation, and increased expression of endothelial adhesion-mediating molecules.

Previous studies suggest that splenectomy (surgical or functional) is a risk factor for the development of pulmonary hypertension, especially in patients with hemolytic anemias. It is speculated that the loss of the spleen increases the circulation of platelet mediators and senescent erythrocytes that result in platelet activation (promoting endothelial adhesion and thrombosis in the pulmonary vascular bed), and possibly stimulates the increase in the intravascular hemolysis rate.

Vasoconstriction, vascular proliferation, thrombosis, and inflammation appear to underlie the development of PAH. In long-standing PH, intimal proliferation and fibrosis, medial hypertrophy, and in situ thrombosis characterize the pathologic findings in the pulmonary vasculature. Vascular remodeling at earlier stages may be confined to the small pulmonary arteries. As the disease advances, intimal proliferation and pathologic remodeling progress, resulting in decreased compliance and increased elastance of the pulmonary vasculature.

The outcome is a progressive increase in the right ventricular afterload or total pulmonary vascular resistance (PVR) and, thus, right ventricular work.

Chronic pulmonary involvement due to repeated episodes of acute chest syndrome can lead to pulmonary fibrosis and chronic hypoxemia, which can eventually lead to the development of pulmonary hypertension.

Coagulation disorders, such as low levels of protein C, low levels of protein S, high levels of D-dimers and increased activity of tissue factor, occur in patients with sickle cell anemia. This hypercoagulable state can cause thrombosis in situ or pulmonary thromboembolism, which occurs in patients with sickle cell anemia and other hemolytic anemias.

Clinical manifestations:

On examination, there may be evidence of right ventricular failure with elevated jugular venous pressure, lower extremity edema, and ascites. The cardiovascular examination may reveal an accentuated P2 component of the second heart sound, a right-sided S3 or S4, and a holosystolic tricuspid regurgitant murmur. It is also important to seek signs of the diseases that are often concurrent with PH: clubbing may be seen in some chronic lung diseases, sclerodactyly and telangiectasia may signify scleroderma, and crackles and systemic hypertension may be clues to left-sided systolic or diastolic heart failure.

Diagnostic evaluation:

The diagnosis of pulmonary hypertension in patients with sickle cell anemia is typically difficult. Dyspnea on exertion, the symptom most typically associated with pulmonary hypertension, is also very common in anemic patients. Other disorders with similar symptomatology, such as left heart failure or pulmonary fibrosis, frequently occur in patients with sickle cell anemia. Patients with pulmonary hypertension are often older, have higher systemic blood pressure, more severe hemolytic anemia, lower peripheral oxygen saturation, worse renal function, impaired liver function and a higher number of red blood cell transfusions than do patients with sickle cell anemia and normal pulmonary pressure.

The diagnostic evaluation of patients with hemoglobinopathies and suspected of having pulmonary hypertension should follow the same guidelines established for the investigation of patients with other causes of pulmonary hypertension.

Echocardiography: Echocardiography is important for the diagnosis of PAH and often essential for determining the cause. All forms of PAH may demonstrate a hypertrophied and dilated right ventricle with elevated estimated pulmonary artery systolic pressure. Important additional information can be obtained about specific etiologies such as valvular disease, left ventricular systolic and diastolic function, intracardiac shunts, and other cardiac diseases.

An echocardiogram is a screening test, whereas invasive hemodynamic monitoring is the gold standard for diagnosis and assessment of disease severity.

Pulmonary artery (PA) systolic pressure (PASP) can be estimated by Doppler echocardiography, utilizing the tricuspid regurgitant velocity (TRV). Increased TRV is estimated to be present in approximately one-third of adults with SCD and is associated with early mortality. In the more severe cases, increased TRV is associated with histopathologic changes similar to atherosclerosis such as plexogenic changes and hyperplasia of the pulmonary arterial intima and media.

The cardiopulmonary exercise test (CPET): This test may help to identify a true physiologic limitation as well as differentiate between cardiac and pulmonary causes of dyspnea, but it can only be performed if the patient has reasonable functional capacity. If this test is normal, there is no indication for a right heart catheterization.

Right Heart Catheterization: If the patient has a cardiovascular limitation to exercise, a right heart catheterization should be performed. Right heart catheterization with pulmonary vasodilator testing remains the gold standard both to establish the diagnosis of PH and to enable selection of appropriate medical therapy. The definition of precapillary PH or PAH requires (1) an increased mean pulmonary artery pressure (mPAP ≥25 mmHg); (2) a pulmonary capillary wedge pressure (PCWP), left atrial pressure, or left ventricular end-diastolic pressure ≤15 mmHg; and (3) PVR >3 Wood units. Postcapillary PH is differentiated from precapillary PH by a PCWP >15 mmHg; it is further differentiated into passive, based on a transpulmonary gradient <12 mmHg, or reactive, based on a transpulmonary gradient >12 mmHg and an increased PVR. In either case, the CO may be normal or reduced. If the echocardiogram or cardiopulmonary exercise test (CPET) suggests PH, the diagnosis is confirmed by catheterization.
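As a worked illustration of these definitions (the hemodynamic values are assumed for the example, not taken from this text): PVR in Wood units = (mPAP - PCWP) / CO. With mPAP = 30 mmHg, PCWP = 10 mmHg and CO = 5 L/min, the transpulmonary gradient is 30 - 10 = 20 mmHg and PVR = 20 / 5 = 4 Wood units; since mPAP ≥25 mmHg, PCWP ≤15 mmHg and PVR >3 Wood units, this profile meets the precapillary PH definition.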

Chest imaging and lung function tests: These are essential because lung disease is an important cause of PH. Signs of PH that may be evident on chest x-ray include enlargement of the central pulmonary arteries associated with 'vascular pruning,' a relative paucity of peripheral vessels. Cardiomegaly, with specific evidence of right atrial and ventricular enlargement, may be present. The chest x-ray may also demonstrate significant interstitial lung disease or suggest hyperinflation from obstructive lung disease, which may be the underlying cause of or a contributor to the development of PH.

High-resolution computed tomography (CT): Classic findings of PH on CT include those found on chest x-ray: enlarged pulmonary arteries, peripheral pruning of the small vessels, and enlarged right ventricle and atrium. High-resolution CT may also show signs of venous congestion including centrilobular ground-glass infiltrate and thickened septal lines. In the absence of left heart disease, these findings suggest pulmonary veno-occlusive disease, a rare cause of PAH that can be quite challenging to diagnose.

CT angiograms: Commonly used to evaluate acute thromboembolic disease and have demonstrated excellent sensitivity and specificity for that purpose.

Ventilation-perfusion scanning: Performed for screening because of its high sensitivity and its role in qualifying patients for surgical intervention. A negative scan virtually rules out chronic thromboembolic pulmonary hypertension (CTEPH), whereas some cases may be missed through the use of CT angiograms.

Pulmonary function tests: An isolated reduction in DLco is the classic finding in PAH; the results of pulmonary function tests may also suggest restrictive or obstructive lung diseases as the cause of dyspnea or PH.

Evaluation of symptoms and functional capacity (6 Min walk test): Although the 6-minute walk test has not been validated in patients with hemoglobinopathies, preliminary data suggest that this test correlates well with maximal oxygen uptake and with the severity of pulmonary hypertension in patients with sickle cell anemia. In addition, in these patients, the distance covered on the 6-minute walk test significantly improves with the treatment of pulmonary hypertension, which suggests that it can be used in this population.

DYSLIPIDEMIA IN SICKLE HEMOGLOBINOPATHY:

Disorders of lipoprotein metabolism are known as 'dyslipidemias.' Dyslipidemias are generally characterized clinically by increased plasma levels of cholesterol, triglycerides, or both, accompanied by reduced levels of HDL cholesterol. Most patients with dyslipidemia are at increased risk for atherosclerotic cardiovascular disease (ASCVD), which is the primary reason for making the diagnosis, as intervention may reduce this risk. Patients with elevated levels of triglycerides may be at risk for acute pancreatitis and require intervention to reduce this risk.

Although hundreds of proteins affect lipoprotein metabolism and may interact to produce dyslipidemia in an individual patient, there are a limited number of discrete 'nodes' that regulate lipoprotein metabolism. These include:

(1) assembly and secretion of triglyceride-rich VLDLs by the liver;

(2) lipolysis of triglyceride-rich lipoproteins by LPL;

(3) receptor-mediated uptake of apoB-containing lipoproteins by the liver;

(4) cellular cholesterol metabolism in the hepatocyte and the enterocyte; and

(5) neutral lipid transfer and phospholipid hydrolysis in the plasma.

Hypocholesterolemia and, to a lesser extent, hypertriglyceridemia have been documented in SCD cohorts worldwide for over 40 years, yet the mechanistic basis and physiological ramifications of these altered lipid levels have yet to be fully elucidated. Cholesterol (TC, HDL-C and LDL-C) levels decrease and triglyceride levels increase in relation to the severity of anemia. While this is not true for cholesterol levels, triglyceride levels show a strong correlation with markers of the severity of hemolysis, endothelial activation, and pulmonary hypertension.

Decreased TC and LDL-C in SCD have been documented in virtually every study that examined lipids in SCD adults (el-Hazmi, et al 1987, el-Hazmi, et al 1995, Marzouki and Khoja 2003, Sasaki, et al 1983, Shores, et al 2003, Stone, et al 1990, Westerman 1975), with slightly more variable results in SCD children. Although it might be hypothesized that SCD hypocholesterolemia results from increased cholesterol utilization during the increased erythropoiesis of SCD, cholesterol is largely conserved through the enterohepatic circulation, at least in healthy individuals, and biogenesis of new RBC membranes would likely use recycled cholesterol from the hemolyzed RBCs. Westerman demonstrated that hypocholesterolemia is not due merely to increased RBC synthesis by showing that it is present in both hemolytic and non-hemolytic anemia (Westerman 1975). He also reported that serum cholesterol was proportional to the hematocrit, suggesting serum cholesterol may be in equilibrium with the cholesterol reservoir of the total red cell mass (Westerman 1975). Consistent with such equilibration, tritiated cholesterol incorporated into sickled erythrocytes is rapidly exchanged with plasma lipoproteins (Ngogang, et al 1989). Thus, low plasma cholesterol appears to be a consequence of anemia itself rather than of increased RBC production (Westerman 1975).

Total cholesterol, in particular LDL-C, has a well-established role in atherosclerosis. The low levels of LDL-C in SCD are consistent with the low levels of total cholesterol and the virtual absence of atherosclerosis among SCD patients. Decreased HDL-C in SCD has also been documented in some previous studies (Sasaki, et al 1983, Stone, et al 1990). As in lipid studies of other disorders in which HDL-C is variably low, potential reasons for inconsistencies between studies include differences in age, diet, weight, smoking, gender, small sample sizes, different ranges of disease severity, and other diseases and treatments (Choy and Sattar 2009, Gotto A 2003). Decreased HDL-C and apoA-I are known risk factors for endothelial dysfunction in the general population and in SCD, and a potential contributor to PH in SCD, although the latter effect size might be small (Yuditskaya, et al 2009).

In addition, triglyceride levels have been reported to increase during crisis. Why is increased triglyceride, but not cholesterol, in serum associated with vascular dysfunction and pulmonary hypertension? Studies in atherosclerosis have firmly established that lipolysis of oxidized LDL in particular results in vascular dysfunction. Lipolysis of triglycerides present in triglyceride-rich lipoproteins releases neutral and oxidized free fatty acids that induce endothelial cell inflammation (Wang, et al 2009). Many oxidized fatty acids are more damaging to the endothelium than their non-oxidized precursors; for example, 13-hydroxyoctadecadienoic acid (13-HODE) is a more potent inducer of ROS activity in HAECs than linoleate, the non-oxidized precursor of 13-HODE (Wang, et al 2009). Lipolytic generation of arachidonic acid, eicosanoids, and inflammatory molecules leading to vascular dysfunction is a well-established phenomenon (Boyanovsky and Webb 2009). Although LDL-C levels are decreased in SCD patients, LDL from SCD patients is more susceptible to oxidation and cytotoxicity to endothelium (Belcher, et al 1999), and an unfavorable plasma fatty acid composition has been associated with clinical severity of SCD (Ren, et al 2006). Lipolysis of phospholipids in lipoproteins or cell membranes by secretory phospholipase A2 (sPLA2) family members releases similarly harmful fatty acids, particularly in an oxidative environment (Boyanovsky and Webb 2009), and in fact selective PLA2 inhibitors are currently under development as potential therapeutic agents for atherosclerotic cardiovascular disease (Rosenson 2009). Finally, sPLA2 activity has been linked to lung disease in SCD. sPLA2 is elevated in acute chest syndrome of SCD and, in conjunction with fever, preliminarily appears to be a good biomarker for diagnosis, prediction and prevention of acute chest syndrome (Styles, et al 2000). The deleterious effects of phospholipid hydrolysis on lung vasculature predict similar deleterious effects of triglyceride hydrolysis, particularly in the oxidatively stressed environment of SCD.

Elevated triglycerides have been documented in autoimmune inflammatory diseases with increased risk of vascular dysfunction and pulmonary hypertension, including systemic lupus erythematosus, scleroderma, rheumatoid arthritis, and mixed connective tissue diseases (Choy and Sattar 2009; Galie, et al 2005). In fact, triglyceride concentration is a stronger predictor of stroke than LDL-C or TC (Amarenco and Labreuche 2009). Even in healthy control subjects, a high-fat meal induces oxidative stress and inflammation, resulting in endothelial dysfunction and vasoconstriction (O'Keefe, et al 2008). Perhaps high levels of plasma triglycerides promote vascular dysfunction generally, with the clinical outcome of vasculopathy appearing mainly in the coronary and cerebral arteries in the general population, and with greater targeting of the pulmonary vascular bed in SCD and autoimmune diseases.

The mechanisms leading to hypocholesterolemia and hypertriglyceridemia in the plasma or serum of SCD patients are not completely understood. In normal individuals, triglyceride levels are determined to a significant degree by body weight, diet and physical exercise, as well as by concurrent diabetes. Diet and physical exercise very likely affect body weight and triglyceride levels in SCD patients as well, indicating that standard risk factors for high triglycerides are also relevant to SCD. The mechanisms of SCD-specific risk factors for elevated plasma triglycerides are less clear. RBCs do not carry out de novo lipid synthesis (Kuypers 2008). In SCD the rate of triglyceride synthesis from glycerol is elevated up to 4-fold in sickled reticulocytes (Lane, et al 1976), but SCD patients have defects in post-absorptive plasma homeostasis of fatty acids (Buchowski, et al 2007). Lipoproteins and albumin in plasma can contribute fatty acids to red blood cells for incorporation into membrane phospholipids (Kuypers 2008), but RBC membranes are not triglyceride-rich and contributions of RBCs to plasma triglyceride levels have not been described. Interestingly, chronic intermittent or stable hypoxia merely from exposure to high altitude, with no underlying disease, is sufficient to increase triglyceride levels in healthy subjects (Siques, et al 2007). Thus, it has also been suggested that hypoxia in SCD may contribute at least partially to the observed increase in serum triglyceride. Finally, there is a known link between low cholesterol and increased triglycerides in the primate acute phase response, such as during infection and inflammation (Khovidhunkit, et al 2004). Perhaps because of their chronic hemolysis, SCD patients mount a low-grade acute phase response, which would also be consistent with their other inflammatory markers. Further studies are required to elucidate the mechanisms leading to hypocholesterolemia and hypertriglyceridemia in SCD.

Pulmonary hypertension is a disease of the vasculature that shows many similarities with the vascular dysfunction that occurs in coronary atherosclerosis (Kato and Gladwin 2008). Both involve proliferative vascular smooth muscle cells, just in different vascular beds. Both feature an impaired nitric oxide axis, increased oxidant stress, and vascular dysfunction. Most importantly, serum triglyceride levels, previously linked to vascular dysfunction, correlate with NT-proBNP and TRV and thus with pulmonary hypertension. Moreover, triglyceride levels are predictive of TRV independent of systolic blood pressure, low transferrin or increased lactate dehydrogenase.
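
To make the adjusted claim concrete, here is a minimal sketch of how "predictive of TRV independent of covariates" could be tested as a multiple linear regression; it is an illustration under assumed variable names (trv, tg, sbp, transferrin, ldh), not the analysis actually performed in the studies cited.

    # Illustrative only: does tg predict trv after adjusting for the other covariates?
    # File and variable names are hypothetical placeholders.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("scd_lipid_cohort.csv")  # hypothetical cohort table
    model = smf.ols("trv ~ tg + sbp + transferrin + ldh", data=df).fit()
    print(model.summary())  # a significant tg coefficient supports independent prediction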

PAH in SCD is also characterized by oxidant stress, but in SCD patients plasma total cholesterol (TC) and low density lipoprotein cholesterol (LDL-C) are low. There have been some reports of low HDL cholesterol (HDL-C) (refs 17, 18) and increased triglyceride in SCD patients, features widely recognized as important contributory factors in cardiovascular disease. These findings, and the therapeutic potential to modulate serum lipids with several commonly used drugs, prompted us to investigate in greater detail the serum lipid profile in patients with sickle hemoglobinopathy (SH) coming to our hospital and its possible relationship to vasculopathic complications such as PAH.

Gender and Caste – The Cry for Identity of Women

INTRODUCTION

'Bodies are not just biological phenomena but a complex social creation onto which meanings have been variously composed and imposed according to time and space.' These social creations differentiate the two biological personalities into Man and Woman, and meanings are imposed on their qualities on the basis of gender, which defines them as He and She.

The question then arises: a woman, who is she? According to me, a woman is one who is empowered, enlightened, enthusiastic and energetic. A woman is all about sharing. She is an exceptional personality who encourages and embraces. If a woman is considered a mark of patience and courage, then why, even today, is there a lack of identity in her personality? She is subordinated to man and often discriminated against on the basis of gender.

The entire life of a woman revolves around patriarchal existence: she is dominated by her father in childhood, by her husband in the next phase of her life, and by her son in the later phase, which leaves no space for her own independence.

The psychological and physical identity of a woman is defined through the role and control of men: the terrible triad of father-husband-son. The boundaries of women are always constrained by male dominance. Gender discrimination is not only a historical concept; it still exists in contemporary Indian society.

Indian society, in every part of its existence, experiences a ferocious gender conflict, projected every day in the newspapers, on news channels, or even while walking on the streets. The horror of patriarchal domination exists in every corner of Indian society. The status of Indian women has been declining over the centuries.

Turning the pages of history, in pre-Aryan India God was female and life was represented in the form of mother Earth. People worshipped the mother Goddess through fertility symbols. The Shakti cult of Hinduism regards women as the source and embodiment of cosmic power and energy. Woman power can also be seen in Goddess Durga, who lured her husband Shiva from asceticism.

The religious and social condition changed abruptly when the Aryan Brahmins eliminated the Shakti cult and power was placed in the hands of the male group. They considered the male deities to be the husbands of the female goddesses, vesting dominance in the male. Marriage involved male control over female sexuality. Even the identity of the mother goddess was dominated by the male gods. As Mrinal Pande writes, 'to control women, it becomes necessary to control the womb and so Hinduism, Judaism, Islam and Christianity have all stipulated, at one time or another, that the whole area of reproductive activity must be firmly monitored by law and lawmakers'.

The issue of identity crisis for a woman

The identity of a woman is erased as she becomes a mere reproductive machine ruled and dominated by male laws. From the time she is born she is taught that one day she has to get married and go to her husband's house. Thus she belongs neither to her own house nor to her husband's house, leaving a mark on her identity. The Vedic times, however, proved to be a boon in the lives of women, as they enjoyed freedom of choice in respect of husbands and could marry at a mature age. Widows could remarry and women could divorce.

The segregation of women continued to raise the same question of identity: the Chandogya Upanishad, a religious text of the pre-Buddhist era, contains a prayer of spiritual aspirants which says, 'May I never, ever, enter that reddish, white, toothless, slippery and slimy yoni of the woman'. During this time, control over women included reclusion and exclusion, and they were even denied education. Women and shudras were treated as the minority class in society. Rights and privileges given to women were cancelled, and girls were married at a very early age. Caste structure also played a great role, as women were now discriminated against within their own caste on the basis of gender.

According to Liddle, women were controlled in two respects: firstly, they were disinherited from ancestral property and the economy, and were expected to remain within the domestic sphere, known as purdah. The second aspect was the control of men over female sexuality. The death rituals of family members were performed by sons, and no daughter had the right to light her parents' funeral pyre.

A stifling patriarchal shadow hangs over the lives of women throughout India. From all regions, castes and classes of society, women are victims of its repressive, controlling effects. Those subjected to the heaviest weight of discrimination are from the Dalit or 'Scheduled Castes', referred to in less liberal democratic times as the 'Untouchables'. The name may have been banned, but pervasive negative attitudes of mind remain, as do the appalling levels of abuse and servitude experienced by Dalit women. They encounter multiple levels of discrimination and exploitation, much of which is barbaric, degrading, horrifyingly violent and utterly inhuman. The divisive caste system, in operation throughout India, 'Old' and 'New', together with prejudiced gender attitudes, sits at the heart of the enormous human rights abuses experienced by Dalit or 'outcaste' women.

The lower castes are segregated from other members of the community: prohibited from eating with 'higher' castes, from using village wells and ponds, from entering village temples and higher-caste houses, from wearing shoes or even holding umbrellas in front of the higher castes; they are forced to sit alone and use separate crockery in restaurants, banned from riding a bicycle inside their village, and made to bury their dead in a separate cemetery. They frequently face eviction from their land by higher 'dominant' castes, forcing them to live on the outskirts of villages, often on barren land.

This plethora of prejudice amounts to apartheid, and the time has come, long overdue, for the 'democratic' government of India to enforce existing legislation and purge the country of the criminality of caste- and gender-based discrimination and abuse.

The power play of patriarchy soaks every area of Indian society and gives rise to an assortment of discriminatory practices, such as female infanticide, discrimination against girls, and dowry-related deaths. It is a major cause of the exploitation and abuse of women, with a great deal of sexual violence being perpetrated by men in positions of power. These range from higher-caste men abusing lower-caste women, particularly Dalits; to policemen abusing women from poor households; to military men abusing Dalit and Adivasi women in insurgency states such as Kashmir, Chhattisgarh, Jharkhand, Orissa and Manipur. Security personnel are protected by the widely condemned Armed Forces Special Powers Act, which grants impunity to police and members of the military carrying out criminal acts of rape and indeed murder; it was promulgated by the British in 1942 as an emergency measure to suppress the Quit India Movement. It is an unjust law, which needs repealing.

In December 2012 the horrific gang rape and mutilation of a 23-year-old paramedical student in New Delhi, who subsequently died from her injuries, attracted worldwide media attention, putting a momentary spotlight on the dangers, oppression and appalling treatment women in India face every day. Rape is endemic in the country. With most cases of rape going unreported and many being dismissed by police, the true figure could be ten times this. The women most at risk of abuse are Dalits: the NCRB estimates that more than four Dalit women are raped every day in India. A UN study reveals that 'the majority of Dalit women report having faced one or more incidents of verbal abuse (62.4 per cent), physical assault (54.8 per cent), sexual harassment and assault (46.8 per cent), domestic violence (43.0 per cent) and rape (23.2 per cent)'. They are subjected to 'rape, assault, kidnapping, abduction, homicide, physical and mental torture, immoral trafficking and sexual abuse.'

The UN found that large numbers were deterred from seeking justice: in 17 per cent of instances of violence (including rape), victims were prevented from reporting the crime by the police; in more than 25 per cent of cases, the community stopped women from filing complaints; and in more than 40 per cent, women 'did not attempt to obtain legal or community remedies for the violence primarily out of fear of the perpetrators or social dishonour if (sexual) violence was revealed'. In just 1 per cent of recorded cases were the perpetrators convicted. What 'follows incidents of violence', the UN found, is 'a resounding silence'. The effect with regard to Dalit women in particular, though not exclusively, 'is the creation and maintenance of a culture of violence, silence and impunity'.

Caste discrimination faced by women in contemporary times

The Indian constitution enshrines the 'principle of non-discrimination on the basis of caste or gender'. It guarantees the 'right to life and to security of life'. Article 46 specifically 'protects Dalits from social injustice and all forms of exploitation'. Add to this the important Scheduled Castes and Tribes (Prevention of Atrocities) Act of 1989, and a well-armed legislative arsenal is formed. However, because of 'low levels of implementation', the UN states, 'the provisions that protect women's rights must be regarded as empty of meaning'. It is a familiar Indian story: judicial indifference (and cost, lack of access to legal representation, endless red tape and obstructive staff), police corruption and government complacency, plus media apathy, create the major obstacles to justice and to the observance and enforcement of the law.

Unlike middle-class girls, Dalit rape victims (whose numbers are growing) rarely receive the attention of the caste- and class-conscious, urban-centric media, whose primary concern is to promote a glossy Bollywood, open-for-business image of the country.

A 20-year-old Dalit woman from the Santali tribal group in West Bengal was gang raped, reportedly 'on the orders of village elders who objected to her relationship (which had been going on in secret for some time) with a man from a neighbouring village' in the Birbhum district. The savage incident occurred when, according to a BBC report, the man visited the woman's home with a proposal of marriage; villagers spotted him and organised a kangaroo court. During the 'proceedings' the headman of the woman's village fined the couple 25,000 rupees (400 US dollars; GBP 240) for 'the crime of falling in love'. The man paid, but the woman's family were unable to pay. Consequently, the 'headman' and 12 of his companions repeatedly raped her. Violence, exploitation and exclusion are used to keep Dalit women in a position of subordination and to maintain the patriarchal grip on power throughout Indian society.

The cities are unsafe places for women, yet it is in the countryside, where most people live (70 per cent), that the greatest levels of abuse occur. Many living in rural areas live in extreme poverty (800 million people in India live on under 2.50 dollars a day), with little or no access to healthcare, poor education, and appalling or non-existent sanitation. It is a world apart from democratic Delhi or Westernised Mumbai: water, electricity, democracy and the rule of law have yet to reach the lives of the women in India's villages, which are home, Mahatma Gandhi famously declared, to the soul of the nation.

No surprise, then, that after two decades of economic growth, India finds itself languishing 136th (of 186 countries) in the (gender-equality-adjusted) United Nations Human Development Index.

Crude notions of gender inequality

Indian society is divided in numerous ways: caste/class, gender, wealth and poverty, and religion. Entrenched patriarchy and gender divisions, which value boys over girls and keep men and women, boys and girls, apart, combine with child marriage to contribute to the creation of a society in which the sexual abuse and exploitation of women, particularly Dalit women, is an accepted part of everyday life.

Sociologically and psychologically conditioned into division, schoolchildren separate themselves along gender lines; in many areas women sit on one side of buses, men on the other; special women-only carriages have been introduced on the Delhi and Mumbai metro to shield women from sexual harassment, or 'eve teasing' as it is colloquially known. Such safety measures, while welcomed by women and women's groups, do not deal with the underlying causes of abuse and in a way may further inflame them.

Rape, sexual violence, assault and harassment are rife, yet, with the exception perhaps of the Bollywood Mumbai set, sex is a taboo subject. A survey conducted by India Today in 2011 found that 25 per cent of people had no objection to sex before marriage, provided it was not in their family.

Sociological separation fuels gender divisions, reinforces prejudiced stereotypes and feeds sexual repression, which many women's organisations believe accounts for the high rate of sexual violence. A recent study of men's attitudes towards women in India, carried out by the International Center for Research on Women, produced some startling statistics: one in four admitted having 'used sexual violence (against a partner or against any woman)', and one in five reported using 'sexual violence against a stable [female] partner'. Half of men do not want to see gender equality, 80 per cent regard changing nappies, feeding and washing children as 'women's work', and a mere 16 per cent play a part in household duties. Added to these inhibiting attitudes of mind, homophobia is the norm, with 92 per cent admitting they would be ashamed to have a gay friend, or even to be in the vicinity of a gay man.

All in all, India is cursed by an inventory of Victorian gender stereotypes, fuelled by a caste system designed to oppress, which trap both men and women in conditioned cells of separation where destructive ideas of sex are allowed to ferment, producing explosions of sexual violence, exploitation and abuse. Studies of caste have begun to engage with issues of rights, resources, and recognition/representation, demonstrating the extent to which caste must be recognised as central to the narrative of India's political development. For instance, scholars are becoming increasingly aware of the extent to which radical thinkers such as Ambedkar, Periyar, and Phule demanded the acknowledgement of histories of exploitation, ritual humiliation, and political disenfranchisement as constituting the lives of the lower castes, even as such histories also formed the fraught past from which escape was sought.

Scholars have pointed to Mandal as the formative moment in the 'new' national politics of caste, especially for having radicalised dalitbahujans in the politically critical states of the Hindi belt. Hence Mandal may be a convenient, though overdetermined, vantage point from which to analyse the state's contradictory and ineffective investment in the discourse of lower-caste entitlement, throwing open to examination the political practices and ideologies that animate parliamentary democracy in India as a historical formation.

Tharu and Niranjana (1996) have noted the visibility of caste and gender issues in the post-Mandal context and describe it as a contradictory formation. For instance, there were struggles by upper-caste women to challenge reservations by understanding them as concessions, and the large-scale participation of school-going women in the anti-Mandal agitation in order to claim equal treatment, rather than reservations, in struggles for gender equality. On the other hand, lower-caste male assertion regularly targeted upper-caste women, creating an unresolved dilemma for upper-caste feminists who had been pro-Mandal. The relationship between caste and gender never seemed more awkward. The demand for reservations for women (and for further reservations for dalit women and women from the Backward Class and Other Backward Communities) can likewise be seen as an outgrowth of a renewed attempt to address caste and gender issues from within the terrain of politics. It may also demonstrate the inadequacy of focusing exclusively on gender in assembling a statistical 'solution' to the political problem of visibility and representation.

Emerging out of the 33 per cent reservations for women in local Panchayats, and plainly at odds with the Mandal protests that equated reservations with notions of inferiority, the recent demands for reservations mark a move away from the historical suspicion of reservations for women. As Mary John has argued, women's vulnerability must be seen in the context of the political displacements that mark the emergence of minorities before the state.

The question of political representation and the framing of gendered vulnerability are connected issues. As I have argued in my essay included in this volume, such vulnerability is the mark of the gendered subject's singularity. It is that form of injured existence that brings her within the frame of political legibility as different, yet eligible, for general forms of redress. As such, it is central to political discourses of rights and recognition.

Political demands for reservations for women, and for lower-caste women, supplement academic efforts to understand the deep cleavages between women of different castes that contemporary events, such as Mandal or the Hindutva movement, have revealed. In exploring the challenges posed by Mandal to dominant conceptions of secular selfhood, Vivek Dhareshwar pointed to convergences between reading for and recovering the presence of caste as a silenced public discourse in contemporary India, and similar practices by feminists who had explored the unacknowledged weight of gendered identity.

Dhareshwar suggested that theorists of caste and theorists of gender might consider elective affinities in their methods of analysis, and deliberately embrace their stigmatised identities (caste, gender) in order to draw public attention to them as political identities. Dhareshwar argued this would demonstrate the extent to which secularism had been maintained as another form of upper-caste privilege, the luxury of ignoring caste, as against the demands for social justice by dalitbahujans who were calling for a public acknowledgement of such privilege.

Women and Dalits considered the same

Malik observes, in 'Untouchability and Dalit Women's Oppression', that 'it remains a matter of reflection that those who have been actively involved with organising women experience difficulties that are nowhere addressed in a theoretical literature whose foundational principles are derived from a sprinkling of normative theories of rights, liberal political theory, an ill-informed left politics and, more recently, occasionally, even a well-meaning tradition of "entitlements".' Malik in effect asks how we are to understand Dalit women's vulnerability.

Caste relations are embedded in Dalit women's profoundly unequal access to the resources of basic survival, such as water and sanitation facilities, as well as to educational institutions, public places, and sites of religious worship. At the same time, the material impoverishment of Dalits and their political disenfranchisement perpetuate the symbolic structures of untouchability, which legitimate upper-caste sexual access to Dalit women. Caste relations are also changing, and new forms of violence in independent India that target symbols of Dalit emancipation, such as the desecration of statues of Dalit leaders, attempts to counteract Dalits' socio-political advancement by dispossessing them of land, or the denial of Dalits' political rights, are aimed at Dalits' perceived social mobility. These newer forms of violence are regularly supplemented by the sexual harassment and assault of Dalit women, pointing to the caste-based and gendered forms of vulnerability that Dalit women experience.

As Gabriele Dietrich notes in her essay 'Dalit Movements and Women's Movements', Dalit women have been targets of upper-caste violence. At the same time, Dalit women have also functioned as the 'property' of Dalit men. Lower-caste men are also engaged in a complex set of fantasies of retribution that involve the sexual violation of upper-caste women in retaliation for their own emasculation by caste society. The dangerous positioning of Dalit women as sexual property in both instances overdetermines Dalit women's identity solely in terms of their sexual availability.

Girls: Household Servants

When a boy is born in most developing countries, friends and relatives shout congratulations. A son means security. He will inherit his father's property and get a job to support the family. When a girl is born, the reaction is very different. Some women weep when they find their baby is a girl because, to them, a daughter is just another expense. Her place is in the home, not in the world of men. In some parts of India, it is traditional to greet a family with a newborn girl by saying, 'The servant of your household has been born.'

A girl cannot help but feel inferior when everything around her tells her that she is worth less than a boy. Her identity is forged as her family and society restrict her opportunities and declare her to be second-rate.

A combination of extreme poverty and deep biases against women creates a remorseless cycle of discrimination that keeps girls in developing countries from fulfilling their full potential. It also leaves them vulnerable to severe physical and emotional abuse. These 'servants of the household' come to accept that life will never be any different.

Greatest Obstacles Affecting Girls

Discrimination against girls and women in the developing world is a devastating reality. It results in millions of individual tragedies, which add up to lost potential for entire countries. Studies show there is a direct link between a country's attitude towards women and its social and economic progress. The status of women is central to the health of a society: if one half suffers, so does the whole.

Tragically, female children are most exposed to the trauma of gender discrimination. The following obstacles are stark examples of what girls worldwide face. But the good news is that new generations of girls represent the most promising source of change for women, and men, in the developing world today.

Dowry

In developing countries, the birth of a girl causes great upheaval for poor families. When there is barely enough food to survive, any child puts a strain on a family's resources. But the economic drain of a daughter feels even more acute, especially in regions where dowry is practised.

A dowry is the goods and money a bride's family pays to the husband's family. Originally intended to help with marriage expenses, dowry came to be seen as payment to the groom's family for taking on the burden of another woman. In some countries, dowries are extravagant, costing years of wages and often throwing a woman's family into debt. The dowry practice makes the prospect of having a girl even more distasteful to poor families. It also puts girls in danger: a new bride is at the mercy of her in-laws should they decide her dowry is too small. UNICEF estimates that around 5,000 Indian women are killed in dowry-related incidents every year.

Neglect

The developing world is full of poverty-stricken families who see their daughters as an economic burden. That attitude has resulted in the widespread neglect of baby girls in Africa, Asia, and South America. In many communities, it is standard practice to breastfeed girls for a shorter time than boys, so that women can try to get pregnant again with a boy as soon as possible. As a result, girls miss out on nurturing nutrition during a critical window of their development, which stunts their growth and weakens their resistance to disease.

Statistics show that the neglect continues as they grow up. Girls receive less food, less healthcare and fewer vaccinations overall than boys. Not much changes as they become women. Tradition calls for women to eat last, often reduced to picking over the leftovers from the men and boys.

Infanticide and Sex-Selective Abortion

In extreme cases, parents make the terrible decision to end their baby girl's life. One woman named Lakshmi from Tamil Nadu, an impoverished region of India, fed her baby sap from an oleander bush mixed with castor oil until the girl bled from the nose and died. 'A daughter is always a liability. How can I bring up a second?' said Lakshmi, explaining why she ended her child's life. 'Instead of her suffering the way I do, I thought it was better to get rid of her.'

Sex-selective abortions are even more common than infanticides in India. They are becoming ever more frequent as technology makes it simple and cheap to determine a foetus's sex. In Jaipur, a Western Indian city of 2 million people, 3,500 sex-determined abortions are carried out each year. The sex ratio across India has dropped to an unnatural low of 927 females to 1,000 males as a result of infanticide and sex-selective abortion.

China has its own long legacy of female infanticide. In the last two decades, the government's notorious one-child policy has damaged the country's record even further. By restricting household size to limit the population, the policy gives parents just one chance to produce a coveted son before being forced to pay heavy fines for additional children. In 1997, the World Health Organization declared that 'more than 50 million women were estimated to be missing in China because of the institutionalized killing and neglect of girls due to Beijing's population control program.' The Chinese government acknowledges that sex-selective abortion is one major explanation for the staggering number of Chinese girls who have simply vanished from the population in the last 20 years.

Abuse

Even after infancy, the threat of physical harm follows girls throughout their lives. Women in every society are vulnerable to abuse. But the threat is more severe for girls and women who live in societies where women's rights mean practically nothing. Mothers who lack their own rights have little protection to offer their daughters, much less themselves, from male relatives and other authority figures. The frequency of rape and violent attacks against women in the developing world is alarming. Forty-five percent of Ethiopian women say that they have been assaulted in their lifetimes. In 1998, 48 percent of Palestinian women admitted to being abused by an intimate partner within the past year.

In some societies, the physical and mental injury of rape is compounded by an additional stigma. In societies that maintain strict sexual codes for women, if a woman steps out of line, by choosing her own husband, flirting in public, or seeking divorce from an abusive partner, she has brought dishonour to her family and must be disciplined. Often, discipline means execution. Families commit 'honour killings' to salvage a reputation tainted by disobedient women.

Shockingly, this 'disobedience' includes rape. In 1999, a 16-year-old mentally disabled girl in Pakistan who had been raped was brought before her tribe's judicial council. Even though she was the victim and her attacker had been arrested, the council decided she had brought shame to the tribe and ordered her public execution. This case, which received a great deal of publicity at the time, is not unusual. Three women fall victim to honour killings in Pakistan every day, including victims of rape. In areas of Asia, the Middle East, and even Europe, all responsibility for sexual crime falls, by default, to women.

Work

For the girls who escape these pitfalls and grow up relatively safely, daily life is still unimaginably hard. School may be a possibility for a few years, but most girls are pulled out at age 9 or 10, when they are useful enough to work all day at home. Nine million more girls than boys miss out on school every year, according to UNICEF. While their brothers continue going to classes or pursue their hobbies and play, the girls join the women to do the bulk of the housework.

Housework in developing countries consists of constant, difficult physical labour. A girl is likely to work from before sunrise until the light drains away. She walks barefoot over long distances several times a day carrying heavy pails of water, most likely contaminated, just to keep her family alive. She cleans, grinds corn, gathers fuel, tends the fields, bathes her younger siblings, and prepares meals until she sits down to her own after all the men in the family have eaten. Most families cannot afford modern appliances, so her tasks must be done by hand: crushing corn into meal with heavy rocks, scrubbing laundry against rough stones, kneading bread and cooking gruel over a scorching open fire. There is no time left in the day to learn to read and write or to play with friends. She collapses exhausted each night, ready to get up the next morning to start another long workday.

The majority of this work is performed without recognition or reward. UN statistics show that although women produce half of the world's food, they own just 1 percent of its farmland. In most African and Asian countries, women's work is not regarded as real work. Should a woman take a job, she is expected to keep up all of her duties at home in addition to her new ones, with no extra help. Women's work goes unacknowledged, even though it is crucial to the survival of every family.

Sex Trafficking

Some families decide it is more lucrative to send their daughters to a nearby town or city to take jobs that usually involve hard labour and little pay. That desperate need for money leaves girls easy prey for sex traffickers, particularly in Southeast Asia, where international tourism feeds the illicit business. In Thailand, the sex trade has swelled unchecked into a major part of the national economy. Families in small villages along the Chinese border are regularly approached by recruiters called 'aunties' who ask for their daughters in exchange for a year's wages. Most Thai farmers earn just $150 a year. The offer can be too tempting to refuse.

Would it be moral to legalise Euthanasia in the UK?

The word 'morality' seems to be used in both descriptive and normative senses. More particularly, the term 'morality' can be used either (Stanford Encyclopaedia of Philosophy, https://plato.stanford.edu/entries/morality-definition):

1. descriptively: referring to codes of conduct advocated by a society or a sub-group (e.g. a religion or social group), or adopted by an individual to justify their own beliefs,

or

2. normatively: describing codes of conduct that in specified conditions, should be accepted by all rational members of the group being considered.

Examination of ethical theories applied to Euthanasia

Thomas Aquinas' natural law holds that the goodness of actions is assessed against eternal law as a reference point. Eternal law, in his view, is a higher authority, and the process of reasoning defines the differences between right and wrong. Natural law thinking is not concerned only with narrow aspects, but considers the whole person and their infinite future. Aquinas would have linked this to God's predetermined plan for that individual and to heaven. The morality of Catholic belief is heavily influenced by natural law. Primary precepts should be weighed when considering issues involving euthanasia, particularly the key precepts to do good and oppose evil, and to preserve life, upholding the sanctity of life. Divine law set out in the Bible states that we are created in God's image and held together by God from our time in the womb. The Catholic Church's teachings maintain that euthanasia is wrong (Pastoral Constitution, Gaudium et Spes no. 27, 1965), as life is sacred and God-given (Declaration on Euthanasia 1980). This view can be seen to be just as strongly held and applied today in the very recent case of Alfie Evans, where papal intervention was significant and public. Terminating life through euthanasia goes against divine law. Ending a life, and with it the possibility of that life bringing love into the world, or of love coming into the world in response to the person euthanised, is wrong. To take a life by euthanasia, according to Catholic belief, is to reject God's plan for that individual to live their life. Suicide, or intentionally ending life, is a wrong equal to murder and as such is to be considered a rejection of God's loving plan (Declaration on Euthanasia, 1.3, 1980).

The Catholic Church interprets natural law to mean that euthanasia is wrong and that those involved in it are committing a wrongful and sinful act. Whilst the objectives of euthanasia may appear to be good, in that they seek to ease suffering and pain, they in fact fail to recognise the greater good of the sanctity of life within God's greater plan, which includes people other than the person suffering, and eternal life in heaven.

The conclusions of natural law consider the position of life in general, and not just the ending of a single life. For example, were euthanasia lawful, older people could become fearful of admission to hospital in case they were drawn into euthanasia. It could also lead to people being attracted to euthanasia at times when they were depressed. This can be seen to attack the principle of living well together in society, as good people could be hurt. Natural law also makes some predictions, of the slippery slope and floodgates kind, about hypothetical situations. Euthanasia therefore clearly undermines some primary precepts.

Catholicism accepts that disproportionately onerous treatment is not appropriate towards the end of a person's life, and recognises a moral obligation not to strive to keep a person alive at all costs. An example would be a terminally ill cancer patient deciding not to accept further chemotherapy or radiotherapy which could extend their life, but at great cost to the quality of that remaining life. Natural law does not seem to prevent them from making these kinds of choices.

There is also the doctrine of double effect: palliative care with the relief of pain and distress as its objective might have the secondary effect of ending life earlier than if more active treatment options had been pursued. The motivation is not to kill, but rather to ease pain and distress. An example is a doctor's decision to increase opiate dosage to the point where respiratory arrest becomes almost inevitable, while at all times the intended motivation is the easing of pain and distress. This has on various occasions been upheld as legally and morally acceptable by the courts and by medical watchdogs such as the GMC (General Medical Council).

The catechism of the Catholic Church accepts this and views such decisions as best made by the patient, if competent and able, and if not, by those legally and professionally entitled to act for the individual concerned.

There are other circumstances in which the person involved might not be the same type of person as is assumed by natural law: for example, someone with severe brain damage who is in a persistent coma or 'brain-dead'. In these situations, they may not possess the defining characteristics of a person, which could form a justification for euthanasia. The doctors or relatives caring for such a patient may suffer conflicts of conscience, since by being unable to show compassion they prolong suffering, not only of the patient but of those surrounding them.

In his book Morals and Medicine, published in 1954, Fletcher, the president of the Euthanasia Society of America, argued that there were no absolute standards of morality in medical treatment and that good ethics demand consideration of the patient's condition and the situation surrounding it.

Fletcher's Situation Ethics avoids legalistic consideration of moral decisions. It is anchored only in actual situations, and specifically in unconditional love for the care of others. When considering euthanasia with this approach, the answer will always 'depend upon the situation'.

From the viewpoint of an absolutist, morality is innate from birth. It can be argued that natural law does not change as a result of personal opinions; it remains unchanged. Natural law offers a positive view of morality, as it allows people from differing backgrounds, classes and situations to have sustainable moral laws to follow.

Religious believers also follow the principles of natural law, as the underlying theology argues that morality remains the same and never changes with an individual's personal opinions or decisions. Christianity as a religion has great support amongst its believers for there being a natural law of morality. Christian understanding of this concept derives largely from Thomas Aquinas, following his teaching on the close connection of faith and reason as arguments for a natural law of morality.

Natural law has been shown over time to have compelling arguments in its favour, one being its all-inclusiveness and fixed stature, in contrast to the relative approach to morality. Natural law is objective and is consequently abiding and eternal. It is considered to be within us, innate, and is seen to arise from a mixture of faith and reason to form an intelligent and rational being who is faithful in belief in God. Natural law is part of human nature, commencing from the beginning of our lives when we gain our sense of right and wrong.

However, there are also many disadvantages of natural law with regard to resolving moral problems. Its precepts are not always self-evident. We are unable to confirm whether there is only one global purpose for humanity; even if humanity had a purpose for its existence, this purpose cannot be seen as self-evident. The perception of natural beings and things changes over generations, with the forms of different times being more fitting to the culture of the present. It can therefore be argued that supposedly absolute morality is changed and altered by cultural beliefs of right and wrong, with some things later coming to be perceived as wrong; this leads one to believe that defining what is natural is almost impossible, as moral judgements are ever-changing. The notion of actuality being better than potentiality also cannot easily transfer to practical ethics: the future holds many potential outcomes, but some of these potential outcomes are 'wrong'. (Hodder Education, 2016)

The claim that natural law is the best way to resolve moral problems holds strong appeal; however, its strict formation means there is some confusion as to what is right and wrong in certain situations. These views are instead formed by society, which does not always follow the natural law of morality. Darwin's theory of evolution, put forward in On the Origin of Species in 1859, challenged natural law with the notion that living things strive for survival (survival of the fittest), supporting his theory of evolution by natural selection. It can be argued that solving moral problems by natural law may be possible, but is not necessarily the best solution.

For many years, euthanasia has been a controversial debate across the globe, with different people taking opposing sides and arguing in support of their opinions. Broadly, it is the act of allowing an individual to die in a painless manner, for example by withholding their medication. It is commonly classified into different forms: voluntary, involuntary and non-voluntary. The legal system has been actively involved in this debate. A major concern put forward is that legalising any form of euthanasia may engage the slippery slope principle, which holds that permitting anything comparatively harmless today may begin a trend that results in unacceptable practices. Although one popular stance argues that voluntary euthanasia is morally acceptable while non-voluntary euthanasia is always wrong, the courts have been split in their decisions in various instances. (Oxford for OCR Religious Studies, 2016)

Voluntary euthanasia is defined as the killing of an individual, with their consent, by various means. The arguments that voluntary euthanasia is morally acceptable are drawn from the expressed wishes of a patient. So far as respecting an individual's decision does not harm other people, it is held to be morally correct. Since individuals have the right to make personal choices about their lives, their decisions about how they should die should also be respected. Most importantly, at times it remains the only option for assuring the well-being of the patient, especially if they are suffering incessant and severe pain. Despite these claims, several cases have emerged, and the courts have continued to refuse to uphold the morality of euthanasia irrespective of a person's consent. One of these is the case of Diane Pretty, who suffered from motor neurone disease. Because she was afraid of dying by choking/aspiration, a common end-of-life event experienced by many motor neurone disease sufferers, she sought legal assurance that her husband would be free from the threat of prosecution if he assisted her to end her life. Her case went through the Court of Appeal, the House of Lords (the Supreme Court in today's system) and the European Court of Human Rights. However, owing to concerns raised under the slippery slope principle, the judges denied her request, and she lost the case.

There have been many legal and legislative battles attempting to change the law to support voluntary euthanasia in varying circumstances. Between 2002 and 2006 Lord Joel Joffe (a patron of the Dignity in Dying organisation) fought to change the law in the UK to support assisted dying. His first Assisted Dying (Patient) Bill reached a second reading (June 2003) but ran out of time to progress to the committee stage. Joffe persisted, however, and in 2004 renewed his campaign with the Assisted Dying for the Terminally Ill Bill, which progressed further than the earlier bill, reaching the committee stage in 2006. The committee stated: "In the event that another bill of this nature should be introduced into Parliament, it should, following a formal Second Reading, be sent to a committee of the whole House for examination". Unfortunately, in May 2006 an amendment at the second reading led to the collapse of the bill. This was a surprise to Joffe, as the majority of the select committee had been on board with it. In addition, calls for a statute supporting voluntary euthanasia have increased, as evidenced by the significant numbers of people in recent years travelling to Switzerland, where physician-assisted suicide is legal under permitted circumstances. Lord Joffe expressed these thoughts in an article written for the Dignity in Dying campaign in 2014, shortly before his death in 2017, in support of Lord Falconer's Assisted Dying Bill, which proposed to permit "terminally ill, mentally competent adults to have an assisted death after being approved by doctors" (Falconer's Assisted Dying Bill, Dignity in Dying, 2014). The journey of this bill was followed in the documentary referenced below.

The BBC documentary 'How to Die: Simon's Choice' followed the decline of Simon Binner from motor neurone disease and his subsequent fight for an assisted death. The documentary followed his journey to Switzerland for a legal assisted death and documented the reactions of his family. During filming, a bill was being debated in parliament proposing to legalise assisted dying in the United Kingdom. The bill (Lord Falconer's Assisted Dying Bill) would allow a person to request a lethal injection if they had less than six months left to live; this raised a myriad of issues, including how precisely to define when someone has more or less than six months left to live. The Archbishop of Canterbury, Justin Welby, urged MPs to reject the bill, stating that Britain would be crossing a 'legal and ethical Rubicon' if parliament were to vote to allow the terminally ill to be actively assisted to die at home in the UK under medical supervision. The leaders of the British Jewish, Muslim, Sikh and Christian religious communities wrote a joint open letter to all members of the British parliament urging them to oppose the bill (The Guardian, 2015). After announcing his death on LinkedIn, Simon Binner died at an assisted dying clinic in Switzerland. The passing of this bill might have been the only way of helping Simon Binner die in his home country, but assisted dying remained unlawful. (Deacon, 2016)

The private member's bill, originally proposed by Rob Marris (a Labour MP from Wolverhampton), ended in defeat, with 330 MPs against and 118 in favour. (The Financial Times, 2015)

The 1961 Suicide Act (Legislation, 1961) decriminalised suicide; however, it did not make it morally licit. It outlines that a person who aids, abets, counsels or procures the suicide of another, or an attempt by another to commit suicide, shall be liable to a prison term of up to 14 years. It also provided that where a defendant is on trial on indictment for murder or manslaughter and it is proved that the accused aided, abetted, counselled or procured the suicide of the person in question, the jury may find them guilty of that offence as an alternative verdict.

Many took the view that the law supports the principle of autonomy, but the act was used to reinforce the sanctity of life principle by criminalising any form of assisted suicide. Although the act does not hold the position that all life is equally valuable, there have been cases where allowing a person to die would have been the better outcome.

In the case of non-voluntary euthanasia, patients are incapable of giving their approval for death to be induced. It mostly occurs where a patient is very young, has severe learning disabilities or extreme brain damage, or is in a coma. Opponents argue that human life should be respected, and that this case is even worse because the patient's wishes cannot be factored into decisions to end their life. As a result, they hold it to be morally wrong irrespective of the conditions the patient faces; in such a case, all parties involved should wait for a natural death while according the patient the best palliative medical attention possible. The case of Terri Schiavo, who suffered from bulimia and extreme brain damage, falls under this argument. The court ruling allowing her husband's request to have her life ended triggered heated debate, with some arguing that it was wrong while others saw it as a relief, since she had spent so many years of her life unresponsive.

I completed primary research in order to support my findings as to whether it would be moral to legalise euthanasia in the UK. With regard to understanding the correct definition of euthanasia, nine out of ten people who took part in the questionnaire selected the correct definition of physician-assisted suicide: "The voluntary termination of one's life by administration of a lethal substance with the direct or indirect assistance of a physician" (Medicanet, 2017). The one person who selected a wrong definition believed it to be "The involuntary termination of one's own life by administration of a lethal substance with the direct or indirect assistance of a physician". The third definition on the questionnaire stated that physician-assisted suicide was "The voluntary termination of one's own life by committing suicide without the help of others"; this was the obviously incorrect answer, and no participant selected it.

The moral views of the young are instructive here. From the results of my primary research, completed by a selected youth audience, seventy percent agreed that people should have the right to choose when they die. However, only twenty percent of this audience agreed that they would assist a friend or family member in helping them die. This drop in support may be explained by the fear of prosecution and a possible fourteen-year prison sentence for assisting in a person's death.

The effect of the Debbie Purdy case (2009) was that guidelines were established by the Director of Public Prosecutions in England and Wales (assisted dying is not a specific statutory offence in Scotland; however, there is no legal way to access it medically there). These guidelines were established, according to the Director of Public Prosecutions, to "clarify what his position is as to the factors that he regards as relevant for and against prosecution" (DID Prosecution Policy, 2010). The guidance outlines factors that make prosecution 'more likely': an assistor with a history of violent behaviour, who did not know the person, who received financial gain from the act, or who acted as a medical professional is more likely to face prosecution. Despite these factors, the policy states that police and prosecutors should examine any financial gain with a 'common sense' approach, since many people benefit financially from the loss of a loved one; the fact that the assistor was, for example, a close relative relieving the person of pain should weigh more heavily when prosecution is considered.

The argument that voluntary euthanasia is morally right while involuntary euthanasia is wrong remains one of the most controversial issues, even in modern society. It is all the more significant because the legal systems themselves remain split in their rulings in cases such as those cited. Given the slippery slope argument, care should be taken when determining what is morally right and wrong because of the sanctity of human life. Many consider that the law has led to considerable confusion and that one way of improving the present situation would be to create a new Act permitting physician-assisted dying, with the proposal stating that there should be a bill to "enable a competent adult who is suffering unbearably as a result of a terminal illness to receive medical assistance to die at his own considered/persistent request… to make provision for a person suffering from a terminal illness to receive pain relief medication" (Assisted Dying for the Terminally Ill Bill, 2004).

There is a major moral objection to voluntary euthanasia under the reasoning of the "slippery slope" argument: the fear that what begins as legitimate grounds to assist in a person's death will come to permit death in other, illegitimate circumstances.

In a letter to The Times newspaper (24/8/04), John Haldane and Alasdair MacIntyre, along with other academics, lawyers and philosophers, suggested that supporters of the Bill had shifted its condition from actual unbearable suffering caused by terminal illness to merely the fear, discomfort and loss of dignity which terminal illness might bring. In addition, if quality of life is grounds for euthanasia for those who request it, it must arguably also be open to those who do not, or cannot, request it, presenting the issue of a slippery slope. In the same letter, the academics referenced euthanasia in the Netherlands, where it is legal, to imply that many people have died against their wishes because of safeguarding failures. (Hodder Education, 2016)

The slippery slope argument does not help those in particular individual situations, and it must surely be wrong to shy away from making difficult decisions on the grounds that an individual should endure prolonged suffering in order to protect society from the possible over-use of any legalisation. In practice, some form of euthanasia has been going on in the UK over the past half century: doctors give obvious over-dosage of opiates in terminal cases, but have been shielded from the legal consequences by the almost fictional notion that, as long as the motivation was to ease and control pain, the inevitable consequence of respiratory arrest (respiratory suppression being a side effect of morphine-type drugs) made the action lawful.

The discredited and now defunct Liverpool Care Pathway for the Dying Patient (LCP) was an administrative tool intended to help UK healthcare professionals manage the care pathway and decide palliative care options for patients at the very end of life. As with many such tick-the-box exercises, individual discretion was restricted in an attempt to standardise practice nationally (Wales was excluded from the LCP). The biggest problem with the LCP, which attracted much adverse media attention and public concern in 2012, was that most patients or their families were not consulted when patients were placed on the pathway. It had options for withdrawing active treatment whilst actively managing distressing symptoms. However, removing intravenous hydration and feeding, by regarding them as active treatment, would inevitably lead to death in a relatively short time, making the decision to place a patient on the LCP because they were at the end of life a self-fulfilling prophecy. (Liverpool Care Pathway)

The consideration, in the last part of this lengthy document, of the cost of providing "just in case" boxes at approximately £25 as part of the advice to professionals may seem chilling to some. However, there is a moral factor in the financial implications of unnecessarily prolonging human life: should the greater good be considered when deciding whether to actively permit formal pathways to euthanasia or to take steps to prohibit it (through the crimes of murder or assisting suicide)? In the recent highly publicised case of Alfie Evans, enormous financial resources were used to keep a child with a terminal degenerative neurological disease alive on a paediatric intensive care unit at Alder Hey hospital in Liverpool for around a year. In deciding to do this, it was inevitable that those resources would be unavailable to treat others who might have gone on to survive and live a life. Huge sums were spent both on medical resources and on lawyers, and the case became a highly publicised media circus, resulting in ugly threats against the medical staff at the hospital concerned. There was international intervention in the case by the Vatican and by Italy (with the granting of Italian nationality to the child). Whilst the parents' emotional turmoil was tragic and the case very sad, was it moral that their own beliefs and lack of understanding of the medical issues involved should lead to such a diversion of resources and such terrible effects on those caring for the boy?

(NICE (National Institute for Health and Care Excellence) guidelines, 2015)

The General Medical Council (GMC) governs the licensing and professional conduct of doctors in the UK. It has produced guidance for doctors on the medical role at the end of life, Treatment and care towards the end of life: good practice in decision making. This gives comprehensive advice on some of the fundamental issues in end-of-life treatment, covering matters such as living wills (where requests for the withdrawal of treatment can be set out in writing and in advance). These are professionally binding, but as ever there are some caveats regarding the withdrawal of life-prolonging treatment.

It also sets out presumptions of a duty to prolong life and of a patient's capacity to make decisions, in line with established legal and ethical viewpoints. In particular, it states that "decisions concerning life prolonging treatments must not be motivated by a desire to bring about a patient's death" (Good Medical Practice, GMC Guidance to Doctors, 2014).

Formerly, the Hippocratic Oath was sworn by all doctors and set out a sound basis for moral decision making and professional conduct. In modern translation from the original ancient Greek, it states with regard to medical treatment that a doctor should never treat "… with a view to injury and wrong-doing. Neither will [a doctor] administer a poison to anybody when asked to do so, nor will [a doctor] suggest such a course." Doctors in the UK do not swear the oath today, but most of its principles are internationally accepted, except perhaps in the controversial areas surrounding abortion and end-of-life care.

(Hippocratic Oath, Medicanet)

In conclusion, upon considering the morality arguments on both sides of the debate, I found that the two forms of morality considered (Natural Law and Situation Ethics) give two opposing responses to the question.

From the viewpoint of a deontologist guided by natural law, duty and obligation, arguably derived from religion, would lead a society to decide that it would be wrong to legalise euthanasia. From the viewpoint of a situational ethicist, however, whose position changes according to the individual situation, one could support the campaign to legalise voluntary euthanasia in the UK under guidelines that account for differing situations.

After completing my primary and secondary research, and considering the many unsuccessful bills put through Parliament to legalise euthanasia as well as many case studies, including the moving account of Simon Binner's fight to die, my own view rests with the situational ethicist: depending on the individual situation, people should have the right to die in their own country through the legalisation of voluntary euthanasia, rather than being forced to travel abroad to access a legal form of voluntary euthanasia and risk their loved ones being prosecuted on their return to the UK for assisting them.

At the end of the day, much of the management of the end of life of patients is determined not by stipulations laid out by committees in lengthy documents, but by the individual treatment decisions made by individual doctors and nurses, who are almost always acting in the best interests of patients and their families. The practice of accelerating the inevitable by medication or by withdrawal of treatment is almost impossible to standardise across a hospital or local community care setup, let alone a country. It may be better to continue the practice of centuries: let the morality and conscience of the treating professions determine what happens, and keep the formal moral, religious and legal factors involved in such areas in the shadows.


Has the cost of R&D impacted vaccine development for Covid-19?

Introduction

This report will investigate and try to answer the question: 'To what extent have the cost requirements of R&D, the structure of the industry and government subsidy affected firms in the pharmaceutical industry in developing vaccines for Covid-19?'. The past two years have been very unpredictable for the pharmaceutical industry owing to the outbreak of the COVID-19 pandemic. Although the industry has made major contributions to human wellbeing, reducing suffering and ill health for over a century, it remains one of the least trusted industries in public opinion, often compared to the nuclear industry in terms of trustworthiness. Despite pharmaceuticals being one of the riskiest industries to invest in, governments have subsidised billions towards the production of the COVID-19 vaccines. And despite the associated risks, a large part of the public still thinks pharmaceuticals should continue to be produced and developed in order to provide the correct treatment to those with existing health issues (Taylor, 2015). These and other aspects of the cost requirements of R&D, the structure of the industry and government subsidy, and how they have affected firms in the pharmaceutical industry in developing the COVID-19 vaccines, are discussed further in this report.

The Costs of R&D

In 2019, the pharmaceutical industry spent $83 billion on R&D, roughly ten times what it spent in the 1980s. Most of this amount was dedicated to discovering and testing new drugs and to clinical testing of drug safety. In 2019 drug companies dedicated a quarter of their annual income to R&D, almost double the share in the early 2000s.

(Pharmaceutical R&D Expenditure Shows Significant Growth, 2019)

The amount drug companies spend on R&D for a new drug is usually based on the financial return they expect to make, on policies influencing the supply of and demand for drugs, and on the cost of developing those drugs.

Most recently approved drugs have been specialty drugs: drugs that typically treat complex, chronic or rare conditions and can require patient monitoring. However, specialty drugs are very expensive to develop, pricey for the customer and hard to replicate (Research and Development in the Pharmaceutical Industry, 2021).

Government subsidies for the COVID-19 vaccines

There are two main ways in which a federal government can directly support vaccine development: it can promise in advance to purchase a successful vaccine once the firm has achieved its specified goal, or it can cover the costs associated with the R&D of the vaccine.

(Which Companies Received The Most Covid-19 Vaccine R&D Funding?, 2021)

In May 2020 the Department of Health and Human Services launched 'Operation Warp Speed', a collaborative project in which the FDA, the Department of Defense, the National Institutes of Health and the Centers for Disease Control and Prevention worked together to fund COVID-19 vaccine development. Through 'Operation Warp Speed' the federal government provided more than $19 billion in funding to help seven private pharmaceutical manufacturers research and develop COVID-19 vaccines. Five of the seven went on to accept further funding to boost their production capabilities, and a sixth company later accepted funding to help boost the production of another company's vaccine once it received emergency use authorization. Six of the seven also entered advance purchase agreements, and two of those companies, having sold more doses than expected under the agreements, received additional funding so that they could produce more vaccines to distribute. Running in parallel numerous stages of development that would normally proceed consecutively allowed pharmaceutical manufacturers to reach their end goal and produce vaccines at a far higher rate than is normal for vaccines. This was done because of the urgency of finding a solution to the COVID-19 pandemic, which was causing public uproar and panic across nations. Within months of the first COVID-19 diagnosis in the US, two vaccines had already reached Phase III clinical trials; this is immensely quick, as it usually takes several years of research for a vaccine to reach Phase III. The World Health Organisation reported that there were already over 200 COVID-19 vaccine candidates in development by February 2021 (Research and Development in the Pharmaceutical Industry, 2021).

(Research and Development in the Pharmaceutical Industry, 2021)

The image referenced above shows which vaccines were at which stage of development and when. It illustrates the urgency behind developing and producing these vaccines to fight the outbreak of the coronavirus. Without government subsidies, firms would have been nowhere near completing the research and development needed to produce numerous COVID-19 vaccines, which shows the importance of government subsidies to the pharmaceutical industry and to the development of new drugs and vaccines.

Impact of the structure of the pharmaceutical industry on vaccine development

Many different names in the pharmaceutical industry took part in the development of the COVID-19 vaccines. As far as the majority of society is concerned, however, the pharmaceutical industry is just a small group of large multinational corporations such as GlaxoSmithKline, Novartis, AstraZeneca, Pfizer and Roche. These are frowned upon by the public, stereotyped as 'Big Pharma', and such perceptions can be misleading. Many people have doubts about these big multinational corporations, especially given the influence they have on people's health and on the drugs they develop. It becomes hard for the public to rely on and trust these companies because, at the end of the day, it is their health they are entrusting to them. It is therefore understandable that many people had, and still have, suspicions about the COVID-19 vaccines developed by a handful of these companies. If you were to ask someone whether they had ever heard of companies like Mylan or Teva, they would probably have no clue about them, even though Teva is the world's 11th biggest pharmaceutical company and probably produces medicines such people take on a regular basis. The fact that over 90% of pharmaceutical companies are almost invisible to the general public means that when it does become known who manufactured a medicine someone is considering taking, for example the Pfizer vaccine, people are likely to be careful and suspicious about taking it, having heard little about the company before. All this despite the fact that these companies are responsible for producing the majority of the medicines that everyone takes.

Most new drugs never make it onto the market, as the drug is found not to work or to have serious side effects, making it unethical to use on patients. The small percentage of drugs that do make it onto the market are patented, meaning the original manufacturer holds temporary exclusive rights to sell the product. Once the patent has expired, the pharmaceutical is free to be manufactured and sold by anyone, making it a generic pharmaceutical (Taylor, 2015).

This does not help research pharmaceutical companies: their developments, once out of patent, are simply sold on by the generic pharmaceutical companies from which everyone buys their pharmaceuticals. Generic pharmaceutical companies therefore almost never have a failed product, while the research companies struggle to get a successful product onto the market. The public consequently often does not know that the majority of the drugs they buy come from these research companies and did not originate with the generic pharmaceutical company they buy them from.

As seen with the COVID-19 vaccines, this caused considerable uncertainty and distress amongst the public, as many people had never even heard of companies like 'Pfizer' or 'AstraZeneca'. This in turn made it more difficult for pharmaceutical companies to successfully manufacture and sell their vaccines, prolonging the whole vaccination process.

This structure of the pharmaceutical industry has therefore greatly affected firms' ability to manufacture vaccines against COVID-19 successfully and credibly.

Conclusion

Looking at the three factors combined, the cost requirements of R&D, the structure of the industry and government subsidy, it is clear that all have had a great impact on the development of the COVID-19 vaccines. The costs associated with R&D essentially determined how successful the vaccines would be and whether firms would have enough, first to do the necessary research and then to produce and sell the vaccines. Without the large sums that go into the development of vaccines and other drugs, the COVID-19 vaccines could never have been manufactured and sold, which would have left the world in even greater panic and uproar. Had this happened, there could easily have been ripple effects on economies, on social factors and potentially even on environmental factors.

One of the biggest impacts on the successful manufacturing and sale of the vaccines came from the structure of the industry. With big research pharmaceutical companies putting in all the work and effort to develop the COVID-19 vaccines, yet most of the general public never having heard of them before, it was very hard for pharmaceutical companies to come across as reliable. People did not trust the vaccines because they did not know the company that had developed them, such as Pfizer. This caused debate and protest against the vaccines, making it harder for companies to produce and successfully sell them to a public that needed and demanded them. This stems from a major flaw in the pharmaceutical industry: companies such as Pfizer and AstraZeneca are kept out of public view, barely known because their products are taken up and sold on by the generic pharmaceutical companies people buy from. It also reflects the fact that research pharmaceutical companies specialise in advanced drugs rather than more generic drugs, which are more likely to succeed because they are easier to develop. Naturally the lack of successful products reflects negatively on these companies, and even the products they do successfully bring to market can be frowned upon because of their previously non-viable products.

Finally, probably the second or joint most important factor is government subsidy. It is quite clear that without the correct government funding and without 'Operation Warp Speed' we would still be trying to develop even the first COVID-19 vaccine, as there would have been nowhere near enough funding for the R&D of the vaccines. This would have caused the death rate from coronavirus infections to spike, and would probably have brought the economy to a complete standstill, putting a large number of people out of work. All of this has numerous ripple effects, as the loss of work alone could raise the poverty rate immensely, leaving economies broken. So overall, these three factors have had a huge impact on firms in the pharmaceutical industry in developing the COVID-19 vaccines.


Gender in Design

Gender has always had a dominant place in design. Kirkham and Attfield, in their 1996 book The Gendered Object, set out their view that certain genders seem to be unconsciously attached to some objects as the norm. How gender is viewed in modern-day design is radically different from twenty-plus years ago, in that this normalisation is now recognised. Seeing international companies recognise this change and adapt their brands to a modern approach influences designers like myself to keep up to date, and it affects my own work.

When designing, there is a gender system that some people follow very strictly: a guide built on values that reveal the gender formation in mankind. Within the gender system there are binary oppositions expressed in colour, size, feeling and shape, for example pink/blue, small/large, smooth/rough and organic/geometric. Even without context, these words immediately connote male or female. Gender is traditionally defined as male or female, but modern-day brands are challenging and pushing these established boundaries; they do not think definitions should be as restrictive or prescriptive as they have been in the past. Kirkham and Attfield challenge this by comparing perceptions in the early twentieth century, illustrating that societal norms were then the opposite of what today's gender norms lead us to believe. A good example is that the crude binary opposition implicit in 'pink for a little girl and blue for a boy' was only established in the 1930s; babies and parents managed perfectly well without such colour coding before then. Today, through marketing and product targeting, these 'definitions' are even more widely used in the design and marketing of children's clothes and objects than a few years ago. Importantly, such binary oppositions also influence those who purchase objects and, in this case, facilitate the pleasure many adults take in seeing small humans visibly marked as gendered beings. This is now being further challenged by demands for non-binary identification.

This initial point made by Kirkham and Attfield in 1996 is still valid. Even though designers and brands are, in essence, guilty of a form of discrimination by falling in line with established gender norms, they do it because it is what their consumers want and how they see business developing and profit being created: these stereotypical 'norms' are seen as normal, acceptable and sub-consciously recognisable. "Thus we sometimes fail to appreciate the effects that particular notions of femininity and masculinity have on the conception, design, advertising, purchase, giving and uses of objects, as well as on their critical and popular reception". (Kirkham and Attfield. 1996. The Gendered Object, p. 1).

Through product language, gendered toys and clothes appear from an early age, with products sorted as 'for girls' and 'for boys' in the store, as identified by Ehrnberger, Rasanen and Ilstedt in their 2012 article 'Visualising Gender Norms in Design' in the International Journal of Design. Product language is mostly used in the branding aspect of design: how a product or object is portrayed, not only what the written language says. It relates to how the object is showcased through colours, shapes and patterns. A modern example is the branding of the Yorkie chocolate bar, whose slogan, 'Not for girls', was publicly known for its gender bias towards men. There is no hiding the fact that the language the company uses is targeted at men: it promotes a brand that is strong, chunky and 'hard' in an unsophisticated way, all of which has connotations of being 'male', arguably even 'alpha male', to make it more attractive to men. The chosen colours suggest the same, using navy blue, dark purple, yellow and red, which are bold and form a typically 'male' palette. Another example is the advertisement of tissues. Tissues do exactly the same thing wherever you buy them and irrespective of gender, so why are some tissues targeted at women and some at men? Could it be that this gender targeting, by avoiding neutrality, helps sell more tissues?

Product language is very gender-specific when it comes to clothing brands and toys for kids. "Girls should wear princess dresses, play with dolls and toy housework products, while boys should wear dark clothes with prints of skulls or dinosaurs, and should play with war toys and construction kits". (Ehrnberger, Rasanen, Ilstedt, 2012. Visualising Gender Norms in Design. International Journal of Design). When branding things for children, the separation between girl and boy is extremely common: using language like 'action', which has male connotations, or 'princess', which has female connotations, appeals to consumers because the words are relatable to them and to their children. In modern society most people find it difficult not to identify blue for boys and pink for girls, especially from birth. Walk into any department store, toy store or other store that caters to children and you will see the separation between the genders, whether in clothes, toys or anything in between. The separation is made obvious through the colour branding used: on the girls' side pink, yellow and lilac, soft, bright, happy colours, applied to everything from baby dolls to hats and scarves; conversely, on the boys' side blue, green and black, bold, dark, more primary colours, used on everything from trucks to a pair of trousers.

Some companies have begun to notice how detrimental this separation is becoming and how it could hold back the advancement and opening up of our society; one example is the John Lewis Partnership.

John Lewis is a massive department store that has been trading for more than a century. In 2017 it decided to scrap the separate girls' and boys' sections for the clothing range in its stores and name the range 'Childs wear', a gender-neutral name, allowing it to design clothing that children can wear without being told 'no, that is a boys' top, you can't wear that because you're a girl', or vice versa. Caroline Bettis, head of children's wear at John Lewis, said: "We do not want to reinforce gender stereotypes within our John Lewis collections and instead want to provide greater choice and variety to our customers, so that the parent or child can choose what they would like to wear". Possibly the only issue with this stance is the price point: John Lewis is typically known as a higher-priced high street store, which means it is not accessible for everyone to shop there. The campaign group Let Clothes be Clothes commented on this: "Higher-end, independent clothing retailers have been more pro-active at creating gender-neutral collections, but we hope unisex ranges will filter down to all price points. We still see many of the supermarkets, for example, using stereotypical slogans on their clothing." (http://www.telegraph.co.uk/news/2017/09/02/john-lewis-removes-boys-girls-labels-childrens-clothes/)

Having a very well-known brand make this move should encourage and inspire others to join in with the development. This change is a bold use of product language: it is not confined to one specific product but extends to advertising and marketing as well, amounting to a rebrand of the whole company. By not using gender-specific words it removes the automatic stereotypes attached to buying anything for children.

Equality is the state of being equal, be it in status, rights or opportunities, so when it comes to design why does this attribute get forgotten? This is not a feminist rant: gender inequality affects both males and females in the design world, and when designing, everything should be equal and fair to both sexes. "Gender equality and equity in design is often highlighted, but it often results in producing designs that highlight the differences between men and women, although both the needs and characteristics vary more between individuals than between genders" (Hyde 2005). Hyde's point is still contemporary and relevant. Having gender equality in design is very important, but gender is not the sole issue: things can be designed for a specific gender, yet even if you are female you might not relate to the gender-specific clothes for your sex. Design is about making and creating something for someone, not just for a gender. "Post-feminism argues that in an increasingly fragmented and diverse world, defining one's identity as male or female is irrelevant, and can be detrimental". (https://www.cl.cam.ac.uk/events/experiencingcriticaltheory/Satchell-WomenArePeople.pdf).

In recent years many up-and-coming independent brands and companies have been launching unisex clothing lines, and most were doing so, and pushing the movement, well before gender equality in design became a mainstream media issue. One company pushing against gender norms is Toogood London; another is GFW, Gender Free World. Gender Free World was created by a group of people who think on the same wavelength when it comes to gender equality. In fact their mission statement sets this out as a core ethos (and is arguably an influence on John Lewis, given the transferability of the phraseology): "GFW Clothing was founded in 2015 (part of Gender Free World Ltd) by a consortium of like-minded individuals who passionately believe that what we have in our pants has disproportionately restricted the access to choice of clothing on the high street and online." https://www.genderfreeworld.com/pages/about-g. Lisa Honan is the co-founder of GFW; her main reason for starting a company like this was 'sheer frustration' at the lack of options for her taste and style on the market. She had shopped in both male and female departments but never found anything that fitted, especially when she went for a piece of men's clothing. In an interview with Saner, Honan commented that men's shirts didn't fit her because she had a woman's body, and it got her thinking: 'why is there a man's aisle and a woman's aisle, and why do you have to make that choice?'. She saw that you cannot make many purchases without being forced to define your own gender, which reinforces the separation between genders in fashion. If she felt this way, many others must too; and they do, or there would not be such a potentially big business opportunity in it.

In my design practice of Communication Design, gender plays a huge role, from colour choices to the typefaces used. Most of the work communication designers create and produce will either represent a brand or actually brand a company, so potential gender stereotyping should be considered when choosing options. The points discussed above, on the gender system, product language, gender norms and equality and equity in design, serve as a caution to graphic designers not to fall into such pitfalls when designing.

Designing doesn’t mean simply male or female, designing means to create and produce ‘something’ for ‘someone’ no matter their identifiable or chosen gender. If they are a company producing products targeted specifically at men and after a robust design concept examination I felt that using blue would enhance their brand and awareness to their target demographic then blue would be used, in just the same way using pink for them if it works for the customer, then put simply it works.

To conclude, exploring the key points of gender in the design world only showcases how many issues remain.


The stigma surrounding mental illness

Mental illness is defined as a health problem resulting from complex interactions between an individual's mind, body and environment which can significantly affect their behavior, actions and thought processes. A variety of mental illnesses exist, impacting the body and mind differently and affecting the individual's mental, social and physical wellbeing to varying degrees. A range of psychological treatments have been developed to assist people living with mental illness; however, social stigma can prevent individuals from successfully engaging with these treatments. Social or public stigma is characterized by discriminatory behavior and prejudicial attitudes towards people with mental health problems resulting from the psychiatric label they possess (Link, Cullen, Struening & Shrout, 1989). The stigma surrounding labelling oneself with a mental illness makes individuals hesitant to seek help and resistant to treatment options. Stigma and its effects can vary depending on demographic factors including age, gender, occupation and community. Many strategies are in place to attempt to reduce stigma levels, focusing on educating people and changing their attitudes towards mental health.

Prejudice, discrimination and ignorance surrounding mental illnesses result in a public stigma which has a variety of negative social effects on individuals with mental health problems (Thornicroft et al., 2007). An understanding of how stigma forms can be gained through the Attribution Model, which identifies four steps in the formation of a stigma (Link & Phelan, 2001). The first step is 'labelling', whereby key traits are recognized as portraying a significant difference. The next step is 'stereotyping', whereby these differences are defined as undesirable characteristics, followed by 'separating', which makes a distinction between 'normal' people and the stereotyped group. Stereotypes surrounding mental illnesses have been developing for centuries, with early beliefs being that individuals suffering from mental health problems were possessed by demons or spirits. 'Explanations' such as these promoted discrimination within the community, preventing individuals from admitting any mental health problems for fear of retribution (Swanson, Holzer, Ganju & Jono, 1990). The final step in the Attribution Model described by Link and Phelan is 'status loss', which leads to the devaluing and rejection of individuals in the labelled group (Link & Phelan, 2001). An individual's desire to avoid the implications of public stigma causes them to avoid or drop out of treatment for fear of being associated with negative stereotypes (Corrigan, Druss and Perlick, 2001). One of the main stereotypes surrounding mental illness, especially depression and Post Traumatic Stress Disorder, is that people with these illnesses are dangerous and unpredictable (Wang & Lai, 2008). Wang and Lai carried out a survey in which 45% of participants considered people with depression dangerous; these results may be subject to some reporting bias, but a general inference can still be made. Another survey found that a large proportion of people confirmed they were less likely to employ someone with mental health problems (Reavley & Jorm, 2011). This study highlights how public stigma can affect employment opportunities, consequently creating a greater barrier for anyone who would benefit from seeking treatment.

Certain types of stigma are unique, and consequently more severe, for certain groups within society. Approximately 22 soldiers or veterans commit suicide every day in the United States due to Post Traumatic Stress Disorder (PTSD) and depression. A study surveying soldiers found that, of those who met the criteria for a mental illness, only 38% were interested in receiving help and only 23-30% actually received professional help (Hoge et al, 2004). There is an enormous stigma surrounding mental illness within the military, owing to the high value it places on mental fortitude, strength, endurance and self-sufficiency (Staff, 2004). A soldier who admits to having mental health problems is deemed not to adhere to these values, appearing weak or dependent, which places greater pressure on the individual to deny or hide any mental illness. Another factor in soldiers avoiding treatment is a fear of social exclusion, as it is common in military culture for some personnel to distance themselves socially from soldiers with mental health problems (Britt et al, 2007). This exclusion stems from the stereotype that mental health problems make a soldier unreliable, dangerous and unstable. Surprisingly, individuals with mental health problems who seek treatment are deemed more emotionally unstable than those who do not; thus the stigma surrounding therapy itself creates a barrier to starting or continuing treatment (Porath, 2002). Furthermore, soldiers face the fear that seeking treatment will negatively affect their career, both in and out of the military, with 46 percent of employers considering PTSD an obstacle when hiring veterans in a 2010 survey (Ousley, 2012). The stigma associated with mental illness in the military is extremely detrimental to soldiers' wellbeing, as it prevents them from seeking or successfully engaging in treatment for mental illnesses, with tragic consequences.

Adolescents and young adults with mental illness have the lowest rate of seeking professional help and treatment, despite the high occurrence of mental health problems (Rickwood, Deane & Wilson, 2007). Adolescents' unwillingness to seek help and treatment for mental health problems is catalyzed by the anticipation of negative responses from family, friends and school staff (Chandra & Minkovitz, 2006). A Queensland study of people aged 15–24 years showed that 39% of the males and 22% of the females reported that they would not request help for emotional or distressing problems (Donald, Dower, Lucke & Raphael, 2000). A 2010 survey of adolescents with mental health problems found that 46% described experiencing feelings of distrust, avoidance, pity and prejudice from family members, portraying how negative family responses and attitudes create a significant barrier to seeking help (Moses, 2010). Similarly, a study on adolescent depression noted that teenagers who felt more stigmatized, particularly within the family, were less likely to seek treatment (Meredith et al., 2009). Furthermore, adolescents with unsupportive parents may struggle to pay for treatment and transportation, further preventing successful treatment of the illness. Unfortunately, the generation of stigma is not unique to family members: adolescents also report having felt discriminated against by peers and even school staff (Moses, 2010). The first step to seeking help and engaging in treatment for mental illness is to acknowledge that there is a problem and to be comfortable enough to disclose this information to another person (Rickwood et al, 2005). However, in another 2010 study of adolescents, many expressed fear of being bullied by peers, leading to secrecy and shame (Kranke et al., 2010). The role of public stigma in generating this shame and denial is significant, and stigma can thus be identified as a factor preventing adolescents from seeking support for their mental health problems. A 2001 study testing the relationship between adherence to medication (in this case, antidepressants) and perceived stigma determined that individuals who accepted the antidepressants had lower perceived stigma levels (Sirey et al, 2001). This empirical data illustrates the correlation between public stigma and an individual's engagement in treatment, suggesting that stigma remains a barrier to treatment. Public stigma can therefore be defined as a causative factor in the majority of adolescents not seeking support or treatment for their mental health problems.

One of the main strategies used by society to reduce the public stigma surrounding mental illness is education. Educating people about the common misconceptions of mental health challenges inaccurate stereotypes and substitutes them with factual information (Corrigan et al., 2012). There is good evidence that people who have more information about mental health problems are less stigmatizing than people who are misinformed about them (Corrigan & Penn, 1999). The low cost and far-reaching nature of the educational approach are among its benefits. Educational approaches are often aimed at adolescents, as it is believed that educating children about mental illness can prevent stigma from emerging in adulthood (Corrigan et al., 2012). A 2001 study testing the effect of education on 152 students found that levels of stigmatization were lessened following the intervention (Corrigan et al, 2001); it also determined that combining a contact-based approach with the educational strategy would yield the greatest reduction in stigma. Studies have also shown that a short educational program can be effective at reducing individuals' negative attitudes toward mental illness and increasing their knowledge of the issue (Corrigan & O'Shaughnessy, 2007). The effect of an educational strategy varies depending on what type of information is communicated. The information provided should deliver realistic descriptions of mental health problems and their causes, as well as emphasizing the benefits of treatment. By delivering accurate information, the negative stereotypes surrounding mental illness can be reduced and the public's views on the controllability and treatment of psychological problems can be altered (Britt et al, 2007). Educational approaches mainly focus on improving knowledge and attitudes surrounding mental illness and do not focus directly on changing behavior; therefore a link cannot clearly be made as to whether educating people actually reduces discrimination. Although this remains a major limitation, educating people at an early age can help ensure that discrimination and stigmatization decrease in the future. Reducing the negative attitudes surrounding mental illness can encourage those suffering from mental health problems to seek help. Providing individuals with correct information about the mechanisms and benefits of treatment, such as psychotherapy or drugs like antidepressants, increases their own mental health literacy and therefore the likelihood of their seeking treatment (Jorm and Korten, 1997). People who are educated about mental health problems are less likely to believe or generate stigma surrounding mental illnesses, and so contribute to reducing stigma, which in turn increases levels of successful treatment for themselves and others.

The public stigma surrounding mental health problems is defined by negative attitudes, prejudice and discrimination. This negativity in society is deeply debilitating for any individual suffering from mental illness and creates a barrier to seeking help and engaging in successful treatment. The negative consequences of public stigma for individuals include being excluded, being passed over for jobs, and friends and family becoming socially distant. By educating people about the causes, symptoms and treatment of mental illnesses, stigma can be reduced, as misinformation is usually a key factor in the promotion of harmful stereotypes. An individual is more likely to engage in successful treatment if they accept their illness and if stigma is reduced.


Frederick Douglass, Malcolm X and Ida Wells

Civil Rights are "the rights to full legal, social, and economic equality". Following the American Civil War, slavery was officially abolished in the United States of America (US) on December 6th, 1865. The Fourteenth and Fifteenth Amendments established a legal framework for political equality for African Americans; many thought this would lead to equality between whites and blacks, but this was not the case. Despite slavery's abolition, Jim Crow racial segregation in the South meant that blacks were denied political rights and freedoms and continued to live in poverty and inequality. It took nearly 100 years of campaigning until the Civil Rights and Voting Rights Acts were passed, making it illegal to discriminate based on race, colour, religion, sex or national origin and ensuring minority voting rights. Martin Luther King was prominent in the Modern Civil Rights Movement (CRM), playing a key role in legislative and social change; his assassination in 1968 marked the end of a distinguished life helping millions of African Americans across the US. The contributions made throughout the period by black activists including the political campaigner Frederick Douglass, the militant Malcolm X and the journalist Ida Wells will be examined from political, social and economic perspectives. When comparing their significance to that of King, consideration must be given to the time in which each activist was operating and to prevailing social attitudes. Although King was undeniably significant, it was the combined efforts of all the black activists and the mass protest movement of the mid-20th century that eventually led to African Americans gaining civil rights.

The significance of King’s role is explored through Clayborne Carson’s, ‘The Papers of Martin Luther King’ (Appendix 1). Carson, a historian at Stanford University, suggests that “the black movement would probably have achieved its major legislative victory without King’s leadership” Carson does not believe King was pivotal in gaining civil rights, but that he quickened the process. The mass public support shown in the March on Washington, 1963, suggests that Carson is correct in arguing that the movement would have continued its course without King. However, it was King’s oratory skill in his ‘I have a Dream’ speech that was most significant. Carson suggests key events would still have taken place without King. “King did not initiate…” the Montgomery bus boycott rather Rosa Parks did. His analysis of the idea of a ‘mass movement’ furthers his argument of King’s less significant role. Carson suggests that ‘mass activism’ in the South resulted from socio-political forces rather than ‘the actions of a single leader’. King’s leadership was not vital to the movement gaining support and legislative change would have occurred regardless. The source’s tone is critical of his significance but passive in the dismissal of King’s role. Phrases such as “without King” are used to diminish him in a less aggressive manner. Carson, a civil rights historian with a PhD from UCLA has written books and documentaries including ‘Eyes on the Prize’ and so is qualified to judge. The source was published in 1992 in conjunction with King’s wife, Coretta, who took over as head of the CRM after King’s assassination and extended its role to include women’s rights and LGBT rights. Although this may make him subjective, he attacks King’s role suggesting he presents a balanced view. Carson produced his work two decades after the movement and three decades before the ‘Black Lives Matter’ marches of the 21st century, and so was less politically motivated in his interpretation. The purpose of his work was to edit and publish the papers of King on behalf of The King Institute to show King’s life and the CRM he inspired. Overall, Carson argues that King had significance in quickening the process of gaining civil rights however he believes that without his leadership, the campaigning would have taken a similar course and that US mass activism was the main driving force.

In his book ‘Martin Luther King Jr.’ (Appendix 2) historian Peter Ling argues, like Carson, that King was not important to the movement but differs suggesting it was other activists who brought success and not mass activism. Ling believes that ‘without the activities of the movement’ King might just have been another ‘Baptist preacher who spoke well.’ It can be inferred that Ling believes King was not vital to the CRM and was just a good orator.

Ling’s reference to activist Ella Baker 1903-86 who ‘complained that “the movement made Martin, not Martin the Movement”’ suggests the King’s political career was of more importance to him than the goal of civil rights. Baker told King she disapproved of his being hero worshipped and others argued that he was ‘taking too many bows and enjoying them’. Baker promoted activists working together, as seen through her influence in the Student Nonviolent Coordinating Committee (SNCC). Clearly many believed King was not the only individual to have an impact on the movement, and so Ling’s argument that multiple activists were significant is further highlighted.

Finally, Ling argues that 'others besides King set the pace for the Civil Rights Movement', which explicitly shows his view that the other activists working for the movement were the true heroes: they orchestrated events and activities, yet it was King who benefitted. However, King himself suggested that he was willing to use successful tactics proposed by others. The work of activists such as A. Philip Randolph, who organised the 1963 March, highlights how such individuals played a greater role than King in moving the CRM forward. The tone attacks King, using words such as 'criticisms' to diminish his role, while Ling's stated 'sympathy' for Miss Baker shows his positive tone towards other activists.

Ling was born in the UK, studied History at Royal Holloway College and took an MA in American Studies at the Institute of United States Studies, London. This gives Ling an international perspective, making him less subjective as he has no political motivations; nevertheless it also limits his interpretation in that he has no first-hand knowledge of civil rights in the US. The book was published in 2002, giving Ling hindsight, which makes his judgment more accurate and less subjective as he was no longer affected by King's influence; similarly, his knowledge of American history and the CRM lends his work accuracy. Unlike Carson, a black activist who attended the 1963 March, Ling, who is white, was born in 1956 and was not involved with the CRM, and so his interpretation is less informed by direct experience. A further limitation is his selectivity: he gives no attention to King's successes, including his inspiring 'I Have a Dream' speech. As a result it is not a balanced interpretation, and its value is thus limited.

Overall, although weaker than Carson's interpretation, Ling's argument is still of value in understanding King's significance. Both revisionists, the two historians agree that King was not the most significant factor in gaining civil rights, but differ on who or what was more important: Carson argues that mass activism was vital to success, whereas Ling credits other activists.

A popular pastor in the Baptist Church, King was the leader of the CRM when it achieved its black rights successes in the 1960s. He demonstrated the power of the church and the NAACP in the pursuit of civil rights. His oratory skills ensured many blacks and whites attended the protests and increased support. He understood the power of the media in taking his message to a wide audience and in putting pressure on the US government. The Birmingham campaign of 1963, where peaceful protestors including children were violently attacked by police, and the inspirational 'Letter from Birmingham Jail' that King wrote there, were heavily publicised, and US society gradually sympathised with the black 'victims'. Winning the Nobel Peace Prize gained the movement further international recognition. King's leadership was instrumental in the political achievements of the CRM, inspiring the grassroots activism needed to apply enough pressure on government, which behind-the-scenes activists like Baker had worked tirelessly to build. Nevertheless, there had been a generation of activists who played their parts, often through the church, publicising the movement, achieving early legislative victories and helping to kick-start the modern CRM and the idea of nonviolent civil disobedience. King's significance is that he was the figurehead of the movement at the time when civil rights were eventually won.

Pioneering activist Frederick Douglass (1818-95) had political significance to the CRM, holding federal positions which enabled him to influence government and Presidents throughout the Reconstruction era. He is often called the 'father of the civil rights movement'. Douglass held several prominent roles, including US Marshal for DC; he was the first black American to hold high office in government and, in 1872, the first African American nominated for US Vice President, particularly significant as blacks' involvement in politics was severely restricted at the time. Like King he was a brilliant orator, lecturing on civil rights in the US and abroad. Compared to King, Douglass was significant in the CRM: he promoted equality for blacks and whites, and although, unlike King, he did not ultimately achieve black civil rights, this was because he was confined by the era in which he lived.

The contribution of W.E.B. Du Bois (1868-1963) was significant as he laid the foundations for future black activists, including King, to build on. In 1909 he co-founded the National Association for the Advancement of Colored People (NAACP), the most important 20th-century black organisation other than the church. King became a member of the NAACP and used it to organise the bus boycott and other mass protests. The importance of Du Bois to the CRM, then, is that King's success depended on the NAACP; Du Bois is therefore of similar significance to King, if not more, in pursuing black civil rights.

Ray Stannard Baker’s 1908 article for The American Magazine speaks of Du Bois’ enthusiastic attitude to the CRM, his intelligence and his knowledge of African Americans (Appendix 3). The quotation from Du Bois at the end of the extract reads “Do not submit! agitate, object, fight,” showing he was not passive but preaching messages of rebellion. The article describes him with vocabulary such as “critical” and “impatient”, showing his radical, passionate side. Baker also states Du Bois’ contrasting opinions compared to Booker T. Washington, one of his contemporary black activists. This is evident when it says “his answer was the exact reverse of Washington’s”, demonstrating how he differed from the passive, ‘education for all’ Washington. Du Bois valued education, but believed in educating an elite few, the ‘talented tenth’, who could strive for rapid political change. The tone is positive towards Du Bois, praising him for being a ferocious character dedicated to achieving civil rights. Through phrases such as “his struggles and his aspirations” this dedicated and praising tone is developed. The American Magazine, founded in 1906, was an investigative US paper. Many contributors to the magazine were ‘muckraking’ journalists, meaning that they were reformists who attacked societal views and traditions. As a result, the magazine would be subjective, favouring the radical Du Bois, challenging the Jim Crow South and appealing to its radical target audience. The purpose of the source was to confront racism in the US, and so it would be politically motivated, making it subjective regarding civil rights. However, some evidence suggests that Du Bois was not radical; his Paris Exposition of 1900 showed the world real African Americans. Socially he made a major contribution to black pride, contributing to the black unity felt during the Harlem Renaissance. The Renaissance popularised black culture and so was a turning point in the movement; in the years after, the CRM grew in popularity and became a national issue. Finally, the source refers to his intelligence and educational prowess; he carried out economic studies for the US Government and was educated at Harvard and abroad. As a result, it can be inferred that Du Bois rose to prominence and made a significant contribution to the movement due to his intelligence and his understanding of US society and African American culture. One of the founders of the NAACP, his significance in attracting grassroots activists and uniting black people was vital. The NAACP leader Roy Wilkins, at the March on Washington, highlighted his contribution following his death the day before, saying, “his was the voice that was calling you to gather here today in this cause.” Wilkins is suggesting that Du Bois had started the process which led to the March.

Rosa Parks (1913-2005) and Charles Houston (1895-1950) were NAACP activists who benefitted from the work of Du Bois and achieved significant political success in the CRM. Parks, the ‘Mother of the Freedom Movement’, was the spark that ignited the modern CRM by protesting on a segregated bus. Following her refusal to move to the black area she was arrested. Parks, King and NAACP members staged a year-long bus boycott in Montgomery. Had it not been for Parks, King may never have had the opportunity to rise to prominence or enjoyed mass support for the movement, and so her activism was key in shaping King. The lawyer Houston helped defend black Americans, breaking down the deep-rooted discriminatory and segregationist laws in the South. It was his ground-breaking use of sociological theories that formed the basis of Brown v. Board of Education (1954), which ended segregation in schools. Although Houston is less prominent than King, his work was significant in reducing black discrimination, gaining him the nickname ‘The Man Who Killed Jim Crow’. Nonetheless, had Du Bois’ NAACP not existed, Parks and Houston would never have had an organisation to support them in their fight; likewise King would never have gained the mass support for civil rights.

Trade unionist A. Philip Randolph (1890-1979) brought about important political changes. His pioneering use of nonviolent confrontation had a significant impact on the CRM and was widely used throughout the 1950s and 60s. Randolph had become a prominent civil rights spokesman after organising the Brotherhood of Sleeping Car Porters in 1925, the first black-majority union. Mass unemployment after the US Depression led to civil rights becoming a political issue; US trade unions supported equal rights and black membership grew. Randolph was striving for political change that would bring equality. Aware of his influence, in 1941 he threatened a protest march which pressured President Roosevelt into issuing Executive Order 8802, an important early employment civil rights victory. There was a shift in the direction of the movement, focussing on the military, because after the Second World War black soldiers felt disenfranchised and became the ‘foot soldiers of the CRM’, fighting for equality in these mass protests. Randolph led peaceful protests which resulted in President Truman issuing Executive Order 9981, desegregating the Armed Forces, showing his key political significance. Significantly, this legislation was a catalyst leading to further desegregation laws. His contribution to the CRM, support of King’s leadership and masterminding of the 1963 March made his significance equal to King’s.

King realised that US society needed to change and, inspired by Gandhi, he too used non-violent mass protest to bring about change, including the Greensboro sit-ins to desegregate lunch counters. Similarly, the activist Booker T. Washington (1856-1915) significantly improved the lives of thousands of southern blacks who were poorly educated and trapped in poverty following Reconstruction, through his pioneering work in black education. He founded the Tuskegee Institute. In his book ‘Up from Slavery: An Autobiography’ (Appendix 4) he suggests that gaining civil rights would be difficult and slow, but that all blacks should work on improving themselves through education and hard work to peacefully push the movement forward. He says that “the according of the full exercise of political rights” will not be an “overnight gourd-vine affair” and that a black man should “deport himself modestly in regard of political claim”. This infers that Washington wanted peaceful protest and acknowledged the time it would take to gain equality, making his philosophy like King’s. Washington’s belief in using education to gain the skills to improve lives and fight for equality is evident through the Tuskegee Institute, which educated 2,000 blacks a year.

The tone of the source is peaceful, calling for justice in the South. Washington uses words such as “modestly” in an appeal for peace and “exact justice” to show how he believes in equal political rights for all. The reliability of the source is mixed. Washington is subjective as he wants his autobiography to be read, understood and supported. The intended audience would have been anyone in the US, particularly blacks, whom Washington wanted to inspire to protest, and white politicians who could advance civil rights. The source is accurate; it was written in 1901, during the Jim Crow era in the South. Washington would have been politically motivated in his autobiography, demanding legislative change to give blacks civil rights. There would also have been an educational factor that contributed to his writing: his Tuskegee Institute and educational philosophy had a deep impact on his autobiography.

The source shows how and why the unequal South should no longer be segregated. Washington was undoubtedly significant; as his reputation grew he became an important public speaker and is considered to have been a leading spokesman for black people and issues, like King. An excellent role model, a former slave who influenced statesmen, he was the first black American to dine with the President (Roosevelt) at the White House, showing blacks they could achieve anything. The activist Du Bois described him as “the one recognised spokesman of his 10 million fellows … the most striking thing in the history of the American Negro”. Although not as decisive in gaining civil rights as King, Washington was important in preparing blacks for urban and working life and in empowering the next generation of activists.

Inspired by Washington, the charismatic Jamaican radical activist Marcus Garvey (1880-1940) arrived in the US in 1916. Garvey had a social significance to the movement, striving to better the lives of US blacks. He rose to prominence during the ‘Great Migration’, when poor southern blacks were moving to the industrial North, turning Southern race problems into national ones. He founded the Universal Negro Improvement Association (UNIA), which had over 2,000,000 members in 1920. He appealed to discontented First World War black soldiers who had returned home to violent racial discrimination. The importance of the First World War was paramount in enabling Garvey to gain the vast support he did in the 1920s. Garvey published a newspaper, the Negro World, which spread his ideas about education and Pan-Africanism, the political union of all people of African descent. Garvey, like King, gained a greater audience for the CRM; in 1920 he led an international convention in Liberty Hall and a 50,000-strong parade through Harlem. Garvey inspired later activists such as King.


Reflective essay on use of learning theories in the classroom

Over recent years teaching theories have become more common in the classroom, all in the hope of supporting students and being able to further their knowledge by understanding their abilities and what they need to develop. As a teacher it is important to embed teaching and learning theories in the classroom, so that as teachers we can teach students according to their individual needs.

Throughout my research I will be looking into the key differences between two theories used in classrooms today. I will also be critically analysing the role of the teacher in the lifelong learning sector, by analysing the professional and legislative frameworks, as well as looking for a deeper understanding of classroom management: why it is used and how to manage different classroom environments, such as managing inclusion and how it is supported through different methods.

Overall, I will be linking this to my own teaching at A Mind Apart (A Mind Apart, 2019). Furthermore, I will develop an understanding of interaction within the classroom and why communication between fellow teachers and students is important.

The role of the teacher is traditionally seen as being at the forefront of knowledge. This suggests that the role of the teacher is to pass their knowledge on to their students, known as a ‘chalk and talk’ approach, although this approach is outdated and there are various ways we now teach in the classroom. Walker believes that ‘the modern teacher is facilitator: a person who assists students to learn for themselves’ (Reece & Walker, 2002). I for one cannot say I fully believe in this approach, as all students have individual learning needs, and some may need more help than others. As the teacher, it is important to know the full capability of your learners, so that lessons can be structured to the learners’ needs. It is important for lessons to involve active learning and discussions; these help keep students engaged and motivated during class. Furthermore, it is important not only to know what you want the students to be learning, but also to know, as the teacher, what you are teaching; it is important to be prepared and fully involved in your own lesson before you go into any class. As a teacher I make my students my priority, so I leave any personal issues outside the door in order to give my students the best learning environment they could possibly have. It is equally important to keep updated on your subject specialism; I double-check my knowledge of my subject regularly, and I find that by following this structure my lessons normally run at a smooth pace.

Taking into consideration that the students I teach are vulnerable, there may be minor interruptions. It is not only important that you as the teacher leave your issues at the door, but also that you make sure the room is free from distractions. Most young adults have a lot of situations which they find hard to deal with, which means you as the teacher are there not only to educate but to make the environment safe and relaxing for your students to enjoy learning. As teachers we not only have the responsibility of making sure that teaching takes place, but we also have the responsibilities of exams, qualifications and Ofsted; and as a teacher in the lifelong learning sector it is also vital that you evaluate not only your learners’ knowledge but also yourself as a teacher, so that you are able to improve your teaching strategies and keep up to date.

When assessing yourself and your students it is important not to wait until the end of a term; evaluate throughout the whole term. Small assessments are a good way of doing this. It doesn’t always have to be a paper examination: you can equally do a quiz, ask questions, use various fun games, or even use online games such as Kahoot to help your students consolidate their knowledge. This will not only help you as a teacher understand your students’ abilities, but it will also help your students know what they need to work on for next term.

Alongside the already listed roles and responsibilities of being a teacher in the lifelong learning sector, Ann Gravells explains that,

‘Your main role as a teacher should be to teach your students in a way that actively involves and engages your students during every session’ (Gravells, 2011, p.9).

Gravells’ passion is solely based on helping new teachers gain the knowledge and information they need to become successful in the lifelong learning sector. Gravells has achieved this by writing various textbooks on the lifelong learning sector. In her book ‘Preparing to Teach in the Lifelong Learning Sector’ (Gravells, 2011), she states the importance of the 13 legislation acts. Although I find each of them equally important, I am going to mention the ones I am most likely to use during my teacher training with A Mind Apart.

Safeguarding Vulnerable Groups Act (2006) – Working with young vulnerable adults, I find this act is the one I am most likely to use during my time with A Mind Apart. In summary, the Act explains the following: ‘The ISA will make all decisions about who should be barred from working with children and vulnerable adults.’ (Southglos.gov.uk, 2019)
The Equality Act (2010) – As I will be working with people of different sexes, races and disabilities in any teaching job I encounter, I believe the Equality Act (2010) is fundamental to mention. The Equality Act 2010 covers discrimination under one piece of legislation.
Code of Professional Practice (2008) – This code covers all aspects of the activities we as teachers in the lifelong learning sector may encounter. It is based around behaviours including: professional practice, professional integrity, respect, reasonable care, criminal offence disclosure, and responsibility during institute investigations.

(Gravells, 2011)

Although all the acts are equally important, those are the few I would find myself using regularly. I have listed the others below:

Children Act (2004)
Copyright, Designs and Patents Act (1988)
Data Protection Act (1998)
Education and Skills Act (2008)
Freedom of Information Act (2000)
Health and Safety at Work Act (1974)
Human Rights Act (1998)
Protection of Children Act (POCA) (1999)
The Further Education Teachers’ Qualifications Regulations (2007)

(Gravells, 2011)

Teaching theories are much more common in classrooms today; however, there are three main teaching theories which we as teachers are known for using in the classroom daily and which are generally found to work best: behaviourism, cognitive constructivism, and social constructivism. Taking these theories into consideration, I will compare Skinner’s behaviourist theory with Maslow’s ‘Hierarchy of Needs’ (Maslow, 1987), which was introduced in 1954, and consider how I could use these theories in my teaching as a drama teacher in the lifelong learning sector.

Firstly, looking into behaviourism: it is mostly described as the teacher questioning and the student responding in the way you want them to. Behaviourism is a theory which, in a way, can take control of how the student acts and behaves, if used to its full advantage. Keith Pritchard (Language and Learning, 2019) describes behaviourism as ‘A theory of learning focusing on observable behaviours and discounting any mental activity. Learning is defined simply as the acquisition of a new behaviour.’ (E-Learning and the Science of Instruction, 2019)

An example of how behaviourism works is best demonstrated through the work of Ivan Pavlov (Encyclopaedia Britannica, 2019). Pavlov was a physiologist during the start of the twentieth century who used a method called ‘conditioning’ (Encyclopaedia Britannica, 2019), which is much like the behaviourism theory. During his experiment, Pavlov ‘conditioned’ dogs to salivate when they heard a bell ring: as soon as the dogs heard the bell, they associated it with getting fed. As a result the dogs were behaving exactly how Pavlov wanted them to behave, and so they had successfully been ‘conditioned’. (Encyclopaedia Britannica, 2019)

During Pavlov’s conditioning experiment there were four main stages in the process of classical conditioning. These include:

Acquisition, which is the initial learning;
Extinction, meaning the dogs in Pavlov’s experiment may stop responding if no food is presented to them;
Generalisation, where after learning a response the dog may respond to other stimuli with no further training. For example: if a child falls off a bike and injures themselves, they may be frightened to get back on the bike again. And lastly,
Discrimination, which is the opposite of generalisation; for example, the dog will not respond to another stimulus in the same way as it did to the first.

Pritchard states ‘It involves reinforcing a behaviour by rewarding it’, which is what Pavlov’s dog experiment does. Although rewarding behaviour can be positive, reinforcement can also be negative: bad behaviour can be discouraged by punishment. The key aspects of conditioning are as follows: reinforcement, positive reinforcement, negative reinforcement, and shaping. (Encyclopaedia Britannica, 2019)

Behaviourism is one of the learning theories I use in my teaching today. Working at A Mind Apart (A Mind Apart, 2019), I work with challenging young people; the organisation is a performing arts foundation especially targeted at vulnerable and challenging young people, to help better their lives. Hence, if I use the behaviourism theory well, it can inspire the students to do better. Behaviourism rests on the principle of stimulus and response: it is driven by the teacher, who is responsible for how the student behaves and how the learning is carried out. The theory emerged in the early twentieth century and concentrated on how individuals behave. With respect to the work I do at A Mind Apart as a trainee performing arts teacher, I can relate to behaviourism greatly: every Thursday, when my two-hour class is finished, I take five minutes out of my lesson to award a ‘Star of the Week’. It is an excellent method to encourage students to carry on the way they have been behaving and to make them strive towards something in the future. Furthermore, I have discovered that this theory can work well in any subject specialism and not just performing arts. The behaviourism theory is straightforward, as it depends only on observable behaviour and describes several universal laws of behaviour, and its positive and negative reinforcement strategies can be extremely effective. The students we teach at A Mind Apart often come to us with mental health issues, which is why most of the time they find it hard to focus, or even to learn, in a school environment. We are there to give them an inclusive learning environment and to use the time we have with them so that they can move forward at their own pace and improve their academic and social skills for the future, when they leave us to move on to college or jobs; our work with them will also help them meet new people and gain useful knowledge through the behaviourism teaching theory. Although it is not always easy to encourage someone to think or behave the way you want them to, with time and persistence I have found that this theory can work. It is known that…

‘Positive reinforcement or rewards can include verbal feedback such as ‘That’s great, you’ve produced that document without any errors’ or ‘You’re certainly getting on well with that task’ through to more tangible rewards such as a certificate at the end’… (Gravells, 2019)

Gagne (Mindtools.com, 2019) was an American educational psychologist best known for his nine levels of learning. Regarding these nine levels (Mindtools.com, 2019), I have done some in-depth research into a couple of them, so that I am able to understand the levels and how his theory links to behaviourism.

Create an attention-grabbing introduction.
Inform learner about the objectives.
Stimulate recall of prior knowledge.
Create goal-centred eLearning content.
Provide online guidance.
Practice makes perfect.
Offer timely feedback.
Assess early and often.
Enhance transfer of knowledge by tying it into real world situations and applications.

(Mindtools.com, 2019)

Informing the learner of the objectives is the level I can relate to the most during my lessons. I find it important in many ways that you as the teacher let your students know what they are going to be learning during that specific lesson. This will help them have a better understanding throughout the lesson, and even engage them more from the very start. Linking it to behaviourism, during my lessons I tell my students what I want from them that lesson, and what I expect them, with their individual needs, to be learning or to have learnt by the end of the lesson. If I believe learning has taken place during my lesson, I reward the students with a game of their choice at the end. In their minds they understand they must do as they are asked by the teacher, or the reward of playing a game at the end of the lesson will be forfeited. As Pavlov’s dog experiment shows (E-Learning and the Science of Instruction, 2019), this theory does work, though it can take a lot of work. I have built a great relationship with my students, and most of the time they are willing to work to the best of their ability.

Although Skinner’s behaviourist theory (E-Learning and the Science of Instruction, 2019) is based around manipulation, Maslow’s ‘Hierarchy of Needs’ (Verywell Mind, 2019) holds that behaviour and the way people act are based upon childhood events; therefore it is not always easy to manipulate people into thinking the way you do, as they may have had a completely different upbringing, which will determine how they act. Maslow (Verywell Mind, 2019) feels that if you remove the obstacles that stop a person from achieving, then they will have a better chance of achieving their goals. Maslow argues that there are five different levels of needs which must be met in order to achieve this. The highest level of needs is self-actualisation, which means the person must take full responsibility for themselves; Maslow believes that people can move through to the highest levels if they are in an education that can produce growth. Below are the levels of Maslow’s ‘Hierarchy of Needs’, from lowest to highest (Verywell Mind, 2019):

Physiological needs
Safety and security
Belonging and recognition
Self-esteem
Self-actualisation

(Verywell Mind, 2019)

By way of explanation, the hierarchy lets you know your learners’ needs at different levels during their time in your learning environment. All learners may be at different levels, but should be able to progress on to the next one when they feel comfortable to do so. There may be knockbacks which your learners as individuals will face, but it is the needs that motivate the learning. You may also find that not all learners want to progress through the levels at a given moment; for example, if a learner is happy with the progress they have achieved so far and is content with life, they may want to stay at a certain level.

It is important to use the levels to encourage your learners by working up the table.

Stage 1 of the hierarchy is physiological needs – are your learners comfortable in the environment you are providing? Are they hungry or thirsty? Your learners may even be tired. Any of these factors may stop learning taking place, so it is important to meet all your learners’ physiological needs.

Moving up to safety and security – make your learners feel safe in an environment where they can relax and feel at ease. Are your learners worried about anything in particular? If so, can you help them overcome their worries?

Recognition – do your learners feel like they are part of the group? It is important to help those who don’t feel part of the group to bond with others. Help your learners belong and make them feel welcome. Once recognition is in place your learners will then start to build their self-esteem: are they learning something useful? Although your subject specialism may be second to none, it is important that your passion and drive shine through your teaching. Overall this will result in the highest level, self-actualisation: are your learners achieving what they want to do? Make the sessions interesting and your learners will remember more about the subject in question. (Verywell Mind, 2019)

Furthermore, classroom management comes into force with any learning theory you use whilst teaching. Classroom management is made up of various techniques and skills that we as teachers utilize. Most of today’s classroom management systems are highly effective, as they increase student success. As a trainee teacher, I understand that classroom management can be difficult at times, so I am always researching different methods of managing my class. I don’t believe this comes entirely from methods, though: if your pupils respect you as a teacher and understand what you expect of them whilst in your class, you should be able to manage the class fine. Relating this to my placement at A Mind Apart, my students know what I expect of them, and as a result my classroom management is normally good. Following this, there are a few classroom management techniques I tend to follow:

Demonstrating the behaviour you want to see – eye contact whilst talking, phones away in bags/coats, listening when being spoken to and being respectful of each other; these are all good codes of conduct to follow, and they are my main rules whilst in the classroom.
Celebrating hard work or achievements – when I think a student has done well, we as a group will celebrate their achievement, whether it be in education or outside it; a celebration always helps with classroom management.
Making your sessions engaging and motivating – this is something all of us trainee teachers find difficult within our first year. As I have found out personally over the first couple of months, you have to get to know your learners, understand what they like to do, and learn which activities keep them engaged.
Building strong relationships – I believe having a good relationship with your students is one of the key factors in managing a classroom. It is important to build trust with your students, make them feel safe and let them know they are in a friendly environment.

When it comes to being in a classroom environment, not all students will adapt in the same way, and some may require a different kind of structure to feel included. A key example of this is students with physical disabilities: you may need to adjust the tables or even move them out of the way, or adjust the seating so a student is able to see more clearly. If a student has hearing problems, you might write more down on the board, or even give them a sheet at the start of the lesson which lets them know what you will be discussing and any further information they may need. Not only do you need to take physical disabilities into consideration, but it is also important to cater for those who have behavioural problems; it is important to adjust the space to make your students feel safe whilst in your lesson.

Managing your class also means that sometimes you may have to adjust your teaching methods to suit everyone in your class, and understand that it is important to incorporate cultural values. Whilst in the classroom, or even when giving out homework, you may need to take into consideration that some students, especially those with learning difficulties, may take longer to do the work or may need additional help.

Conclusion

Research has given me a new insight into how many learning theories, teaching strategies and classroom management strategies there are; there are books and websites which help you achieve all the things you need to be able to do in your classroom. Looking back over this essay, I have examined the two learning theories that I am most likely to use.


Synchronous and asynchronous remote learning during the Covid-19 pandemic

Student’s Motivation and Engagement

Motivation plays an important role in student engagement. Saeed and Zyngier (2012) contend that in order to assess student motivation, researchers should also examine engagement in and as part of learning. This suggests that there is a relationship between student motivation and engagement. In support of this relationship, Hufton, Elliot, and Illushin (2002) believe that high levels of engagement indicate high levels of motivation; in other words, when students’ levels of motivation are high, their levels of engagement are also high.

Moreover, Dörnyei (2020) suggests that the concept of motivation is closely associated with engagement, asserting that motivation must be ensured in order to achieve student engagement. He further proposes that any instructional design should aim to keep students engaged, regardless of the learning context, whether traditional or e-learning. In addition, Lewis et al. (2014) reveal that within the online educational environment, students can be motivated by consistently delivering an engaging student-centered experience.

In the context of the Student-Teacher Dialectical Framework embedded with Self-Determination Theory, Reeve (2012) reveals three newly discovered functions of student engagement. First, engagement bridges students’ motivation to highly valued outcomes. Second, student engagement affects the future quality of the learning environment, especially the flow of instruction, its external events, and the teacher’s motivating style. Third, student engagement changes motivation, meaning that engagement causes changes in motivation in the future. This highlights that student motivation is both a cause and a consequence. The assertion that engagement can cause changes in motivation is grounded in the idea that students can take actions to meet their own psychological needs and enhance the quality of their motivation. Further, Reeve (2012) asserts that students can be and are architects of their own motivation, at least to the extent that they can be architects of their own course-related behavioral, emotional, cognitive, and agentic engagement.

Synchronous and Asynchronous Learning

The COVID-19 pandemic brought great disruption to education systems around the world. Schools struggled with a situation that led to the cessation of classes for an extended period of time and other restrictive measures that later impeded the continuation of face-to-face classes. In consequence, there has been a massive change in educational systems around the world as educational institutions strive and put their best efforts into resolving the situation. Many schools addressed the risks and challenges of continuing education amidst the crisis by shifting conventional or traditional learning to distance learning, a form of education supported by technology that is conducted beyond physical space and time (Papadopoulou, 2020). Distance learning is online education that provides opportunities for educational advancement and learning development among learners worldwide. In order to sustain the educational goals of our country, distance learning is a new way of providing quality education, as far as possible, in public and private institutions, especially for those pursuing higher education. Instructional delivery in distance education can be through synchronous or asynchronous modes of learning, in which students can engage and continually attain quality education despite the pandemic situation.

Based on the definition of the Easy LMS Company (2020), synchronous learning refers to a learning event in which a group of participants is engaged in learning at the same time (e.g., a Zoom meeting, web conference, or real-time class), while asynchronous learning refers to the opposite, in which the instructor, the learner, and other participants are not engaged in the learning process at the same time, so there is no real-time interaction (e.g., pre-recorded discussions, self-paced learning, discussion boards). According to an article issued by the University of Waterloo (2020), synchronous learning is a form of live presentation that allows students to ask questions, while asynchronous learning can be a recorded presentation that allows students time to reflect before asking questions. Synchronous learning is a typical meeting of students in a virtual setting with a class discussion where everybody can participate actively. Asynchronous learning is the utilization of a learning platform or portal where teachers or instructors can post and update lessons or activities and students can work at their own pace. These types of class instruction are commonly observed at present, and students have their own preferences when it comes to what works best for them.

In comparing the two types of learning, it is valuable to know the advantages and disadvantages of each in order to see how they will really impact students. Wintemute (2021) noted that synchronous learning offers greater engagement and direct communication, but requires a strong internet connection. On the other hand, asynchronous learning offers schedule flexibility and is more accessible, yet it is less immersive, and the challenges of procrastination, socialization and distraction are present. Students in synchronous learning tend to adapt to the changes of learning with classmates in a virtual setting, while asynchronous learning introduces a new setting where students can choose when to study.

In the middle of the crisis, asynchronous learning can be more favorable than synchronous learning because most of us are struggling in this pandemic. One of the principal advantages of asynchronous online learning is that it offers more flexibility, allowing learners to set their own schedule and work at their own pace (Anthony and Thomas, 2020). In contrast, synchronous learning allows students to feel connected in a virtual world and can give them the assurance of not being isolated while studying, because they can have live interactions and exchange ideas and other valuable inputs, helping the class understand the lessons well with the help of teachers. The main advantages of synchronous learning are that instructors can explain specific concepts when students are struggling and students can get immediate answers to their concerns in the process of learning (Hughes, 2014). According to Delgado (2020), these advantages will not be realised without a pedagogical methodology that considers the technology and its optimization. Furthermore, the quality of learning depends on good planning and design, reviewing and evaluating each type of learning modality.

Synthesis

Motivating students has been a key challenge facing instructors in the context of online learning (Zhao et al., 2016). Motivation is one of the bases for students to do well in their studies: when students are motivated, the outcome is a good mark. In short, motivation is a way to push them to study more and get high grades. According to Zhao (2016), research on motivation in an online learning environment reveals that there are differences in learning motivation among students from different cultural backgrounds. Motivation is described as “the degree of people’s choices and the degree of effort they will put forth” (Keller, 1983). Learning is closely linked to motivation because it is an active process that necessitates intentional and deliberate effort. Educators must build a learning atmosphere in which students are highly encouraged to participate both actively and productively in learning activities if they want to get the most out of school (Stipek, 2002). John Keller (1987) revealed in his study that attention and motivation will not be maintained unless the learner believes the teaching and learning are relevant. According to Zhao (2016), a strong interest in a topic will lead to mastery goals and intrinsic motivation.

Engagement can be perceived in the interaction between students and teachers in online classes. Student engagement, according to Fredricks et al. (2004), is a meta-construct that includes behavioral, affective, and cognitive involvement. While there is substantial research on behavioral (i.e., time on task), emotional (i.e., interest and value), and cognitive engagement (i.e., self-regulation and learning strategies), what distinguishes engagement is its capacity as a multidimensional or “meta”-construct that encompasses all three dimensions.


‘Peak Oil’ – what are the solutions?

The ability to harness energy sources and put them towards a productive use has played a crucial role in economic development worldwide. Easily accessible oil helped to fuel continued expansion in the 20th century. Agricultural production was transformed by motorised farm equipment and petroleum-based fertilisers and pesticides. Cars, trucks and airplanes powered by oil products revolutionised the transportation of people and goods. Oil provides fuel for home heating, electricity production, and to power industrial and agricultural equipment. It also provides the source material for the construction of plastics, many fertilisers and pesticides and many industrial chemicals and materials. It is now difficult to find any product that does not require the use of oil at some point in the production process.

Oil has several advantages over other fossil fuels: it is easily transportable and energy-dense, and when refined it is suitable for a wide variety of uses. Considering the important role that oil plays in our economy, if persistent shortages were to emerge, the economic implications could be enormous. However, there is no consensus as to how seriously the threat of oil resource depletion should be taken. Some warn of a colossal societal collapse in the not-too-distant future, while others argue that technological progress will allow us to shift away from oil before resource depletion becomes an issue.

How much of a problem oil depletion poses depends on the amount of oil that remains accessible at reasonable cost, and how quickly the development of alternatives allows the demand for oil to be reduced. This is what the term ‘peak oil’ captures: the point at which the demand for oil outstrips availability. Demand and supply each evolve over time following a pattern based in historical data, while supply is also constrained by resource availability. There is no mechanism for the market on its own to address concerns about climate change. However, if policies are put in place to price the costs of climate change into the price of fossil fuel consumption, then this should trigger market incentives that should lead efficiently to the desired emission reductions.

A while ago the media was filled with stories about peak oil; it even featured in an episode of The Simpsons. Peak oil, in basic terms, means the point at which we have used all the easy-to-extract oil and are left only with the hard-to-reach oil, which in turn is expensive to extract and refine. There is still a huge amount of debate amongst geologists and petro-industry experts about how much oil is left in the ground. However, since then the idea of a near-term peak in world oil supplies has been discredited. The term that is now used is ‘peak oil demand’: the idea that, because of the proliferation of electric cars and other sources of energy, demand for oil will reach a maximum and start to decline, and indeed consumption levels in some parts of the world have already begun to stagnate.

The other theory that has been produced is that, with supply beginning to exceed demand, there is not enough investment going into future oil exploration and development. Without this investment production will decline; but on this view production is not declining due to supply problems, rather we are moving into an age of oil abundance and any decline in oil production is because of other factors. There has been an explosion of popular literature recently predicting that oil production will peak soon, and that oil shortages will force us into major lifestyle changes in the near future; a good example of this is Heinberg (2003). The point at which oil production reaches a peak and begins to decline permanently has been referred to as ‘Peak Oil’. Predictions for when this will occur range from 2007 to 2025 (Hirsch 2005).
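The standard formalisation behind such peak predictions, not spelled out in the sources cited here but common throughout the peak-oil literature, is Hubbert's logistic model. As a sketch, assume an ultimately recoverable resource $Q_{\max}$ and a growth constant $k$; cumulative production $Q(t)$ then follows a logistic curve, and the production rate $P(t)$ is bell-shaped:

\[
Q(t) = \frac{Q_{\max}}{1 + e^{-k\,(t - t_{\text{peak}})}}, \qquad
P(t) = \frac{dQ}{dt} = k\,Q(t)\left(1 - \frac{Q(t)}{Q_{\max}}\right).
\]

Production peaks at $t_{\text{peak}}$, exactly when half of $Q_{\max}$ has been extracted; in these terms, the disagreement over dates between 2007 and 2025 is a disagreement over the size of $Q_{\max}$ and the value of $k$.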

The Hirsch Report of 2005 concluded that it would take a modern industrial nation such as the UK or the United States at least a full decade to prepare for peak oil. Since 2005 there has been some movement towards solar and wind power together with more electric cars but nothing that deals with the scale of the problem. This has been compounded by Trump coming to power in the United States and deciding to throw the energy transition into reverse, discouraging alternative energy and expanding subsidies for fossil fuels.

What is happening now

Many factors are reported in the news as causing changes in oil prices: supply disruptions from wars and other political factors, from hurricanes or from other random events; changes in demand expectations based on economic reports, financial market events or even weather in areas where heating oil is used; changes in the value of the dollar; reports of inventory levels, and so on. These are all factors that affect the supply of and demand for oil, but they often influence the price of oil before they have any direct impact on the current supply or demand for crude oil. Last year, the main forces pushing the oil market higher were the agreement by OPEC and its partners to lower production and the growth of global demand. This year, an array of factors is pressuring the oil markets: the US sanctions that threaten to cut Iranian oil exports, and falling production in Venezuela. Moreover, there are supply disruptions in Libya, the Canadian tar sands, Norway and Nigeria that add to the uncertainties, as does erratic policymaking in Washington, complete with threats to sell off part of the US strategic reserve, and a weaker dollar. Goldman Sachs continues to expect that Brent crude prices could retest $80 a barrel this year, but probably only late in 2018: “production disruptions and large supply shifts driven by US political decisions are the drivers of this new volatility, with demand remaining robust so far”. Brent crude is expected to trade in the $70-$80 a barrel range in the immediate future.

OPEC

Saudi Arabia and Russia had started to raise production even before the 22 June 2018 meeting with OPEC that sought to address the shrinking global oil supply and rising prices. OPEC had been over-complying with the cuts agreed at the November 2016 meeting, thanks to additional cuts from Saudi Arabia and Venezuela. The 22 June 2018 meeting decided to increase production to more closely reflect the production cut agreement. After the meeting, Saudi Arabia pledged a “measurable” supply boost but gave no specific numbers. Tehran’s oil minister warned his Saudi Arabian counterpart that the 22 June revision to the OPEC supply pact does not give member countries the right to raise oil production above their targets. The Saudis, Russia and several of the Gulf Arab states increased production in June but seem reluctant to expand much further. During the summer months, the Saudis always need to burn more raw crude in their power stations to combat the very high temperatures of their summer.

US Shale oil production

According to the EIA’s latest Drilling Productivity Report, US unconventional oil production is projected to rise by 143,000 b/d in August to 7.470 million b/d. The Permian Basin is seen as far outdistancing other shale basins in monthly growth in August, at 73,000 b/d, to 3.406 million b/d. However, drilled but uncompleted (DUC) wells in the Permian rose by 164 in June to 3,368, one of the largest builds in recent months. Total US DUCs rose by 193 to 7,943 in June. US energy companies last week cut oil rigs by the most in a week since March, as the rate of growth has slowed over the past month or so with recent declines in crude prices. Included with otherwise optimistic forecasts for US shale oil was the caveat that the DUC production figures are sketchy, as current information is difficult for the EIA to obtain, with little specific data being provided to Washington by E&Ps or midstream operators. Given all the publicity surrounding constraints on moving oil from the Permian to market, the EIA admits that it “may overestimate production due to constraints.”

The Middle East and North Africa

Iran

Iran’s supreme leader, Ayatollah Ali Khamenei, called on state bodies to support the government of President Hassan Rouhani in fighting US economic sanctions. The likely return of US economic sanctions has triggered a rapid fall in Iran’s currency, protests by bazaar traders usually loyal to the Islamist rulers, and a public outcry over alleged price gouging and profiteering. The speech to members of Rouhani’s cabinet is clearly aimed at the conservative elements in the government who have been critical of the President and his policies of cooperation with the West, and is a call for unity at a time that seems likely to be one of great economic hardship. Protests have spread to more than 80 Iranian cities and towns. At least 25 people died in the unrest, the most significant expression of public discontent in years, and the protests took on a rare political dimension, with a growing number of people calling on supreme leader Khamenei to step down. Although there is much debate over the effectiveness of the impending US sanctions, some analysts are saying that Iran’s oil exports could fall by as much as two-thirds by the end of the year, putting oil markets under massive strain amid supply outages elsewhere in the world. Some of the worst-case scenarios forecast a drop to only 700,000 b/d, with most of Tehran’s exports going to China, and smaller shares going to India, Turkey and other buyers with waivers. China, the biggest importer of Iranian oil at 650,000 b/d according to Reuters trade flow data, is likely to ignore US sanctions.

Iraq

Iraq’s future is again in trouble as protests erupt across the country. These protests began in southern Iraq after the government was accused of doing nothing to alleviate a deepening unemployment crisis, water and electricity shortages and rampant corruption. The demonstrations spread to major population centres including Najaf and Amarah, and now discontent is stirring in Baghdad. The government has been quick to promise more funding and investment in the development of chronically underdeveloped cities, but this has done little to quell public anger. Iraqis have heard these promises countless times before, and with a water and energy crisis striking in the middle of scorching summer heat, people are less inclined to believe what their government says. The civil unrest has begun to diminish in southern Iraq, leaving the country’s oil sector shaken but secure, though protesters have vowed to return. Operations at several oil fields have been affected as international oil companies and service companies have temporarily withdrawn staff from some areas that saw protests. The government claims that the production and export of oil remained steady during the protests. With Iran refusing to provide for Iraq’s electricity needs, Baghdad has now also turned to Saudi Arabia to see if its southern Arab neighbour can help alleviate the crises it faces.

Saudi Arabia

The Saudi Aramco IPO has been touted for the past two years as the centerpiece of an ambitious economic reform program driven by Crown Prince Mohammed bin Salman to diversify the Saudi economy beyond oil. Saudi Arabia expects its crude exports to drop by roughly 100,000 b/d in August as the kingdom tries to ensure it does not push oil onto the market beyond its customers’ needs.

Libya

Libya reopened its eastern oil ports and started to ramp up production from 650,000 to 700,000 b/d; output is expected to rise further after shipments resume at the eastern ports that re-opened following a political standoff.

China

China’s economy expanded by 6.7 percent, its slowest pace since 2016. The pace of annual expansion announced is still above the government’s target of “about 6.5 percent” growth for the year, but the slowdown comes as Beijing’s trade war with the US adds to headwinds from slowing domestic demand. Gross domestic product had grown at 6.8 percent in each of the previous three quarters. Higher oil prices play a role in the slowing of demand, but the main factor is higher taxes on independent Chinese refiners, which are already cutting into the refining margins and profits of the ‘teapots’, which have grown over the past three years to account for around a fifth of China’s total crude imports. Under the stricter tax regulations and reporting mechanisms effective 1 March, however, the teapots can no longer avoid paying a consumption tax on refined oil product sales, as they did in the past three years, and their refining operations are becoming less profitable.

Russia

Russian oil production rose by around 100,000 b/d from May. From 1-15 July the country’s average oil output was 11.215 million b/d, an increase of 245,000 b/d from May’s production. Amid growing speculation that President Trump will attempt to weaken US sanctions on Russia’s oil sector, US congressional leaders are pushing legislation to strengthen sanctions on Russian export pipelines and joint ventures with Russian oil and natural gas companies. Ukraine and Russia said they would hold further European Union-mediated talks on supplying Europe with Russian gas, a key first step towards renewing Ukraine’s gas transit contract, which expires at the end of next year.

Venezuela

Venezuela’s Oil Minister Manuel Quevedo has been talking about plans to raise the country’s crude oil production in the second half of the year. However, no one else thinks or claims that Venezuela could soon reverse its steep production decline, which has seen it losing more than 40,000 b/d of oil production every month for several months now. According to OPEC’s secondary sources in the latest Monthly Oil Market Report, Venezuela’s crude oil production dropped in June by 47,500 b/d from May, to average 1.340 million b/d. Amid a collapsing regime, widespread hunger and medical shortages, President Nicolas Maduro continues to grant generous oil subsidies to Cuba. It is believed that Venezuela continues to supply Cuba with around 55,000 barrels of oil per day, costing the nation around $1.2 billion per year.

Alternatives to Oil

In its search for secure, sustainable and affordable supplies of energy, the world is turning its attention to unconventional energy resources. Shale gas is one of them. It has turned the North American gas market upside down and is making significant strides in other regions. The emergence of shale gas as a potentially major energy source could have serious strategic implications for geopolitics and the energy industry.

Uranium and Nuclear

The nuclear industry has a relatively short history: the first nuclear reactor was commissioned in 1942. Uranium is the main source of fuel for nuclear reactors. Worldwide output of uranium has recently been on the rise after a long period of declining production. Uranium resources have grown by 12.5% since 2008, and they are sufficient for over 100 years of supply based on current requirements.

Total nuclear electricity production grew during the past two decades and reached an annual output of about 2,600 TWh by the mid-2000s, although three major nuclear accidents have slowed down or even reversed its growth in some countries. The nuclear share of total global electricity production reached its peak of 17% in the late 1980s, but it has since fallen, dropping to 13.5% in 2012. In absolute terms, nuclear output remains broadly at the same level as before, but its relative share in power generation has decreased, mainly due to the Fukushima nuclear accident.

Japan used to be a country with a high share of nuclear power (30%) in its electricity mix and high production volumes. Today, Japan has only two of its 54 reactors in operation. The rising costs of nuclear installations and the lengthy approval times required for new construction have also taken a toll on the nuclear industry. The slowdown has not been global, however: new entrants, primarily the rapidly developing economies of the Middle East and Asia, are going ahead with their plans to establish a nuclear industry.

Hydro Power

Hydro power provides a significant amount of energy throughout the world and is present in more than 100 countries, contributing approximately 15% of global electricity production. The top five largest markets for hydro power in terms of capacity are Brazil, Canada, China, Russia and the United States of America. China significantly exceeds the others, representing 24% of global installed capacity. In several other countries, including Iceland, Nepal and Mozambique, hydro power accounts for over 50% of all electricity generation. During 2012, an estimated 27-30 GW of new hydro power and 2-3 GW of pumped storage capacity was commissioned.

In many cases, the growth in hydro power was facilitated by generous renewable energy support policies and CO2 penalties. Over the past two decades the total global installed hydro power capacity has increased by 55%, while actual generation has increased by only 21%. Since the last survey, global installed hydro power capacity has increased by 8%, but the total electricity produced has dropped by 14%, mainly due to water shortages.

Solar PV

Solar energy is the most abundant energy resource, and it is available for use in its direct (solar radiation) and indirect (wind, biomass, hydro, ocean etc.) forms. About 60% of the solar energy intercepted by the Earth reaches the surface. Even if only 0.1% of this energy could be converted at an efficiency of 10%, it would be about four times larger than the world’s total electricity generating capacity of about 5,000 GW. The statistics on solar PV installations are patchy and inconsistent. The table below presents the values for 2011, but comparable values for 1993 are not available.
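As a rough order-of-magnitude check on that claim (a sketch assuming the commonly quoted figure of about 174 PW of solar power intercepted by the Earth, a number not given in the text):

\[ P \approx 174\ \text{PW} \times 0.001 \times 0.10 = 17.4\ \text{TW} = 17{,}400\ \text{GW} \approx 3.5 \times 5{,}000\ \text{GW} \]

This is broadly consistent with the factor of four quoted above; taking only the ~60% that reaches the surface gives roughly a factor of two, so the claim should be read as an order-of-magnitude estimate.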

The use of solar energy is growing strongly around the world, in part due to rapidly declining solar panel manufacturing costs. For instance, between 2008 and 2011 PV capacity increased in the USA from 1,168 MW to 5,171 MW, and in Germany from 5,877 MW to 25,039 MW. Anticipated changes in national and regional legislation regarding support for renewables are likely to moderate this growth.

Conclusion

The rapid consumption of fossil fuels has contributed to environmental damage. Burning these fuels, including oil, releases chemicals that contribute to smog, acid rain and mercury contamination, and the carbon dioxide emissions from fossil fuel consumption are the main driver of climate change, the effects of which are likely to become more and more severe as temperatures rise. The depletion of oil and other fossil resources also leaves less available to future generations and increases the likelihood of price spikes if demand outpaces supply.

One of the most intriguing conclusions from this idea is that a new “age of abundance” could alter the behavior of oil producers. In the past some countries (notably OPEC members) restrained output, husbanding resources for the future and betting that scarcity would increase the value of their holdings over time. However, if a peak in demand looms just over the horizon, oil producers could rush to maximize their production in order to extract as much value from their reserves while they can. Saudi oil minister Sheikh Ahmed Zaki Yamani was famously quoted as saying, “the Stone Age didn’t end for lack of stone, and the oil age will end long before the world runs out of oil.” This quote reflects the view that the development of new technologies will lead to a shift away from oil consumption before oil resources are fully depleted. The stakes are high: nine of the ten recessions between 1946 and 2005 were preceded by spikes in oil prices, and the latest recession followed the same pattern.

Extending the life of existing oil fields, let alone investing in new ones, will require large volumes of capital, but that need might be met with skepticism from wary investors once demand begins to peak. It will be difficult to attract investment to a shrinking industry, particularly if margins continue to be squeezed. Peak demand should be an alarming prospect for OPEC, Russia and the other major oil producing countries: all oil producers will find themselves fighting more aggressively for a shrinking market.

The precise date at which oil demand hits a high point and then enters into decline has been the subject of much debate, and the topic has attracted a lot of interest in the last few years. Consumption levels in some parts of the world have already begun to stagnate, and more and more automakers have begun to ratchet up their plans for electric vehicles. But fixating on the exact date the world will hit peak demand misses the point. The focus shouldn’t be on when oil demand peaks, but on the fact that the peak is coming. In other words, oil will become less important in fueling the global transportation system, which will have far-reaching consequences for oil producers and consumers alike. The implications of a looming peak in oil consumption are massive. Without an economic transformation, or at least serious diversification, oil-producing nations that depend on oil revenues for both economic growth and public spending face an uncertain future.


Water purification and addition of nutrients as disaster relief

1. Introduction

1.1 Natural Disasters

Natural disasters are naturally occurring events that threaten human lives and cause damage to property. Examples include hurricanes, tsunamis, earthquakes, volcanic eruptions, typhoons, droughts, tropical cyclones and floods (Pask et al., 2013). They are inevitable and can often have calamitous consequences, such as water contamination and malnutrition, especially for developing countries like the Philippines, which is particularly prone to typhoons and earthquakes. (Figure 1)

Figure 1: The global distribution of natural disaster risk (The United Nations University World Risk Index 2014)

1.1.1 Impacts of Natural Disaster

The globe faces impacts of natural disasters on human lives and economies on an astronomical scale. According to a 2014 report by the United Nations, since 1994, 4.4 billion people have been affected by disasters, which claimed 1.3 million lives and cost US$2 trillion in economic losses. Developing countries are likely to suffer a greater impact from natural disasters than developed countries, as disasters can swell the number of people living below the poverty line, in some cases by more than 50 percent. Moreover, it is expected that by 2030, up to 325 million extremely poor people will live in the 49 most hazard-prone countries (Child Fund International, 2013). Hence, there is a pressing need for disaster relief to save the lives of those affected, especially in developing countries such as the Philippines.

1.1.2 Lack of access to clean water

After a natural disaster strikes, severe problems such as water contamination follow.

Natural disasters also respect neither national borders nor socioeconomic status (Malam, 2012). For example, Hurricane Katrina, which struck New Orleans, a developed city, destroyed 1,200 water systems, and 50% of existing treatment plants needed rebuilding afterwards (Copeland, 2005). This left the citizens of New Orleans with a shortage of drinking water. Furthermore, after the 7.0 magnitude earthquake that struck Haiti, a developing country, in 2010, there was no plumbing left underneath Port-au-Prince, and many of the water tanks and toilets were destroyed (Valcárcel, 2010). These are just some of the many scenarios that can bring about water scarcity.

The lack of preparedness to prevent the destruction caused by natural disasters and the lack of readiness to respond are claimed to be the two major reasons for their catastrophic results (Malam, 2012). The aftermath of destroyed water systems and a lack of water thus affects all geographical locations, regardless of their socioeconomic status.

1.2 Disaster relief

Disaster relief organisations such as The American Red Cross help countries that are recovering from natural disasters by providing these countries with the basic necessities.

After a disaster, the Red Cross works with community partners to provide hot meals, snacks and water to shelters, or from Red Cross emergency response vehicles in affected neighborhoods (American Red Cross, n.d.).

The International Committee of the Red Cross (ICRC) reported that its staff had set up mobile water treatment units. These were used to distribute water to around 28,000 people in towns along the southern and eastern coasts of the island of Samar, and to other badly-hit areas including Basey, Marabut and Guiuan (ICRC, n.d.).

Figure 2: Children seeking help after a disaster

Figure 3: Massive Coastal Destruction from Typhoon Haiyan

1.3 Target audience: Tacloban, Leyte, The Philippines

As seen in Figures 4 and 5, Tacloban is the provincial capital of Leyte, a province in the Visayas region of the Philippines. It is the most populous city in the Eastern Visayas region, with a total population of 242,089 people as of August 2015 (Census of Population, 2015).

Figure 4: Location of Tacloban in the Philippines (Google Maps)

Figure 5: Location of Tacloban in the Eastern Visayas region (Google Maps)

More than 20 typhoons strike the Philippines each year (Lowe, 2016), and the country’s location on the Pacific Ring of Fire (Figure 6) also exposes it to frequent earthquakes and volcanic activity.

Figure 6: The Philippines’ position on the Pacific Ring of Fire (Mindoro Resources Ltd., 2004)

In 2013, Tacloban was struck by Super Typhoon Haiyan, locally known as ‘Yolanda’. The Philippine Star, a local digital news organisation, reported more than 30,000 deaths from that disaster alone (Avila, 2014). Typhoon Haiyan left Tacloban in shambles, and with a five-figure death toll the city required a great deal of aid to restore the affected area.

1.4 Existing measures and their gaps

Initially, the government’s response to the disaster was slow. For the first three days after the typhoon hit, there was no running water, and dead bodies were found in wells. In desperation for drinking water, some people even smashed the pipes of the Leyte Metropolitan Water District. Even when drinking water was restored, however, it was contaminated with coliform bacteria. Many people became ill, and one baby died of diarrhoea (Dizon, 2014).

The gaps were therefore the government’s long response time (Gap 1) and the contamination that came with the restored water supply (Gap 2). People’s productivity was affected, and hence there is an urgent need for a better solution to the problem of late restoration of clean water.

1.5 Reasons for Choice of Topic

The problem is severe, since ingestion of contaminated water is the leading cause of infant mortality and illness in children (International Action, n.d.) and more than 50% of the population is undernourished (World Food Programme, 2016). Much support and humanitarian aid has been given by organisations such as the World Food Programme and The Water Project, yet more effort is needed to lower the death rates, which shows the persistence of the problem. It is also an urgent issue, as malnourishment often leads to death and children’s lives are threatened.

Furthermore, 8 out of 10 of the world’s cities most at risk from natural disasters are in the Philippines (see Figure _). The magnitude of the problem is thus huge, given the high frequency of natural disasters: while people are still recovering from one disaster, another hits them, worsening an already severe situation.

Figure _ Top 5 Countries of World Risk Index of Natural Disasters 2016 (Source: UN)

WWF CEO Jose Maria Lorenzo Tan said that “on-site desalination or purification” would be a cheaper and better solution to the lack of water than shipping in bottled water for a long period of time (Dizon, 2014). Instead of relying on external humanitarian aid, which might leave the country further indebted, relying on its own water supply can cushion the high expense of rebuilding. Hence, there is a need for a water purification plant that provides potable water immediately when a natural disaster strikes. The plant will also have to provide cheap and affordable water until water systems are restored to normal.

Living and growing up in Singapore, we have never experienced natural disasters first hand. We can only imagine the catastrophic destruction and suffering that accompany them. With the “Epione Solar Still” (named after Epione, the Greek goddess of the soothing of pain), we hope to help many Filipinos access clean and drinkable water, especially children, who clearly do not deserve to experience such tragedy and suffering.

1.6 Case study: Disaster relief in Japan

Located at the Pacific Ring of Fire, Japan is vulnerable to natural disasters such as earthquakes, tsunami, volcanic eruptions, typhoons, floods and mudslides due to its geographical location and natural conditions. (Japan Times, 2016)

In 2011, a magnitude 9.0 earthquake struck off north-eastern Japan, causing a tsunami that devastated the northeast coast and killed 19,000 people. It was the worst earthquake in Japan’s recorded history, and it damaged the Fukushima nuclear plant, causing nuclear leakage and a volume of contaminated water that currently exceeds 760,000 tonnes (The Telegraph, 2016). The earthquake and tsunami caused the nuclear power plant to fail, and radiation leaked into the ocean and escaped into the atmosphere. Many evacuees have still not returned to their homes, and, as of January 2014, the Fukushima nuclear plant still posed a threat, according to status reports by the International Atomic Energy Agency (Seattle PI, n.d.).

Disaster Relief

In the case of major disasters, the Japan International Cooperation Agency (JICA) deploys Japan Disaster Relief (JDR) teams, consisting of rescue, medical, expert and infectious disease response teams as well as the Self-Defence Force (SDF), to provide relief aid to affected countries. It provides emergency relief supplies such as blankets, tents and water purifiers, and some supplies are stockpiled as reserves close to disaster-prone areas in case disaster strikes and emergency relief is needed (JICA).

For example, during the Kumamoto earthquake in 2016, 1,600 soldiers joined the relief and rescue efforts. Troops delivered blankets and adult diapers to those in shelters. With water service cut off in some areas, residents hauled water from local offices to their homes to flush toilets (The Guardian, 2016).

Solution to Fukushima water contamination

Facilities are used to treat the contaminated water. The main one is the Multi-nuclide Removal Facility (ALPS) (Figure _), which can remove most radioactive materials except tritium (TEPCO, n.d.).

Figure _: Structure of Multi-nuclide Removal Facility (ALPS) (TEPCO, n.d)

1.7 Impacts of Case Study

The treatment of contaminated water has been very effective: by April 2015, more than 80% of the contaminated water stored in tanks had been decontaminated, and more than 90% of radioactive materials had been removed during the decontamination process (METI, 2014).

1.8 Lessons Learnt

Destruction caused by natural disasters results in a lack of access to clean and drinkable water. (L1)

Advancements in water purification technology can help provide potable water for the masses. (L2)

Natural disasters weaken immune systems, making people more vulnerable to disease. (L3)

1.9 Source of inspiration

Sunny Clean Water’s solar still is made with cheap alternative materials, which helps provide more affordable water for underprivileged countries.

A fibre-rich paper is coated with carbon black (a cheap powder left over after the incomplete combustion of oil or tar) and layered over each section of a block of polystyrene foam, which is cut into 25 equal sections. The foam floats on the untreated water, acting as an insulating barrier to prevent sunlight from heating up too much of the water below. The paper then wicks water upward, wetting the entire top surface of each section. A clear acrylic housing sits atop the styrofoam. (Figure _)

Figure _: How fibre-rich paper coated with carbon black is adapted into the solar still (AAAS, 2017)

It is estimated that the materials needed to build it cost roughly $1.60 per square meter, about 125 times cheaper than the $200 per square meter of commercially available systems that rely on expensive lenses to concentrate the sun’s rays to expedite evaporation.

1.10 Application of Lessons Learnt

The table below maps the gaps in current measures to learning points, their applications to the project, and the key features of our proposal.

Gap 1: Developing countries lack the technology and resources to treat their water and provide basic necessities to their people. Learning point: Advanced technology can provide potable water readily. (L2) Application to project: Need for technology to purify contaminated water. Key feature in proposal: Solar distillation plant.

Gap 2: Even with purification of water, the problem of malnutrition, which is worsened by natural disasters, remains unsolved. Learning point: A solution is needed to provide vitamins to young children to boost immunity and lower vulnerability to diseases and illnesses. (L3) Application to project: Need for nutrient-rich water. Key feature in proposal: Nutrients infused into the water using the concept of osmosis.

Gap 3: Even with the help of external organisations, less than 50% of households have access to safe water. Learning point: Clean water is still inaccessible to some people. (L1) Application to project: Increase accessibility to water. Key feature in proposal: Evaporate seawater (abundant around the Philippines) in a solar still (short-term solution).

Figure _: Table of application of lessons learnt

2. Project Aim and Objectives

2.1 Aim

Taking into account the gaps that exist in current measures to improve water purification and reduce water pollution and malnutrition, our project proposes a solution to provide Filipinos in Tacloban, Leyte with clean water through our product, the Epione Solar Still. The product makes use of a natural process (the evaporation of water) and adapts the technology and mechanism behind the kidney dialysis machine to provide Filipinos with nutrient-enriched water without polluting their environment. The product will be located near water bodies where seawater is abundant, to act as a source of clean water for the Filipinos.

2.2 Objectives of Project

To operationalise our aim, our objectives are to:

Design “Epione Solar Still”

Conduct interviews with:

Masoud Arfand, from the Department of Mechanical Engineering, Najafabad Branch, Islamic Azad University, to determine the projected percentage of water that the Epione Solar Still can produce and the number of people it can provide for.

Qiaoqiang Gan, electrical engineer from Sunny Clean Water (his team developed the technique of coating fibre-rich paper with carbon black to make solar-still water purification faster and more cost-friendly), to determine the amount of time the Epione Solar Still needs to produce sufficient water to support Filipinos in Tacloban, Leyte, as the Epione Solar Still is a short-term disaster relief solution.

Dr Nathan Feldman, Co-Founder of HopeGel, EB Performance, LLC, to determine the impact of nutrient-infused water in boosting the immunity of victims of natural disasters (Project Medishare, n.d.).

Review the mechanism and efficiency of using a solar still to source clean and nutrient-rich water for Filipinos.

3. Project Proposal

We propose investment in the purification of contaminated water as a form of disaster relief, which can provide Filipinos with nutrients to boost their immunity in times of disaster and limit the number of deaths caused by the consumption of contaminated water during a crisis.

3.1 Overview of Project

Our group proposes to build a solar distillation plant (Figure _) within a safe semi-underground bunker. The bunker will contain a generator to power certain parts of the plant. Seawater will be fed into the still via underground pipes from the sea surrounding the southern part of Tacloban. The purified water produced by the distillation process will be infused with nutrients to boost the immunity of disaster victims once consumed. Hence, our distillation plant will produce water that is not only potable but also nutritious, boosting victims’ immunity in times of natural calamity. The potable water will then be distributed in drums and shared among Filipinos.

Figure _: Mechanism of our solar distillation plant, Epione Solar Still

3.2 Phase 1: Water Purification System

3.2.1 Water extraction from the sea

The still is located near the sea, where seawater is abundant. Seawater is extracted from the low-flow open sea (Figure _) and then pumped into our solar still.

Figure _: Intake structure of seawater (Seven Seas Water Corporation, n.d.)

3.2.2 Purification of Seawater

Solar energy heats up the water in the solar still. The water evaporates, and condenses on the cooler glass surface of the ceiling of the still. Pure droplets of water slide down the glass and into the collecting basin, where nutrients will diffuse into the water.

Figure 6: Mechanism of Epione Solar Still
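For a sense of scale, a rough yield estimate for a simple basin still can be made from an energy balance (a sketch using assumed round numbers: daily insolation of about 5 kWh/m², an overall still efficiency of about 40%, and water's latent heat of vaporisation of 2.26 MJ/kg; none of these figures come from the proposal itself):

\[ m \approx \frac{\eta \, G}{L_v} = \frac{0.40 \times (5 \times 3.6\ \text{MJ/m}^2)}{2.26\ \text{MJ/kg}} \approx 3.2\ \text{kg/m}^2\ \text{per day} \]

Each square metre of still surface would therefore yield roughly 3 litres of distilled water per day, a useful figure for sizing the plant against the drum-based distribution described in Phase 3.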

3.3 Phase 2: Nutrient Infuser

Using the concept of reverse osmosis (Figure _), a semi-permeable membrane separates the nutrients from the newly purified water, allowing the vitamins and minerals to diffuse into the condensed water. The nutrient-infused water will provide nourishment, making the victims of a natural disaster less susceptible to illnesses and diseases thanks to a stronger immune system. This will help the Filipinos in Tacloban, Leyte get back on their feet quickly after a natural disaster and minimise the death toll as much as possible.

Figure _: How does reverse osmosis work (Water Filter System Guide, n.d.)
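For reference, the membrane process invoked here is governed by osmotic pressure, which the van 't Hoff relation approximates (a standard textbook formula, not taken from the proposal):

\[ \pi = iMRT \]

where π is the osmotic pressure, i the van 't Hoff factor, M the molar concentration of solutes, R the gas constant and T the absolute temperature. In reverse osmosis an applied pressure greater than π forces water against its concentration gradient; in the nutrient infuser described above, by contrast, the vitamins and minerals diffuse down their own concentration gradient into the purified water, a process closer to dialysis, which fits the kidney dialysis mechanism cited in the project aim.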

Vitamin A: Helps to form and maintain healthy teeth, bones, soft tissue, mucous membranes and skin. Upper tolerable limit: 10,000 IU/day.

Vitamin B3 (Niacin): Helps maintain healthy skin and nerves; has cholesterol-lowering effects. Upper tolerable limit: 35 mg/day.

Vitamin C (Ascorbic acid, an antioxidant): Promotes healthy teeth and gums; helps the body absorb iron and maintain healthy tissue; promotes wound healing. Upper tolerable limit: 2,000 mg/day.

Vitamin D (Also known as the “sunshine vitamin”, made by the body after sun exposure): Helps the body absorb calcium; helps maintain proper blood levels of calcium and phosphorus. Upper tolerable limit: 1,000 micrograms/day (4,000 IU).

Vitamin E (Also known as tocopherol, an antioxidant): Plays a role in the formation of red blood cells. Upper tolerable limit: 1,500 IU/day.

Figure _: Table of functions and upper tolerable limits (the highest amounts that can be consumed without health risks) of the nutrients that will be diffused into our Epione water (WebMD, LLC, 2016)

3.4 Phase 3: Distribution of water to households in Tacloban, Leyte

Potable water will be collected in drums (Figure _) with a capacity of 100 litres each; since the average intake of water is 2 litres per person per day, one drum can supply 50 people for a day. These drums will be distributed to the tent cities in Tacloban, Leyte, our targeted area, should a natural disaster befall it. Locals will thus have potable water within reach, which is crucial for their survival in times of natural calamity.

Figure _: Rain barrels will be used to store the purified and nutrient-infused water (Your Easy Garden, n.d.)
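The drum sizing works out as a simple person-day calculation (arithmetic only, using the figures quoted above):

\[ \frac{100\ \text{L}}{2\ \text{L/person/day}} = 50\ \text{person-days} \]

Each 100-litre drum therefore covers 50 people for one day, so the plan implies daily redistribution, or one drum per 50 residents per day.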

3.5 Stakeholders

3.5.1 The HopeGel Project

HopeGel is a nutrient- and calorie-dense protein gel designed to aid children suffering from malnutrition caused by severe food insecurity brought on by droughts (Glenroy Inc., 2014). HopeGel has been distributed in Haiti, where malnutrition is the number one cause of death among children under five, due mainly to the high frequency of natural disasters that have devastated the now impoverished state (Figure _). Implementing the Epione Solar Still would help the company achieve its objective of addressing the global issue of severe acute malnutrition in children, as most victims of natural disasters lack the nourishment they need (HopeGel, n.d.).

Figure _: HopeGel, a packaged nutrient and calorie-dense protein gel (Butschli, HopeGel, n.d.)

3.5.2 Action Against Hunger (AAH)

Action Against Hunger is a relief organisation that develops and carries out programmes on nutrition, health, water and food security for countries in need (Action Against Hunger, n.d.) (Figure _). AAH also runs disaster preparedness programmes that aim to anticipate and prevent humanitarian crises (GlobalCorps, n.d.). With 40 years of expertise, having helped 14.9 million people across more than 45 countries, AAH is no stranger to humanitarian crises. Implementing the Epione Solar Still would help the organisation achieve its aim of saving lives by extending help to Filipinos in Tacloban, Leyte who are deprived of a basic need by disaster-related water contamination, through purifying seawater and infusing it with nutrients.

Figure _: Aims and Missions of Action Against Hunger (AACH, n.d.)


Analyse the use of ICTs in a humanitarian emergency

INTRODUCTION

The intention of this essay is to analyse the use of ICTs in a humanitarian emergency. The specific case study discussed is “Multi-level functionality of social media in the aftermath of the Great East Japan Earthquake” by Jung, J. and Moro, M. (2014). The report emphasises that social media networks like Twitter and Facebook can be used to spread and gather important information in emergency situations, rather than serving solely as social network platforms. ICTs have changed the way humans gather information during disasters, and social media, especially Twitter, became an important source of information in these disasters.

Literature Review

Case studies of using ICTs in a humanitarian emergency can take either a technically rational perspective or a socially embedded perspective. A technically rational perspective concerns what to do and how to achieve a given purpose; it is a prescription for design and action. A socially embedded perspective focuses on the particular case, where the process of work is shaped by culture, place and human nature. In this essay, we examine different humanitarian disaster cases in which ICTs played a vital role, to see whether each author adopts a technically rational or a socially embedded perspective.

In the article “Learning from crisis: Lessons in human and information infrastructure from the World Trade Centre response” (Dawes, Cresswell et al. 2004), the authors adopt a technically rational perspective. 9/11 was a very big incident, and no one was ready to deal with an attack of this size, but as soon as it happened procedures started changing rapidly. Government, NGOs and disaster response units started learning and produced new prescriptions that can be used universally and in disasters of any size. For example, the main communication infrastructure, supplied by Verizon, was damaged; there were different communication suppliers offering services, but they all relied on the physical infrastructure supplied by Verizon. VoIP was therefore used for communication between government officials and in the EOC building. There were three main areas where problems were found and new procedures adopted in response to the disaster: technology, information, and the inter-layered relationships between NGOs, government and the private sector (Dawes, Cresswell et al. 2004).

In the article “Challenges in humanitarian information management and exchange: Evidence from Haiti” (Altay, Labonte 2014), the authors adopt a socially embedded perspective. The Haiti earthquake was one of the biggest disasters of its kind, killing hundreds of thousands of people and displacing at least 2 million. Around 2,000 organisations went in to help, but there was no coordination between NGOs and the government for the humanitarian response. Organisations did not draw on local knowledge; they assumed that no data was available. All the organisations had different standards and ways of working, so no one followed any common prescription. The technical side of HIME (humanitarian information management and exchange) was not working because the members of the humanitarian relief effort were not sharing humanitarian information (Altay, Labonte 2014).

In the article “Information systems innovation in the humanitarian sector” (Tusiime, Byrne 2011), published in Information Technologies and International Development, the authors adopt a socially embedded perspective. Local staff were hired who had no prior experience or knowledge of working with such technology, which slowed down the process of implementing the new technology. Staff wanted to learn and use the new system, but the changes were made at such a pace that staff became overworked and stressed, and lost interest in the innovation. Management decided to adopt COMPAS as the new system without realising that it was not completely functional and still had many issues, but went ahead with it anyway. When staff started using it, found the problems, and were not given enough technical support, they had no choice but to go back to the old ways of doing things (Tusiime, Byrne 2011). The whole process was affected by how work is done in a specific place and by people’s behaviour.

In the article “Multi-level functionality of social media in the aftermath of the Great East Japan Earthquake” (Jung, Moro 2014), the authors adopt a technically rational perspective: in any future humanitarian disaster, social media can be used as an effective communication method in conjunction with mass media. After the disaster, Twitter was used more as a means of spreading and gathering information than as a social media platform.

In the article “Information flow impediments in disaster relief supply chains” (Day, Junglas et al. 2009), published in the Journal of the Association for Information Systems, 10(8), pp. 637-660, the authors propose the development of IS for information sharing, based on Hurricane Katrina. They adopt a TR perspective because the need for IS development to support information flow within and outside an organisation is essential. Such an IS would help to manage complex supply chains, since supply chain management in a disaster situation is far more challenging than traditional supply chain management. A supply chain management IS should be able to cater for all types of dynamic information, suggest Day, Junglas and Silva (2009).

Case Study Description:

On 11 March 2011, a magnitude 9.0 earthquake hit the north-eastern part of Japan, followed by a tsunami. Thousands of people lost their lives and infrastructure was completely damaged in the area (Jung, Moro 2014). The tsunami wiped two towns off the map, and the coastal maps had to be redrawn (Acar, Muraki 2011). On the same day as the earthquake, the cooling system in nuclear reactor no. 1 at Fukushima failed, and because of that nuclear accident the Japanese government declared a nuclear emergency. On the evening of the earthquake the government issued an evacuation order for a 3 km area around the reactor (Jung, Moro 2014). On 12 March a hydrogen explosion occurred in the reactor because of the failed cooling system, followed by another explosion two days later on 14 March. The evacuation zone was initially 3 km but was increased to 20 km to avoid exposure to nuclear radiation.

This was one of the biggest nuclear disasters the country had faced, so it was hard for the government to assess its scale. Government officials had not come across this kind of situation before and could not estimate the damage caused by the incident. They added to public confusion with unreliable information: they declared the accident level 5 on the international nuclear scale but later changed it to 7, the highest level on that scale. Media reporting also confused the public, and the combination of contradictory information from government and media increased the level of confusion.

In a disaster, mass media is normally the main source of information: broadcasters suspend their normal programming and devote most of their airtime to the disaster to keep people updated. Mass media usually provides very reliable information in a humanitarian disaster, but in Japan media outlets contradicted each other, and international media contradicted local media as well as local government, so people started losing faith in the mass media and turned to other sources of information. A second reason was that mass media was the traditional way of gathering information, and with changes in technology people had started using mobile phones and the internet. A third reason was that the broadcast infrastructure was damaged and many people could not access television services, so they depended on video streaming sites such as Ustream and YouTube. People began using Twitter on a large scale to spread and gather news: the number of Twitter users increased by 30 percent within the first week of the disaster, and 60 percent of Twitter users found it useful for gathering or spreading information.

Case Study Analysis:

Twitter is a social media platform and microblogging website; a single tweet can contain 140 characters. It differs from other social media platforms in that anyone can follow you without your authorisation. Only registered members can tweet, but registration is not required to read messages. The authors of “Multi-level functionality of social media in the aftermath of the Great East Japan Earthquake” (Jung, Moro 2014) discuss five functionalities of Twitter with the help of a conceptual model of multi-level social media. The following figure describes the model’s five primary functions clearly.

Fig No 1 Source: (Jung, Moro 2014)

The five functions were derived from a survey and a review of selected Twitter timelines.

The first function is tweeting between individuals, also known as interpersonal communication. This is the micro level of the conceptual model: people inside and outside the country connected with people in the affected area. Most tweets checked on people’s safety after the disaster, informed loved ones that the sender was in the affected area and needed help, or reassured others that the sender was safe. In the first three days a high percentage of tweets came through this micro-level communication channel.

The second function is providing a communication channel for local organisations, local government and local media. This is the meso level of the conceptual model. In this channel, local governments opened new accounts and reactivated dormant ones to keep their residents informed, and the number of followers of these accounts grew very fast. People understood the importance and benefits of social media after the disaster: even with damaged infrastructure and electricity cuts, they were still able to get information about the disaster and tsunami warnings. Local government and local media used Twitter accounts to issue alerts and news; for example, the tsunami alert was issued on Twitter, and after the tsunami the damage reports were released on Twitter. Local media opened new Twitter channels and kept people informed about the situation. Other organisations, such as the embassies of different countries, used Twitter to keep their nationals informed about the disaster, which was the best channel of communication between embassies and their nationals. Nationals could even let their embassy know that they were stuck in the affected area and needed help, since being away from one’s own country leaves a person in a very vulnerable situation.

The third function is communication by the mass media, known as the macro level. Mass media used social platforms to broadcast their news because the infrastructure was damaged and people in the affected area could not access their broadcasts. Some people who were outside the country could not access local mass media news on television, so they watched the news on video streaming websites; as demand increased, most mass media outlets opened accounts on social media to meet it. They began broadcasting their news on video streaming websites like YouTube and Ustream. Mass media also gave news updates several times a day on Twitter, and many readers retweeted them, so information spread at very high speed.

The fourth function is information sharing and gathering, which operates across levels. Individuals used social media to get information about the earthquake, tsunami and nuclear accident. When someone searched for information, they came across tweets from the micro, meso and macro levels. This level is of great use when looking for help or seeking the opinions of people who have been in a similar situation. Research on the Twitter timelines shows that on the day of the earthquake people were tweeting about the shelters available and about transport (Jung, Moro 2014).

The fifth function is direct channels between individuals and the mass media, government and the public, also considered a cross level. Here individuals could inform the government and mass media about the situation in affected areas, since the disaster left some places that neither the government nor the mass media could reach, so they did not know the situation there. The mayor of Minami-soma, a city 25 miles from Fukushima, used YouTube to tell the government about the radiation threat to his city; the video went viral, and the Japanese government came under international pressure to evacuate the city (Jung, Moro 2014).

Reflection:

There was a gradual change in the use of social media in disasters, from a social networking platform to a communication tool. Its multi-level functionality is one of its most important characteristics, connecting it very well with existing media. This amounts to a complete prescription that can be used during and after any kind of disaster: social media can be combined with other media as an effective communication method to prepare for emergencies in any future disaster situation.

Twitter played a big role in communication during the disaster in Japan. It was used to spread and gather information about the earthquake, tsunami and nuclear reactor accident, to request help, to issue warnings, and to offer condolences. Twitter has many benefits, but it also has drawbacks that must be rectified. The biggest issue with tweets is unreliability: anyone can tweet any information, there are no checks and balances on it, and only the person who tweets is responsible for its authenticity. There is no control over false information, and it spreads so fast that contradictory information can create anxiety. For example, if false information about the range of radiation had been released by an individual and retweeted by others with no knowledge of the effects of radiation and nuclear accidents, it could have caused panic. In a disaster, it is very important that reliable and correct information is released.

Information systems can play a vital role in humanitarian disasters in all aspects. They can be used for better communication and to improve the efficiency and accountability of an organisation. Data becomes widely available within the organisation, enabling the monitoring of finances, and IS helps coordinate operations such as transport, supply chain management, logistics, finance and monitoring.

Social media has played a significant role in communicating, disseminating and storing data related to disasters. There is a need to control the information being spread over social media, since not all of it is authentic or verified.

IS-based tools need to be developed for disaster management in order to get the best results from the wide range of data extracted from social media and to take the necessary action for the wellbeing of people in disaster areas.

The outcome of using a purpose-built IS will be to support decisions on strategy for dealing with the situation. Disaster management teams will be able to analyse the data in order to train for disaster situations.


Renewable energy in the UK

The 2014 IPCC report stated that anthropogenic emissions of greenhouse gases have led to unprecedented levels of carbon dioxide, methane and nitrous oxide in the environment. The report also stated that the effect of greenhouse gases is extremely likely to have caused the global warming we have witnessed since the 20th century.

The 2018 IPCC report set new targets, aiming to limit warming to a maximum of 1.5°C. To reach this, we will need zero CO₂ emissions by the year 2050; the previous IPCC target of 2°C allowed us until roughly 2070 to reach zero emissions. This means government policies will have to be reassessed and current progress reviewed to confirm whether the UK is capable of reaching zero emissions by 2050 on its current plan.

Electricity Generation

Fossil fuels are natural fuels formed from the remains of prehistoric plant and animal life. Fossil fuels (coal, oil and gas) are crucial in any look at climate change as when burned they release both carbon dioxide (a greenhouse gas) and energy. Hence, in order to reach the IPCC targets the UK needs to drastically reduce its usage of fossil fuels, either through improving efficiency or by using other methods of energy generation.

Whilst coal is a cheap energy source used to generate approximately 40% of the world’s electricity, it is arguably the most damaging to the environment, as coal releases more CO₂ into the atmosphere in relation to energy produced than any other fuel source. Coal power stations generate electricity by burning coal in a combustion chamber and using the heat energy to transform water into steam, which turns the propeller-like blades of a turbine. A generator (consisting of tightly-wound metal coils) is mounted at one end of the turbine; when rotated at high velocity through a magnetic field, it generates electricity. The UK has, however, pledged to fully eradicate coal from electricity generation by 2025, and these claims are well substantiated by the UK’s rapid decline in coal use. In 2015 coal accounted for 22% of electricity generated in the UK; this was down to only 2% by the second quarter of 2017, and in April 2018 the UK even managed to go 72 hours without coal power.

Natural gas became a staple of British electrical generation in the 1990s, when the Conservative Party got into power and privatised the electrical supply industry. The “Dash for gas” was triggered by legal changes within the UK and EU allowing for greater freedom to use gas in electricity generation.

Whilst natural gas emits less CO₂ than coal, it is associated with far greater methane emissions. Methane does not remain in the atmosphere as long, but it traps heat to a far greater extent: according to the World Energy Council, methane emissions trap 25 times more heat than CO₂ over a 100-year timeframe.
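That multiplier is what a global warming potential (GWP) expresses. As a simple illustration, using only the factor of 25 quoted above, the CO₂-equivalent of a methane release is:

\[ \text{CO}_2\text{e} = m_{\text{CH}_4} \times \text{GWP}_{100} \quad\Rightarrow\quad 1\ \text{tonne CH}_4 \approx 25\ \text{tonnes CO}_2\text{e} \]

This is why even modest methane leakage along the gas supply chain can offset a significant part of gas's combustion advantage over coal.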

Natural gas produces electrical energy in a gas turbine. The gas is mixed with compressed air and burned in a combustor. The hot gas then pushes the turbine blades and, as in a coal plant, the turbine is attached to a generator, creating electricity. Gas turbines are hugely popular as they are a cheap source of energy generation and can quickly be powered up to respond to surges in electrical demand.

Combined Cycle Gas Turbines (CCGT) are an even better source of electrical generation. Whilst traditional gas turbines are cheap and fast-reacting, they only have an efficiency of approximately 30%. Combined cycle turbines, however, are gas turbines used in combination with steam turbines giving an efficiency of between 50 and 60%. The hot exhaust from the gas turbine is used to create steam which rotates turbine blades and a generator in a steam turbine. This allows for greater thermal efficiency.
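The quoted jump from roughly 30% to 50-60% efficiency follows from stacking the two cycles. As an idealised sketch (assuming the steam cycle can use all of the gas turbine's exhaust heat, which real plants only approximate):

\[ \eta_{cc} = \eta_{GT} + (1 - \eta_{GT})\,\eta_{ST} \]

With, say, η_GT = 0.35 and η_ST = 0.35, this gives 0.35 + 0.65 × 0.35 ≈ 0.58, in line with the 50-60% range given above.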

Nuclear energy is a potential way forward as no CO₂ is emitted by nuclear power plants. Nuclear plants aim to capture the energy released by atoms undergoing nuclear fission. In nuclear fission, a nucleus absorbs a neutron and becomes unstable; the unstable nucleus then splits into fission products of smaller mass and emits two or three high-speed neutrons, which can collide with more nuclei and make them unstable, creating a chain reaction. The heat energy produced by splitting the atom is used to produce steam, which drives a turbine generator to produce electricity.
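The energy density involved helps explain the fuel economics discussed later in this collection. Using standard textbook figures (not taken from this essay): each U-235 fission releases about 200 MeV, or roughly 3.2 × 10⁻¹¹ J, and one kilogram of U-235 contains about 2.6 × 10²⁴ nuclei, so complete fission would release:

\[ E \approx 2.6 \times 10^{24} \times 3.2 \times 10^{-11}\ \text{J} \approx 8 \times 10^{13}\ \text{J} \approx 23\ \text{GWh (thermal)} \]

That is millions of times the roughly 24 MJ released by burning a kilogram of coal.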

Currently, 21% of electricity generated in the UK comes from nuclear energy. In the 1990s, 25% of electricity came from nuclear energy, but old plants have gradually been retired, and by 2025 UK nuclear power output could halve. This is due to a multitude of reasons. Firstly, nuclear fuel is expensive in comparison to gas and coal. Secondly, nuclear waste is extremely radioactive and so must be dealt with properly. Also, in light of tragedies such as Chernobyl and Fukushima, much of the British public has expressed concerns surrounding nuclear energy, with the Scottish government refusing to open more plants.

In order to lower our CO₂ emissions it is crucial we also utilise renewable energy. The UK currently gets very little of its energy from renewable sources but almost all future plans place a huge emphasis on renewables.

The UK has great wind energy potential as the nation is the windiest country in the EU with 40% of the total wind that blows across the EU.

Wind turbines are straightforward machinery: the wind turns the turbine blades around a rotor connected to the main shaft, which spins a generator, creating electricity. In 2017, onshore wind generated enough energy to power 7.25 million homes a year and produced 9% of the UK’s electricity. However, despite the clear benefits of clean, renewable energy, wind energy is not without its problems. Firstly, it is an intermittent supply: the turbine will not generate energy when there is no wind. It has also been opposed by members of the public for affecting the look of the countryside and for bird fatalities. These problems are magnified by the current Conservative government’s stance on wind energy, which seeks to limit onshore wind farm development despite public opposition to this “ban”.

Heating and Transport

Currently it is estimated that a third of carbon dioxide (CO2) emissions in the UK are accounted for by the heating sector. 50% of all heat emissions in the UK arise from domestic use, making it the main source of CO2 emissions in the heating sector; around 98% of domestic heating is used for space and water heating. The government has sought to reduce the emissions from domestic heating by issuing a series of regulations on new boilers. Regulations state that as of 1 April 2005 all new installations and replacements of boilers must be condensing boilers. As well as producing much lower CO2 emissions, condensing boilers are around 15-30% more efficient than older gas boilers. Reducing heat demand has been another approach to cutting emissions: for instance, building standards in the UK have set higher levels of required thermal insulation for both domestic and non-domestic buildings in refurbishments and new projects. These policies are key to ensuring that both homes and industrial buildings are as efficient as possible at conserving heat.

Although progress is being made in improving current CO2-reducing systems, the potential for significant CO2 reductions relies upon low carbon technologies. Highly efficient technologies such as residential heat pumps and biomass boilers have the potential to be carbon-neutral sources of heat and in doing so could massively reduce CO2 emissions for domestic use. However, finding the best route to a decarbonised future in the heating industry relies upon more than just which technology has the lowest carbon footprint. For instance, intermittent technologies such as solar thermal collectors cannot provide a sufficient level of heat in the winter and require a back-up source, making them a less desirable option. Cost is also a major factor in consumer preference: for most consumers, a boiler is the cheapest option for heating. This poses a problem for low carbon technologies, which tend to have significantly higher upfront costs. In response, the government has introduced policies such as the ‘Renewable Heat Incentive’, which aims to alleviate the expense by paying consumers for each unit of heat produced by low carbon technologies.

Around 30% of the heating sector is allocated to industrial use, making it the second largest cause of CO2 in this sector. Currently, combined heat and power (CHP) is the main process used to make industrial heat use more efficient, and it has shown CO2 reductions of up to 30%. Although this is a substantial reduction, alternative technology has the potential to deliver even more: for example, carbon capture and storage (CCS) could reduce CO2 emissions by up to 90%. However, CCS is a complex procedure that would require substantial funding, and as a result it is not currently implemented for industrial use in the UK.

Although heating is a significant contributor to CO2 emissions in the UK, much progress is also needed elsewhere. In 2017 it was estimated that 34% of all carbon dioxide (CO2) emissions in the UK were caused by transport, widely thought to be the sector in which the least progress is being made, with only a 2% reduction in CO2 emissions since 1990. Road transport contributes the highest proportion of emissions, more specifically petrol and diesel cars. Despite the average CO2 emissions of new vehicles declining, the carbon footprint of the transport industry continues to increase due to the growing number of vehicles in the UK.

In terms of progress, the CO2 emissions of new cars in 2017 were estimated to be 33.1% lower than in the early 2000s. Although efficiencies are improving, more must be done if we are to conform to the targets set by the Climate Change Act 2008. A combination of decarbonising transport and implementing government legislation is vital to meeting these demands. New technology such as battery electric vehicles (BEVs) has the potential to create significant reductions in transport emissions. As a result, a report from the Committee on Climate Change suggests that 60% of all sales of new cars and vans should be ultra-low emission by 2030. However, the likelihood of achieving this is hindered by the constraints of new technologies: low emission vehicles tend to have significantly higher costs and suffer from a lack of consumer awareness. This reinforces the need for government support in promoting new technologies and cleaner fuels. To support the development and uptake of low carbon vehicles the government has committed £32 million to the funding of BEV charging infrastructure from 2015-2020, and a further £140 million has been allocated to the ‘low carbon vehicle innovation platform’, which strives to advance the development and research of low emission vehicles. Progress has also been made in making these vehicles more cost-competitive: they are exempt from taxes such as Vehicle Excise Duty, and incentives such as plug-in grants of up to £3,500 are available. Aside from passenger cars, improvements are also being made to the emissions of public transport: the average low emission bus in London could reduce its CO2 emissions by up to 26 tonnes per year, which has earned such buses government support in England through the ‘Green Bus Fund’.

Conclusion

In 2017, renewables accounted for a record 29.3% of the UK’s electricity generation. This is a vast improvement on previous years and suggests the UK is on track to meet the new IPCC targets, although a lot of work still needs to be done, and government policies need to be reassessed in light of the new targets. Scotland should reassess its nuclear policy, as nuclear power may be a necessary stepping stone towards reduced emissions until renewables can fully power the nation, and the UK government needs to reassess its allocation of funding, as investment in clean energy is currently on a downward trajectory.

Although progress has been made in reducing CO2 emissions in the heat and transport sectors, emissions throughout the UK remain much higher than desired. The Committee on Climate Change report to Parliament (2015) calls for the widespread electrification of heating and transport by 2030 to help prevent a 1.5 degree rise in global temperature. This is likely to pose a major challenge and will require a significant increase in electricity generation capacity in conjunction with greater policy intervention to encourage the uptake of low carbon technologies. Although it is unlikely that all consumers will switch to alternative technologies, if the government continues to tighten regulations on fossil-fuelled technologies while the heat and transport industries continue to make old and new systems more efficient, significant CO2 reductions should follow.


Is Nuclear Power a viable source of energy?

6th Form Economics project:

Nuclear power, the energy of the future of the 1950s, is now starting to feel like the past. Around 450 nuclear reactors worldwide currently generate 11% of the world’s electricity, or approximately 2,500 TWh a year, just under the total nuclear power generated globally in 2001 and only 500 TWh more than in 1991. The number of operating reactors worldwide has seen the same stagnation, with an increase of only 31 since 1989, an annual growth rate of only 0.23%, compared with 12.9% from 1959 to 1989. Most reactors, especially in Europe and North America, were built before the 90s, and the average age of reactors worldwide is just over 28 years. Large-scale nuclear accidents such as Chernobyl in 1986 or, much more recently, Fukushima in 2011 have damaged public support for nuclear power and helped cause this decline, but the weight of evidence increasingly suggests that nuclear is safer than most other energy sources and has an incredibly low carbon footprint. The argument against nuclear has therefore shifted from concerns about safety and the environment to questions about its economic viability. The crucial question that remains is how well nuclear power can compete against renewables to produce the low carbon energy required to tackle global warming.
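The quoted growth rates are easy to sanity-check (a quick calculation assuming the 450 reactors and the increase of 31 since 1989 given above, i.e. roughly 419 reactors in 1989 and a 29-year window):

\[ \left(\frac{450}{419}\right)^{1/29} - 1 \approx 0.25\%\ \text{per year} \]

This is consistent with the ~0.23% annual growth figure quoted for the post-1989 period.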

The costs of most renewable energy sources have been falling rapidly, making them increasingly able to outcompete nuclear power as a low carbon option, and even fossil fuels in some places; photovoltaic panels, for example, halved in price from 2008 to 2014. Worse still for nuclear power, while the costs of renewable energy have been falling, plans for new nuclear plants have been plagued by delays and cost overruns. In the UK, Hinkley Point C power station is set to cost £20.3bn, making it the world’s most expensive power station, and significant issues in the design have raised questions as to whether the plant will be completed by 2025, its current goal. In France, the Flamanville 3 reactor is now predicted to cost three times its original budget, and several delays have pushed the start-up date, originally set for 2012, to 2020. The story is the same in the US, where delays and extra costs have plagued the construction of the Vogtle 3 and 4 reactors, now due for completion in 2020-21, four years past their original target. Nuclear power seemingly cannot deliver the cheap, carbon free energy it promised and is being outperformed by renewable energy sources such as solar and wind.

The crucial and recurring issue with nuclear power is that it requires huge upfront costs, especially when plants are built individually, and can only provide revenue years after the start of construction. This makes investment in nuclear risky and long term, and it cannot be done well on a small scale, though new technologies such as SMRs (Small Modular Reactors) may change this in the coming decades. Because other technologies improve over the period in which a nuclear plant is built, it is often better for private firms, who are less likely to be able to afford large scale programs that enable significant cost reductions or a lower debt to equity ratio in their capital structure, to invest in more easily scalable and shorter term energy sources, especially with subsidies favouring renewables in many developed countries. All of this points to the fundamental flaw of nuclear: that it requires going all the way. Small scale nuclear programs that are funded mostly with debt, that have high discount rates, and that have low capacity factors because the plants are switched off frequently will invariably have a very high Levelised Cost of Energy (LCOE), as nuclear is so capital intensive.

That said, the reverse is true as well. Nuclear plants have very low operating costs and almost no external costs, and the cost of decommissioning a plant is only a small portion of the initial capital cost, even with a low discount rate such as 3%, due to the long lifespan of a nuclear plant and the fact that many can be extended. Operating costs include fuel costs, which are extremely low for nuclear at only 0.0049 USD per kWh, and non-fuel operation and maintenance costs, which are barely higher at 0.0137 USD per kWh. This includes waste disposal, a frequently cited political issue that has not been a serious technical problem for decades, as waste can be partially reused and stored safely on site at very low cost, simply because the quantity of fuel used, and therefore of waste produced, is so small. The fuel, uranium, is abundant, and technology enabling uranium to be extracted from sea water would give access to a 60,000 year supply at present rates of consumption, so costs from ‘resource depletion’ are also small. Finally, external costs represent a very small proportion of running costs: the highest estimates for health costs and potential accidents are 5€/MWh and 4€/MWh respectively, though some estimates fall to only 0.3€/MWh for potential accidents when past records are adjusted to factor in improvements in safety standards; such estimates vary significantly because the total number of reactors is very small.
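A minimal worked example makes the discounting point concrete (the figures here are illustrative assumptions, not taken from the essay’s sources): a decommissioning bill C falling due at the end of a 60-year plant life, discounted at 3%, has a present value of C / (1.03)^60 ≈ 0.17 C. Even a large end-of-life cost therefore shrinks to roughly a sixth of its nominal value in today’s terms, and it all but vanishes at higher discount rates.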

Nuclear power therefore remains, in the right circumstances, one of the cheapest ways to produce electricity. Many LCOE (Levelised Cost of Energy) estimates, which are designed to factor in all costs over the lifetime of a unit to give a more accurate representation of the costs of different types of energy (though they usually omit system costs), point to nuclear as a cheaper energy source than almost all renewables and most fossil fuels at low discount rates.
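To make the calculation concrete, the sketch below shows how an LCOE figure of this kind is computed: all lifetime costs and all lifetime output are discounted back to present value, and their ratio gives a cost per kWh. The fuel and O&M rates are the ones quoted above; every other figure (capital cost, plant size, lifetime, capacity factor, decommissioning bill) is an illustrative assumption rather than a number taken from the studies cited here.

```python
# Minimal LCOE sketch: discounted lifetime costs / discounted lifetime output.
# Fuel and O&M rates are the figures quoted in the text; capital cost, plant
# size, lifetime and capacity factor are illustrative assumptions only.

def lcoe(capex, om_per_kwh, fuel_per_kwh, annual_kwh, lifetime, rate, decom=0.0):
    """Return the levelised cost of energy in $/kWh."""
    costs = capex                        # overnight capital cost, paid at t = 0
    energy = 0.0
    for t in range(1, lifetime + 1):
        df = (1 + rate) ** t             # discount factor for year t
        costs += (om_per_kwh + fuel_per_kwh) * annual_kwh / df
        energy += annual_kwh / df
    costs += decom / (1 + rate) ** lifetime   # decommissioning at end of life
    return costs / energy

# A 1 GW plant at a 90% capacity factor produces ~7.9 billion kWh per year.
annual_kwh = 1_000_000 * 0.90 * 8760     # kW * capacity factor * hours/year

for rate in (0.03, 0.10):
    cost = lcoe(capex=8e9, om_per_kwh=0.0137, fuel_per_kwh=0.0049,
                annual_kwh=annual_kwh, lifetime=60, rate=rate, decom=1e9)
    print(f"discount rate {rate:.0%}: LCOE = ${cost:.3f}/kWh")
```

Running the sketch illustrates the argument above numerically: moving the discount rate from 3% to 10% roughly doubles the resulting cost per kWh, because the capital outlay dominates and is paid up front, while the decommissioning term barely registers.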

LCOE costs taken from ‘Projected Costs of Generating Electricity 2015 Edition’ and system costs taken from ‘Nuclear Energy and Renewables (NEA, 2012)’ have been combined by the World Nuclear Association to give LCOE figures for four countries, allowing the costs of nuclear to be compared with other energy sources. A discount rate of 7% is used; the study applies a $30/t CO2 price to fossil fuel use and uses 2013 US$ values and exchange rates. It is important to bear in mind that LCOE estimates vary widely, as many assume different circumstances and they are very difficult to calculate, but it is clear from the graph that nuclear power is still more than viable: it is the cheapest source in three of the four countries and the third cheapest in the fourth, behind onshore wind and gas.


Decision making during the Fukushima disaster

Introduction

On March 11, 2011 a tsunami struck the east coast of Japan, resulting in a disaster at the Fukushima Daiichi nuclear power plant. In the hours and days following the natural disaster, many decisions were made with regard to managing the crisis. This paper will examine those decisions. The Governmental Politics Model, designed by Allison and Zelikow (1999), will be adopted to analyse the events. The research question of this paper is therefore: to what extent does the Governmental Politics Model explain the decisions made during the Fukushima disaster?

First, this paper will lay the theoretical basis for the analysis. The Governmental Politics Model and all crucial concepts within it are discussed. A description of the Fukushima case will follow; since the reader is expected to have general knowledge of the Fukushima nuclear disaster, the case description will be very brief. Together, the theoretical framework and case study lay the basis for the analysis, which will look into the decisions government and Tokyo Electric Power Company (TEPCO) officials made during the crisis.

Theory

Allison and Zelikow designed three models for understanding the outcomes of bureaucracies and decision making, developed through their analysis of the Cuban Missile Crisis of 1962. The first to be designed was the Rational Actor Model, which focuses on the ‘logic of consequences’ and rests on a basic assumption of rational action by a unitary actor. The second is the Organizational Behaviour Model, which focuses on the ‘logic of appropriateness’ and assumes loosely connected allied organizations (Broekema, 2019).

The third model conceived by Allison and Zelikow is the Governmental Politics Model (GPM). This model stresses the importance of power in decision making. According to the GPM, decision making is not a matter of rational unitary actors or organizational output but of a bargaining game. Governments, in other words, make decisions in other ways, and the GPM identifies four aspects of this: the choices of individual players, the results of minor games, the results of central games, and foul-ups (Allison & Zelikow, 1999).

The following concepts are essential to the GPM. First, it is important to note that power in government is shared; different institutions have independent power bases. Second, persuasion is an important factor in the GPM: the power to persuade is what differentiates power from authority. Third, bargaining according to the process is identified, meaning there is a structure to the bargaining process. Fourth, ‘power equals impact on outcome’ is noted in Essence of Decision: there is a difference between what can be done and what is actually done, and what is actually done depends on the power involved in the process. Lastly, intranational and international relations are of great importance to the GPM. These relations are intertwined and involve a vast set of international and domestic actors (Allison & Zelikow, 1999).

The five concepts above are not the only ones relevant to the GPM. The GPM is inherently based on group decisions, and in this type of decision making Allison and Zelikow identify seven factors. The first is a positive one: group decisions, when certain requirements are met, produce better decisions. Second is the agency problem, which includes information asymmetry and the fact that actors compete over different goals. Third, it is important to identify the actors in the ‘game’, that is, to find out who participates in the bargaining process. Fourth, problems with different types of decisions are outlined. Fifth, framing issues and agenda setting are important factors in the GPM. Sixth, group decisions are not necessarily positive, as they can easily lead to groupthink, a negative consequence in which no other opinions are considered. Lastly, the difficulty of collective action is outlined by Allison and Zelikow, which stems from the fact that the GPM does not consider unitary actors but different organizations (Allison & Zelikow, 1999).

Besides the concepts mentioned above, the GPM also comprises a concise paradigm, which is essential for the analysis of the Fukushima case. The paradigm consists of six main points. The first is the fact that decisions are the result of politics; this is the core of the GPM and once again stresses that decisions are the result of bargaining. Second, as said before, it is important to identify the players of the political ‘game’, their preferences and goals, and what kind of impact they can have on the final decision. Once this is analysed, one has to look at the actual game that is played: the action channels and rules of the game can be determined. Third, the ‘dominant inference pattern’ once again goes back to decisions being the result of bargaining, but makes clear that differences and misunderstandings have to be taken into account. Fourth, Allison and Zelikow identify ‘general propositions’, a term that includes all the concepts examined in the second paragraph of the theory section of this paper. Fifth, specific propositions are considered; these relate to decisions on the use of force and military action. Last is the importance of evidence: when examining crisis decision making, documented timelines and, for example, minutes or other accounts are of great importance (Allison & Zelikow, 1999).

Case

In the definition of Prins and Van den Berg (2018), the Fukushima Daiichi disaster can be regarded as a safety case, because it was an unintentional event that caused harm to humans.

The crisis was initiated by an earthquake of magnitude 9.0 on the Richter scale, followed by a tsunami whose waves reached a height of 10 meters. Due to the earthquake, all external power lines, which are needed for cooling the fuel rods, were disconnected. Countermeasures for this eventuality were in place; however, the sea walls were unable to protect the nuclear plant from flooding, which rendered the backup diesel generators inadequate (Kushida, 2016).

Due to the lack of electricity, the nuclear fuel rods were not being cooled, and a ‘race for electricity’ started. Moreover, the situation inside the reactors was unknown; meltdowns had already occurred in reactors 1 and 2. Because of the risk of explosion, the decision was made to vent the reactors. Nevertheless, hydrogen explosions materialized in reactors 1, 2 and 4, which in turn exposed the environment to radiation. To cool the reactors and counter the dispersal of radiation, the essential decision was made to inject sea water into them (Kushida, 2016).

Analysis

This analysis will look into the decision, or decisions, to inject seawater into the damaged reactors. First, a timeline of the decisions will be outlined to build further on the case study above. Then the events and decisions will be set against the GPM paradigm and the six main points described in the theory section.

The need to inject sea water arose after the first stages described in the case study had passed. According to Kushida, government officials and political leaders began voicing the necessity of injecting the water at 6:00 p.m. on March 12, the day after the earthquake. According to these officials it would have one very positive outcome, namely the cooling of the reactors and the fuel pool. However, the use of sea water might have negative consequences too: it would ruin the reactors because of the salt in the sea water, and it would produce vast amounts of contaminated water that would be hard to contain (Kushida, 2016). TEPCO experienced many difficulties with cooling the reactors, as described in the case study, because of the lack of electricity. Nevertheless, TEPCO was averse to injecting sea water into the reactors, since this would ruin them. Still, after the first hydrogen explosion occurred in reactor 1, TEPCO plant workers started the injection of sea water into this specific reactor (Holt et al., 2012). A day later, on March 13, sea water injection started in reactor 3, and on the 14th of March in reactor 2 (Holt et al., 2012).

When looking at the decisions made by the government or TEPCO plant workers, it is crucial to consider the chain of decision making by TEPCO leadership too. TEPCO leadership was initially reluctant to inject seawater because of the disadvantages mentioned earlier: the plant would become unusable in the future and vast amounts of contaminated water would be created. Therefore, the government had to issue an order to TEPCO to start injecting seawater, which it did at 8:00 p.m. on 12 March. However, Yoshida, the Fukushima Daiichi plant manager, had already started injecting seawater at 7:00 p.m. (Kushida, 2016).

As one can see, different interests were at play, and the outcome of the eventual decision can well be regarded as a political resultant. It is therefore crucial to examine the chain of decisions through the GPM paradigm. The first factor of this paradigm concerns decisions as the result of bargaining, which can clearly be seen in the decision to inject seawater: TEPCO leadership was initially not a proponent of this method, but after government officials ordered them to execute the injection they had no choice. Second, according to the theory, it is important to identify the players of the ‘game’ and their goals. In this instance the divisions are easily identifiable, and three players can be pointed out: the government, TEPCO leadership, and Yoshida, the plant manager. The government’s goal was to keep its citizens safe during the crisis, TEPCO wanted to preserve the reactors for as long as possible, and Yoshida wanted to contain the crisis. In that sense, their goals conflicted.

To apply the GPM further to the decision to inject seawater, one can review the comprehensive ‘general propositions’. Here miscommunication is a very relevant factor, and it was certainly a big issue in the decision to inject seawater. As noted above, Yoshida started injecting seawater before he received approval from his superiors. One might even wonder whether TEPCO leadership misunderstood the crisis, given its hesitation to inject the seawater necessary to cool the reactors. It can be argued that this hesitation constitutes a great deal of misunderstanding, since there was no plant left to save by the time the decision was made.

The fifth and sixth aspects of the GPM paradigm are less relevant to the decisions made. The ‘specific propositions’ refer to the use of force, which was not an option in dealing with the Fukushima crisis; the Japanese Self-Defence Forces were dispatched to the plant, but only to provide electricity (Kushida, 2016). Furthermore, the sixth aspect, evidence, is less important in this case: many scholars, researchers and investigators have written at great length about what happened during the Fukushima crisis, so more than sufficient information is available.

The political and bargaining game in the decision to inject seawater into the reactors is clearly visible. The different actors in the game had different goals; eventually the government won the game and the decision to inject seawater was made. Even before that, the plant manager had already begun to inject seawater because the situation was too dire.

Conclusion

This essay reviewed decision making during the Fukushima Daiichi nuclear power plant disaster of 11 March 2011. More specifically, the decision to inject seawater into the reactors to cool them was scrutinized using the Governmental Politics Model. The decision to inject seawater into the reactors was the result of a bargaining game in which different actors with different objectives played the decision-making ‘game’.


Tackling misinformation on social media

As the world of social media expands, the volume of miscommunication rises as more organisations hop on the bandwagon of using the digital realm to their advantage. Twitter, Facebook, Instagram, online forums and other websites have become the principal sources of news for many individuals. Information is easily accessible to people from all walks of life, meaning that people are becoming more engaged with real-life issues. Consumers absorb and take in information more easily than ever before, which proves to be equally advantageous and disadvantageous. But there is an evident boundary between misleading and truthful information that is hard to discern without research on the topic. The accuracy of public information is highly questionable, which can easily lead to problems. Despite the ongoing debate about source credibility on any platform, there are ways to tackle the issue through “expertise/competence (i. e., the degree to which a perceiver believes a sender to know the truth), trustworthiness (i. e., the degree to which a perceiver believes a sender will tell the truth as he or she knows it), and goodwill” (Cronkhite & Liska, 1976). This is why it has become critical for information to be accurate, ethical and reliable for consumers. Verifying information is important regardless of the type of social media outlet. This essay will highlight why information needs to meet these criteria.

Putting out credible information prevents and reduces misconceptions, convoluted meanings and inconsistent facts, which lowers the likelihood of issues surfacing. This in turn saves time for the consumer and the producer. The presence of risk raises the issue of how much of this information should be consumed by the public. The perception of source credibility becomes an important concept to analyse within social media, especially in times of crisis, when rationality diminishes and people often accept the first thing they see. With the increasing amount of information available through newer channels, the responsibility for assessing information devolves away from producers and topic professionals and onto consumers (Haas & Wearden, 2003). Much of the public is unaware that this information is prone to bias and selective sharing, which can present the actual facts very differently. One example is the incident at Tokyo Electric Power Co.’s Fukushima No.1 nuclear power plant in 2011, where the plant experienced triple meltdowns. A misconception still circulates that food exported from Fukushima is so contaminated with radioactive substances as to be unhealthy and unfit to eat. In truth, strict screening reveals that contamination is below the government threshold for posing a risk. (arkansa.gov.au) Nonetheless, products shipped from Fukushima have dropped considerably in price and have not recovered since 2011, forcing retailers into bankruptcy. (japantimes.co.jp) But thanks to social media and organisations releasing information to the public, Fukushima was able to raise funds and receive help from other countries; for example, the U.S. sent $100,000 and China sent emergency supplies as assistance. (theguardian.com) This would have been impossible without the sharing of credible, reliable and ethical information about the country, and without the social media support spotlighting the incident.

Accurate, ethical and reliable information opens a pathway for producers to secure a relationship with consumers, which can be used to strengthen their businesses and expand their industries while gaining public support. The idea is to have a healthy relationship, without an air of uneasiness, in which monetary gains and social earnings increase; social media plays a pivotal role in deciding which route the relationship takes. But when this is done incorrectly, organisations can fail because they know little about the changed dynamics of consumer behaviour in the digital landscape. Consumer informedness means that consumers are well informed about available products or services, with precision influencing their willingness to decide; this increase in consumer informedness can instigate changes in consumer behaviour. (uni-osnabrueck.de) In the absence of accurate, ethical and reliable information, people and organisations will make terrible decisions without hesitation, leading to losses and steps backwards. As Saul Eslake (Saul-Eslake.com) says, “they will be unable to help or persuade others to make better decisions; and no-one will be able to ascertain whether the decisions made by particular individuals or organisations were the best ones that could have been made at the time”. Recently, a YouTuber named Shane Dawson made a video that sparked controversy for the company Chuck E. Cheese over pizza slices that did not appear to belong to the same pizza. He advanced a theory that parts of the pizzas may have been reheated or recycled from other tables. Chuck E. Cheese responded in multiple media outlets to debunk the theory: “These claims are unequivocally false. We prep the dough daily for our made to order pizzas, which means they’re not always perfectly round, but they are still great tasting.” (https://twitter.com/chuckecheeses) It is worth noting that nothing other than pictures backs up the claim that the pizza was reused. The food company also went as far as creating a video showing its pizza preparation, and to add support, ex-employees spoke up and shared their own side of the story to debunk the theory further. These quick responses prevented what could have become a small downturn in sales for the Chuck E. Cheese company. (washintonpost.com) This event highlights how the release of information can work in favour of whoever utilises it correctly, and how effective credible information can be, especially when it has the support of others, whether online or in real life. The assumption or guess made when there is no information available to draw on is called a ‘heuristic value’, which is associated with information that has no credibility.

Mass media have been a dominant source of information (Murch, 1971). They are generally assumed to provide credible, valuable, and ethical information open to the public (Heath, Liao, & Douglas, 1995). However, along with traditional forms of media, newer media are increasingly available for information seeking and reporting. According to PNAS (www.pnas.org), “The emergence of social media as a key source of news content has created a new ecosystem for the spreading of misinformation. This is illustrated by the recent rise of an old form of misinformation: blatantly false news stories that are presented as if they are legitimate. So-called “fake news” rose to prominence as a major issue during the 2016 US presidential election and continues to draw significant attention.” This affects how we as social beings perceive and analyse information we see online compared to real life. Failing to distinguish real stories from false ones increases belief in false content, leading to biased and misleading material that fools the audience. One such incident is Michael Jackson’s death in June 2009, when he died from acute propofol and benzodiazepine intoxication administered by his doctor, Dr. Murray. (nytimes.com) Much of the public concluded that Michael Jackson had been murdered deliberately, but the court convicted Dr. Murray of involuntary manslaughter, as the doctor testified that Jackson had begged him to give more; a fact that was overlooked by the general public due to bias. This underlines how information is selectively picked and not all of it is revealed, in order to sway the audience. A study conducted online by Jason and his team (JCMC [CQU]) revealed that Facebook users tended to believe their friends almost instantly, even without a link or proper citation to a website to back up their claim: “Using a person who has frequent social media interactions with the participant was intended to increase the external validity of the manipulation.” Whether information online is taken as truth is thus left to the perception of the viewer, which supports the idea that online information is not fully credible unless it comes straight from the source, and underlines the importance of releasing credible information.

Information has the power to inform, explain and expand on topics and concepts. But it also has the power to create inaccuracies and confusion, which hurt the public and damage the reputations of companies. The goal is to move forward, not backwards. Many companies have found themselves in disputes over incorrect information that could easily have been avoided by releasing accurate, ethical and reliable information from the beginning. False information can start disputes, and true information can provide resolution. The public has become less attentive to mainstream news altogether, which raises the problem of what can be trusted. Companies and organisations need their information to be as accurate and reliable as possible to counter and reduce this issue. Increased negativity and incivility exacerbate the media’s credibility problem: “People of all political persuasions are growing more dissatisfied with the news, as levels of media trust decline.” (JCMC [CQU]) In 2010, Dannon’s ‘Activia Yogurt’ released an online statement and false advertisement claiming that its yogurt had “special bacterial ingredients.” A consumer named Trish Wiener lodged a complaint against Dannon. The yogurts were being marketed as “clinically” and “scientifically” proven to boost the immune system while helping to regulate digestion. However, the judge saw these claims as unproven, along with those on many other products in the line that carried the same statement. “This landed the company a $45 million class action settlement.” (businessinsider.com) It did not help that Dannon’s prices were inflated compared to other yogurts on the market: “The lawsuit claims Dannon has spent ‘far more than $100 million’ to convey deceptive messages to U.S. consumers while charging 30 percent more than other yogurt products.” (reuters.com) This highlights how inaccurate information can cost millions of dollars to settle and resolve. It also shows how the public can hold irresponsible producers to account for their actions and open the way to justice.


Socio-political significance of Turkey’s emergent neo-Ottoman cultural phenomenon

Over the last decade, Turkey’s cultural sphere has witnessed a wave of Ottomania—a term describing the recent cultural fervor for everything Ottoman. Although this neo-Ottoman cultural phenomenon is not entirely new, having had a previous cycle back in the 1980s and 1990s during the heyday of Turkey’s political Islam, it now has a rather novel characteristic and distinct pattern of operation. This revived Ottoman craze is discernable in what I call the neo-Ottoman cultural ensemble—referring to a growing array of Ottoman-themed cultural productions and sites that evoke Turkey’s Ottoman-Islamic cultural heritage. For example, the celebration of the 1453 Istanbul conquest no longer merely takes place as an annual public commemoration by the Islamists,[1] but has been widely promulgated, reproduced, and consumed in various forms of popular culture, such as the Panorama 1453 History Museum; a fun ride called the Conqueror’s Dream (Fatih’in Rüyası) at the Vialand theme park; the highly publicized, top-grossing blockbuster The Conquest 1453 (Fetih 1453); and the primetime television costume drama The Conqueror (Fatih). It is the “banal,” or “mundane,” ways of everyday practice of society itself, rather than the government or state institutions, that distinguish this emergent form of neo-Ottomanism from its earlier phases.[2]

This is the context in which the concept of neo-Ottomanism has acquired its cultural dimension and analytical currency for comprehending the proliferating neo-Ottoman cultural phenomenon. However, when the concept is employed in contemporary cultural debates, it generally follows two trajectories that are common in the literature of Turkish domestic and foreign politics. These trajectories conceptualize neo-Ottomanism as an Islamist political ideology and/or a doctrine of Turkey’s foreign policy in the post-Cold War era. This essay argues that these two conventional conceptions tend to overlook the complexity and hybridity of Turkey’s latest phase of neo-Ottomanism. As a result, they tend to understand the emergent neo-Ottoman cultural ensemble as merely a representational apparatus of the neoconservative Justice and Development Party’s (AKP; Adalet ve Kalkınma Partisi) ideology and diplomatic strategy.

This essay hence aims to reassess the analytical concept of neo-Ottomanism and the emergent neo-Ottoman cultural ensemble by undertaking three tasks. First, through a brief critique of the concept of neo-Ottomanism, I will discuss its common trajectories and limitations for comprehending the latest phase of the neo-Ottoman cultural phenomenon. My second task is to propose a conceptual move from neo-Ottomanism to Ottomentality by incorporating the Foucauldian perspective of governmentality. Ottomentality is an alternative concept that I deploy here to underscore the overlapping relationship between neoliberal and neo-Ottoman rationalities in the AKP’s government of culture and diversity. I contend that neoliberalism and neo-Ottomanism are inseparable governing rationalities of the AKP and that their convergence has engendered new modes of governing the cultural field as well as regulating inter-ethnic and inter-religious relations in Turkey. And finally, I will reassess the neo-Ottoman cultural ensemble through the analytical lens of Ottomentality. I contend that the convergence of neoliberal and neo-Ottoman rationalities has significantly transformed the relationships of state, culture, and the social. As the cases of the television historical drama Magnificent Century (Muhteşem Yüzyıl) and the film The Conquest 1453 (Fetih 1453) shall illustrate, the neo-Ottoman cultural ensemble plays a significant role as a governing technique that constitutes a new regime of truth based on market mentality and religious truth. It also produces a new subject of citizenry, who is responsible for enacting its right to freedom through participation in the culture market, complying with religious norms and traditional values, and maintaining a difference-blind and discriminatory model of multiculturalism.

A critique of neo-Ottomanism as an analytical concept

Although the concept of neo-Ottomanism has been commonly used in Turkish Studies, it has become a loose term referring to anything associated with Islamist political ideology, nostalgia for the Ottoman past, and the imperialist ambition of reasserting Turkey’s economic and political influence within the region and beyond. Some scholars have recently indicated that the concept of neo-Ottomanism is running out of steam as it lacks meaningful definition and explanatory power in studies of Turkish politics and foreign policy.[3] The concept’s ambiguity and weak analytical and explanatory value are mainly due to divergent, competing interpretations and a lack of critical evaluation within the literature.[4] Nonetheless, despite the concept being equivocally defined, it is most commonly understood along two identifiable trajectories. First, it is conceptualized as an Islamist ideology, responding to the secularist notions of modernity and nationhood and aiming to reconstruct Turkish identity by evoking Ottoman-Islamic heritage as an essential component of Turkish culture. Although neo-Ottomanism was initially formulated by a collaborating group of secular, liberal, and conservative intellectuals and political actors in the 1980s, it is closely linked to the consolidated socio-economic and political power of the conservative middle class. This trajectory considers neo-Ottomanism primarily a form of identity politics and a result of political struggle in opposition to the republic’s founding ideology of Kemalism. Second, it is understood as an established foreign policy framework reflecting the AKP government’s renewed diplomatic strategy in the Balkans, Central Asia, and the Middle East, wherein Turkey plays an active role. This trajectory regards neo-Ottomanism as a political doctrine (often referring to Ahmet Davutoglu’s Strategic Depth, which serves as the guidebook for Turkey’s diplomatic strategy in the 21st century), which sees Turkey as a “legitimate heir of the Ottoman Empire”[5] and seeks to reaffirm Turkey’s position in the changing world order of the post-Cold War era.[6]

As a result of a lack of critical evaluation of the conventional conceptions of neo-Ottomanism, contemporary cultural analyses have largely followed the “ideology” and “foreign policy” trajectories as explanatory guidance when assessing the emergent neo-Ottoman cultural phenomenon. I contend that the neo-Ottoman cultural phenomenon is more complex than what these two trajectories offer to explain. Analyses that adopt these two approaches tend to run a few risks. First, they tend to perceive neo-Ottomanism as a monolithic imposition upon society. They presume that this ideology, when inscribed onto domestic and foreign policies, somehow has a direct impact on how society renews its national interest and identity.[7] And they tend to understand the neo-Ottoman cultural ensemble as merely a representational device of the neo-Ottomanist ideology. For instance, Şeyda Barlas Bozkuş, in her analyses of the Miniatürk theme park and the 1453 Panorama History Museum, argues that these two sites represent the AKP’s “ideological emphasis on neo-Ottomanism” and “[create] a new class of citizens with a new relationship to Turkish-Ottoman national identity.”[8] Second, contemporary cultural debates tend to overlook the complex and hybrid nature of the latest phase of neo-Ottomanism, which rarely operates on its own, but more often relies on and converges with other political rationalities, projects, and programs. As this essay shall illustrate, when closely examined, the current configuration of neo-Ottomanism is more likely to reveal internal inconsistencies as well as a combination of multiple and intersecting political forces.

Moreover, as a consequence of the two risks mentioned above, contemporary cultural debates may have overlooked some symptomatic clues and, hence, underestimated the socio-political significance of the latest phase of neo-Ottomanism. A major symptomatic clue that is often missed in cultural debates on the subject is culture itself. Insufficient attention has been paid to the AKP’s rationale of reconceptualizing culture as an administrative matter—a matter that concerns how culture is to be perceived and managed, by what culture the social should be governed, and how individuals might govern themselves with culture. At the core of the AKP government’s politics of culture and neoliberal reform of the cultural field is the question of the social.[9] Its reform policies, projects, and programs are a means of constituting a social reality and directing social actions. When culture is aligned with neoliberal governing rationality, it redefines a new administrative culture and new rules and responsibilities for citizens in cultural practices. Culture has become not only a means of advancing Turkey in global competition,[10] but also a technology for managing the diversifying culture that has resulted from the process of globalization. As Brian Silverstein notes, “[culture] is among other things and increasingly to be seen as a major target of administration and government in a liberalizing polity, and less a phenomenon in its own right.”[11] While many studies acknowledge the AKP government’s neoliberal reform of the cultural field, they tend to regard neo-Ottomanism as primarily an Islamist political agenda operating outside of the neoliberal reform. It is my conviction that neoliberalism and neo-Ottomanism are inseparable political processes and rationalities, which have merged and engendered new modalities of governing every aspect of cultural life in society, including minority cultural rights, freedom of expression, individuals’ lifestyles, and so on. Hence, by overlooking the “centrality of culture”[12] in relation to the question of the social, contemporary cultural debates tend to oversimplify the emergent neo-Ottoman cultural ensemble as nothing more than an ideological machinery of the neoconservative elite.

From neo-Ottomanism to Ottomentality

In order to more adequately assess the socio-political significance of Turkey’s emergent neo-Ottoman cultural phenomenon, I propose a conceptual shift from neo-Ottomanism to Ottomentality. This shift involves not only rethinking neo-Ottomanism as a form of governmentality, but also thinking of neoliberal and neo-Ottoman rationalities in collaborative terms. Neo-Ottomanism is understood here as Turkey’s current form of neoconservatism: a prevalent political rationality whose governmental practices are not solely based on Islamic values, but also draw from and produce a new political culture that considers Ottoman-Islamic toleration and pluralism the foundation of modern liberal multiculturalism in Turkey. Neoliberalism, in the same vein, far from being a totalizing concept describing an established set of political ideologies or economic policies, is conceived here as a historically and locally specific form of governmentality that must be analyzed by taking into account the multiple political forces that gave it its unique shape in Turkey.[13] My claim is that when these two rationalities merge in the cultural domain, they engender a new art of government, which I call the government of culture and diversity.

This approach is therefore less concerned with a particular political ideology or the question of “how to govern,” and more with the “different styles of thought, their conditions of formation, the principles and knowledges that they borrow from and generate, the practices they consist of, how they are carried out, their contestations and alliances with other arts of governing.”[14] In light of this view, and for practical purposes, Ottomentality is an alternative concept that I attempt to develop here to avoid the ambiguous meanings and analytical limitations of neo-Ottomanism. The concept underscores the convergence of neoliberal and neo-Ottoman rationalities as well as the interrelated discourses, projects, policies, and strategies that are developed around them for regulating cultural activities and directing inter-ethnic and inter-religious relations in Turkey. It pays attention to the techniques and practices that have significant effects on the relationships of state, culture, and the social. It is concerned with the production of knowledge, or truth, on the basis of which a new social reality of ‘freedom,’ ‘tolerance,’ and ‘multiculturalism’ in Turkey is constituted. Furthermore, it helps to identify the type of political subject whose demand for cultural rights and participatory democracy is reduced to market terms and a narrow understanding of multiculturalism, and whose criticism of this new social reality is increasingly subjected to judicial exclusion and discipline.

I shall note that Ottomentality is an authoritarian type of governmentality—a specific type of illiberal rule operating within the structure of modern liberal democracy. As Mitchell Dean notes, although the literature on governmentality has focused mainly on liberal democratic rule, practiced through individual subjects’ active role (as citizens) and exercise of freedom, there are also “non-liberal and explicitly authoritarian types of rule that seek to operate through obedient rather than free subjects, or, at a minimum, endeavor to neutralize any opposition to authority.”[15] He suggests that a useful way to approach this type of governmentality is to identify the practices and rationalities which “divide” or “exclude” those who are subjected to be governed.[16] According to Foucault’s notion of “dividing practices,” “[t]he subject is either divided inside himself or divided from others. This process objectivizes him. Examples are the mad and the sane, the sick and the healthy, the criminals and the ‘good boys’.”[17] Turkey’s growing neo-Ottoman cultural ensemble can be considered such exclusionary practices, which seek to regulate the diversifying culture by dividing subjects into categorical, if not polarized, segments based on their cultural differences. For instance, mundane practices such as going to museums and watching television shows may produce subject positions which divide subjects into such categories as the pious and the secular, the moral and the degenerate, and the Sunni-Muslim-Turk and the ethno-religious minorities.

Reassessing the neo-Ottoman cultural ensemble through the lens of Ottomentality

In this final section, I propose a reassessment of the emergent neo-Ottoman cultural ensemble by looking beyond the conventional conceptions of neo-Ottomanism as “ideology” and “foreign policy.” Using the analytical concept of Ottomentality, I aim to examine the state’s changing role and governing rationality in culture, the discursive processes of knowledge production for rationalizing certain practices of government, and the techniques of constituting a particular type of citizenry who acts upon themselves in accordance with the established knowledge/truth. Nonetheless, before proceeding to an analysis of the government of culture and diversity, a brief overview of the larger context in which the AKP’s Ottomentality took shape would be helpful.

Context

Since the establishment of the Turkish republic, the state has played a major role in maintaining a homogeneous national identity by suppressing public claims of ethnic and religious difference through militaristic intervention. The state’s strict control of cultural life in society, in particular its assertive secularist approach to religion and its ethnic conception of Turkish citizenship, resulted in unsettling tensions between ethno-religious groups in the 1980s and 1990s, i.e. the Kurdish question and the 1997 “soft coup.” These social tensions indicated the limits of state-led modernization and secularization projects in accommodating the ethnic and pious segments of society.[18] This was also a time when Turkey began to witness the declining authority of the founding ideology of Kemalism as an effect of economic and political liberalization. When the AKP came to power in 2002, one of the most urgent political questions was thus “the limits of what the state can—or ought for its own good—reasonably demand of citizens […] to continue to make everyone internalize an ethnic conception of Turkishness.”[19] At this political juncture, it was clear that a more inclusive socio-political framework was necessary in order to mitigate the growing tension resulting from identity claims.

Apart from domestic affairs, a few vital transnational initiatives also played a part in the AKP’s formulation of neoliberal and neo-Ottoman rationalities. First, in the aftermath of the attacks in New York on September 11 (9/11), 2001, the Middle East and Muslim communities around the world became the target of intensified political debates. In the midst of anti-Muslim and anti-terror propaganda, Turkey felt a need to rebuild its image by aligning with the United Nations’ (UN) resolution on “The Alliance of Civilizations,” which called for cross-cultural dialogue between countries through cultural exchange programs and transnational business partnerships.[20] Turkey took on the leading role in this resolution and launched extensive development plans designated to rebuild Turkey’s image as a civilization of tolerance and peaceful co-existence.[21] The Ottoman-Islamic civilization, known for its legacy of cosmopolitanism and ethno-religious toleration, hence became an ideal trademark for Turkey in the project of an “alliance of civilizations.”[22]

Second, Turkey’s accelerated EU negotiation between the late 1990s and mid 2000s provided a timely opportunity for the newly elected AKP government to launch “liberal-democratic reform,”[23] which would significantly transform the way culture was to be administered. Culture, among the prioritized areas of administrative reform, was now reorganized to comply with the EU integration plan. By incorporating the EU’s aspect of culture as a way of enhancing “freedom, democracy, solidarity and respect for diversity,”[24] the AKP-led national cultural policy would shift away from the state-centered, protectionist model of the Kemalist establishment towards one that highlights “principles of mutual tolerance, cultural variety, equality and opposition to discrimination.”[25]

Finally, the selection of Istanbul as the 2010 European Capital of Culture (ECoC) is particularly worth noting, as this event enabled local authorities to put the neoliberal and neo-Ottoman governing rationalities into practice through extensive urban projects and branding techniques. By sponsoring and showcasing different European cities each year, the ECoC program aims at promoting a multicultural European identity beyond national borders.[26] The 2010 Istanbul ECoC was an important opportunity for Turkey not only to promote its EU candidacy, but also for the local governments to pursue urban development projects.[27] Some of the newly formed Ottoman-themed cultural sites and productions were part of the ECoC projects for branding Istanbul as a cultural hub where the East and West meet. It is in this context that the interplay between the neoliberal and neo-Ottoman rationalities can be vividly observed in the form of the neo-Ottoman cultural ensemble.

Strong state, culture, and the social

Given the contextual background mentioned above, one could argue that the AKP’s neoliberal and neo-Ottoman rationalities arose as critiques of the republican state’s excessive intervention in society’s cultural life. The transnational initiatives that required Turkey to adopt a liberal democratic paradigm have therefore given way to the formulation and convergence of these two forms of governmentality, which would significantly challenge the state-centered approach to culture as a means of governing the social. However, it would be inaccurate to claim that the AKP’s prioritization of private initiatives in cultural governance has effectively decentralized or democratized the cultural domain, freeing it from the state’s authoritarian intervention and narrow definition of Turkish culture. Deregulation of culture entails sophisticated legislation concerning the roles of the state and civil society in cultural governance. Hence, for instance, the law on the promotion of culture, the law on media censorship, and the new national cultural policy prepared by the Ministry of Culture and Tourism explicitly indicate not only a new vision of national culture, but also the roles of the state and civil society in promoting and preserving national culture. It should be noted that culture as a governing technology is not an invention of the AKP government; culture has always been a major area of administrative concern throughout the history of the Turkish republic. As Murat Katoğlu illustrates, during the early republic, culture was conceptualized as part of the state-led “public service” aimed at informing and educating the citizens.[28] Arts and culture were essential means for modernizing the nation; for instance, the state-run cultural institutions, i.e. state ballet, theater, museum, radio and television, “[indicate] the type of modern life style that the government was trying to advocate.”[29] Nonetheless, the role of the state, the status of culture, and the techniques of managing it have been transformed as Turkey undergoes neoliberal reform. In addition, Aksoy suggests that what distinguishes the AKP’s neoliberal mode of cultural governance from that of the early republic’s modernization project is that market mentality has become the administrative norm.[30] Culture is now reconceptualized as an asset for advancing Turkey in global competition and a site for exercising individual freedom, rather than a mechanism of social engineering. And Turkey’s heritage of Ottoman-Islamic civilization in particular is utilized as a nation branding technique to enhance Turkey’s economy, rather than treated as a corrupt past to be forgotten. To achieve the aim of efficient, hence good, governance, the AKP’s cultural governance has relied heavily on privatization as a means of limiting state intervention. Thus, privatization has not only transformed culture into an integral part of the free market, but also redefined the state’s role as a facilitator of the culture market, rather than the main provider of cultural services to the public.

The state’s withdrawal from cultural service and its prioritization of civil society initiatives for preserving and promoting Turkish “cultural values and traditional arts”[31] have the immediate effect of weakening the authority of the Kemalist cultural establishment. Since many of the previously state-run cultural institutions are now managed with a corporate mentality, they begin to lose their status as state-centered institutions and their former significance in defining and maintaining a homogeneous Turkish culture. Instead, these institutions, together with other newly formed cultural sites and productions by private initiatives, are converted into a marketplace or cultural commodities in competition with each other. Hence, privatization of culture leads to the following consequences. First, it weakens and hollows out the 20th century notion of the modern secular nation state, which sets a clear boundary confining religion within the private sphere. Second, it gives way to the neoconservative force, which “models state authority on [religious] authority, a pastoral relation of the state to its flock, and a concern with unified rather than balanced or checked state power.”[32] Finally, it converts social issues that result from political actions into market terms and a sheer matter of culture, which is now left to personal choice.[33] As a result, far from producing a declining state, Ottomentality has constituted a strong state. In particular, neoliberal governance of the cultural field has enabled the ruling neoconservative government to mobilize a new set of political truths and norms for directing inter-ethnic and inter-religious relations in society.

New regime of truth

Central to Foucault’s notion of governmentality are “truth games”[34]—the activities of knowledge production through which particular thoughts are rendered truthful and practices of government are made reasonable.[35] What Foucault calls the “regime of truth” is not concerned with facticity, but with a coherent set of practices that connect different discourses and make sense of the political rationalities marking the “division between true and false.”[36] The neo-Ottoman cultural ensemble is a compelling case through which the AKP’s investment in thought, knowledge production, and truth telling can be observed. Two cases are particularly worth mentioning here as I work through the politics of truth in the AKP’s neoliberal governance of culture and neo-Ottoman management of diversity.

Between 2011 and 2014, the Turkish television historical drama Magnificent Century (Muhteşem Yüzyıl, Muhteşem hereafter), featuring the life of the Ottoman Sultan Süleyman, who is known for his legislative establishment in the 16th century Ottoman Empire, attracted wide viewership in Turkey and abroad, especially in the Balkans and the Middle East. Although the show played a significant role in generating international interest in Turkey’s tourism, culinary culture, and Ottoman-Islamic arts and history, etc. (which are fundamental aims of the AKP-led national cultural policy of promoting Turkey through arts and culture, including media exports),[37] it received harsh criticism from some Ottoman(ist) historians and a warning from the RTUK (Radio and Television Supreme Council, a key institution of media censorship and regulation in Turkey). The criticism concerned the show’s alleged misrepresentation of the Sultan as a hedonist and its harm to the moral and traditional values of society. Oktay Saral, an AKP deputy for Istanbul at the time, petitioned the parliament for a law to ban the show. He said, “[The] law would […] show filmmakers [media practitioners] how to conduct their work in compliance with Turkish family structure and moral values without humiliating Turkish youth and children.”[38] Recep Tayyip Erdoğan (then Prime Minister) also stated, “[those] who toy with these [traditional] values would be taught a lesson within the premises of law.”[39] After his statement, the show was removed from the in-flight channels of the national flag carrier Turkish Airlines.

Another popular media production, the 2012 blockbuster The Conquest 1453 (Fetih 1453, Fetih hereafter), which was acclaimed for its success at the domestic and international box office, also generated mixed receptions among Turkish and foreign audiences. Some critics in Turkey and European Christians criticized the film for its selective interpretation of the Ottoman conquest of Constantinople and its offensive portrayal of the (Byzantine) Christians. The Greek weekly To Proto Thema denounced the film as “conquest propaganda by the Turks” that “[failed] to show the mass killings of Greeks and the plunder of the land by the Turks.”[40] A Turkish critic likewise commented that the film portrays “extreme patriotism” in Turkey “without any hint of […] tolerance sprinkled throughout [the film].”[41] Furthermore, a German Christian association campaigned to boycott the film. Meanwhile, AKP officials, on the contrary, praised the film for its genuine representation of the conquest. As Bülent Arınç (then Deputy Prime Minister) stated, “This is truly the best film ever made in the past years.”[42] He also responded to questions regarding the film’s historical accuracy: “This is a film, not a documentary. The film in general fairly represents all the events that occurred during the conquest as the way we know it.”[43]

When Muhteşem and Fetih are examined within the larger context in which the neo-Ottoman cultural ensemble is formed, the connections between particular types of knowledge and governmental practice become apparent. First, the cases of Muhteşem and Fetih reveal the saturation of market rationality as the basis for a new model of cultural governance. When culture is administered in market terms, it becomes a commodity for sale and promotion as well as an indicator of a number of things for measuring the performance of cultural governance. When Turkey’s culture, in particular its Ottoman-Islamic cultural heritage, is converted into an asset and national brand to advance the country in global competition, the reputation and capital it generates become indicators of Turkey’s economic development and progress. The overt emphasis on economic growth, according to Irving Kristol, is one of the distinctive features that differentiate the neoconservatives from their conservative predecessors. He suggests that, for the neoconservatives, economic growth is what gives “modern democracies their legitimacy and durability.”[44] In the Turkish context, the rising neoconservative power, which consisted of a group of Islamists and secular, liberal intellectuals and entrepreneurs (at least in the early years of the AKP’s rule), consistently focused on boosting Turkey’s economy. For them, economic development seems to have become the appropriate way of making “conservative politics suitable to governing a modern democracy.”[45] Henceforth, such high profile cultural productions as Muhteşem and Fetih are valuable assets that serve the primary aim of the AKP-led cultural policy, because they contribute to growth in the related areas of tourism and the culture industry by promoting Turkey at the international level. Based on market rationality, as long as culture can generate productivity and profit, the government is doing a splendid job of governance. In other words, when neoliberal and neoconservative forces converge in the cultural domain, both culture and good governance are reduced to and measured by economic growth, which has become a synonym for democracy “equated with the existence of formal rights, especially private property rights; with the market; and with voting,” rather than political autonomy.[46]

Second, the AKP officials’ applause of Fetih on the one hand and criticism of Muhteşem on the other demonstrate their assertion of the moral-religious authority of the state. As the notion of nation state sovereignty has become weakened by the processes of economic liberalization and globalization, the boundary that separates religion and state has become blurred. As a result, religion becomes “de-privatized” and surges back into the public sphere.[47] This blurred boundary between religion and state has enabled the neoconservative AKP to establish links between religious authority and state authority as well as between religious truth and political truth.[48] These links are evident in the AKP officials’ various public statements declaring the government’s moral mission of sanitizing Turkish culture in accordance with Islamic and traditional values. For instance, as Erdoğan once reacted to his secular opponent’s comment about his interference in politics with religious views, “we [AKP] will raise a generation that is conservative and democratic and embraces the values and historical principles of its nation.”[49] According to this view, despite Muhteşem’s contribution to growth in the culture and tourism industries, it was subjected to censorship and legal action because its content did not comply with the governing authority’s moral mission. The controversy over Muhteşem illustrates the rise of a religion-based political truth in Turkey, which sees Islam as the main reference for directing society’s moral conduct and individual lifestyle. Henceforth, by rewarding desirable actions (i.e. with sponsorship law and tax incentives)[50] and punishing undesirable ones (i.e. through censorship, media bans, and jail terms for media practitioners’ misconduct), the AKP-led reform of the cultural field constitutes a new type of political culture and truth—one that is based on moral-religious views rather than rational reasoning.

Moreover, the AKP officials’ support for Fetih reveals their investment in a neo-Ottomanist knowledge, which regards the 1453 Ottoman conquest of Constantinople as the foundation of modern liberal multiculturalism in Turkey. This knowledge perceives Islam as the centripetal force for enhancing social cohesion by transcending differences between faith and ethnic groups. It rejects candid and critical interpretations of history and insists on a singular view of Ottoman-Islamic pluralism and a pragmatic understanding of the relationship between religion and state.[51] It does not require historical accuracy, since religious truth is cast as historical and political truth. For instance, a consistent, singular narrative of the conquest can be observed in such productions and sites as the Panorama 1453 History Museum, the television series Fatih, and the TRT children’s program Çınar. This narrative begins with Prophet Muhammad’s prophecy, which he received from the almighty Allah, that Constantinople would be conquered by a great Ottoman soldier. When history is narrated from a religious point of view, it becomes indisputable, as disputing it would imply a challenge to religious truth, hence to Allah’s will. Furthermore, the neo-Ottomanist knowledge conceives the conquest not only as an Ottoman victory in the past, but as an incontestable living truth in Turkey’s present. As Nevzat Bayhan, former general manager of Culture Inc. in association with the Istanbul Metropolitan Municipality (İBB Kültür A.Ş.), stated at the opening ceremony of Istanbul’s Panorama 1453 History Museum,

The conquest [of Istanbul] is not about taking over the city… but to make the city livable… and its populace happy. Today, Istanbul continues to present to the world as a place where Armenians, Syriacs, Kurds… Muslims, Jews, and Christians peacefully live together.[52]

Bayhan’s statement illustrates the significance of the 1453 conquest in the neo-Ottomanist knowledge, because it marks the foundation of a culture of tolerance, diversity, and peaceful coexistence in Turkey. While the neo-Ottomanist knowledge may conveniently serve the branding purpose in the post-9/11 and ECoC contexts, I maintain that it more significantly rationalizes the governmental practices reshaping cultural conduct and multicultural relations in Turkey. The knowledge also produces a political norm of indifference—one that is reluctant to recognize ethno-religious differences among the populace, uncritical of the limits of Islam-based toleration and multiculturalism, and, more seriously, indifferent about state-sanctioned discrimination and violence against the ethno-religious minorities.

Ottomentality and its subject

The AKP’s practices of the government of culture and diversity constitute what Foucault calls the “technologies of the self—ways in which human beings come to understand and act upon themselves within certain regimes of authority and knowledge, and by means of certain techniques directed to self-improvement.”[53] The AKP’s neoliberal and neo-Ottoman rationalities share a similar aim, as they both seek to produce a new ethical code of social conduct and transform Turkish society into a particular kind, one that is economically liberal and culturally conservative. They deploy different means to direct the governed in certain ways so as to achieve the desired outcome. According to Foucault, the neoliberal style of government is based on the premise that “individuals should conduct their lives as an enterprise [and] should become entrepreneurs of themselves.”[54] Central to this style of government is the production of freedom—referring to the practices that are employed to produce the necessary conditions for individuals to be free and take on the responsibility of caring for themselves. For instance, Nikolas Rose suggests that consumption, a form of governing technology, is often deployed to provide individuals with a variety of choices for exercising freedom and self-improvement. As such, the subject citizens are now “active,” or “consumer,” citizens, who understand their relationship with others and conduct their lives based on market mentality.[55] Unlike republican citizens, whose rights, duties, and obligations are primarily bound to the state, citizens as consumers “[are] to enact [their] democratic obligations as a form of consumption”[56] in the private sphere of the market.

The AKP’s neoliberal governance of culture has hence invested in liberalizing the cultural field by transforming it into a marketplace, in order to create a condition wherein citizens can enact their right to freedom and act upon themselves as a form of investment. The proliferation of the neo-Ottoman cultural ensemble in this regard can be understood as a new technology of the self, as it creates a whole new field for consumer citizens to exercise their freedom of choice (of identity, taste, and lifestyle) by providing them with a variety of trendy Ottoman-themed cultural products, ranging from fashion to entertainment. This ensemble also constitutes a whole new imagery of the Ottoman legacy with which consumer citizens may identify. Therefore, through participation in the cultural field, as artists, media practitioners, intellectuals, sponsors, or consumers, citizens are encouraged to think of themselves as free agents whose actions are a means of acquiring the necessary cultural capital to become cultivated and competent actors in the competitive market. This new technology of the self has also transformed the republican notion of Turkish citizenship into one that is activated upon individuals’ freedom of choice through cultural consumption at the marketplace.

Furthermore, as market mechanisms enhance the promulgation of moral-religious values, consumer citizens are also offered a choice of identity as virtuous citizens, who should conduct their lives and their relationships with others based on Islamic traditions and values. Again, the public debate over the portrayal of the revered Sultan Süleyman as a hedonist in Muhteşem, and the legal actions against the television producer, are exemplary of the disciplinary techniques for shaping individuals’ behaviors in line with conservative values. While consumer citizens exercise their freedom through cultural consumption, they are also reminded of their responsibility to preserve traditional moral values, family structure, and gender relations. Those who deviate from the norm are subjected to public condemnation and punishment.

Finally, as the neo-Ottomanist cultural ensemble reproduces and mediates a neo-Ottomanist knowledge in such commodities as the film Fetih and the Panorama 1453 History Museum, consumer citizens are exposed to a new set of symbolic meanings of Ottoman-Islamic toleration, pluralism, and peaceful coexistence, albeit through a view of the Ottoman past fixated on its magnificence rather than its monstrosity.[57] This knowledge sets the ethical code for private citizens to think of themselves in relation to other ethno-religious groups based on a hierarchical social order, which subordinates minorities to the rule of a Sunni Islamic government. When this imagery of magnificence serves as the central component in nation branding, such as to align Turkey with the civilization of peace and coexistence in the post-9/11 and ECoC contexts, it encourages citizens to take pride in and identify with their Ottoman-Islamic heritage. As such, Turkey’s nation branding can perhaps also be considered a novel technology of the self, as it requires citizens, be they business sectors, historians, or filmmakers, to take on an active role in building an image of a tolerant and multicultural Turkey through arts and culture. It is in this regard that I consider the neo-Ottoman rationality a form of “indirect rule of diversity”[58], as it produces a citizenry that actively participates in the reproduction of neo-Ottomanist historiography and continues to remain uncritical about the “dark legacy of the Ottoman past.”[59] Consequently, Ottomentality has produced a type of subject that is constantly subjected to dividing techniques “that will divide populations and exclude certain categories from the status of the autonomous and rational person.”[60]


ARTIFICIAL NEURAL NETWORK CONTROLLED DSTATCOM FOR POWER QUALITY IMPROVEMENT

Abstract: The main objective of this paper is to develop an artificial neural network control algorithm for the control of a DSTATCOM for the improvement of power quality. The presence of nonlinear loads causes the voltage to deviate and the current to be distorted from the sinusoidal waveform. Thus harmonics elimination, load balancing and voltage regulation are the heavy tasks that have to be accomplished to maintain the quality of the power. The performance of any device depends on the control algorithm used for the reference current estimation and the gating pulse generation scheme. Thus an artificial neural network based Back Propagation (BP) algorithm is proposed to generate the triggering pulses for the three phase H-bridge inverter (DSTATCOM). The BP-based control algorithm is used for the extraction of the fundamental weighted values of the active and reactive power components of the load currents, which are required for the estimation of the reference source current. Based on the difference between the target voltage and the generated voltage, the triggering pulses for the inverter are obtained by the BP algorithm. Then the voltage is injected at the point of common coupling to compensate the reactive power. Thus, by regulating the voltage and compensating the reactive power, the power quality can be improved. The simulation modelling of the Back Propagation algorithm controlled DSTATCOM is presented in this paper.

Index Terms: DSTATCOM, Artificial Neural Network, Back Propagation (BP) control algorithm, Reference Current Estimation, Power Quality.

I. MOTIVATION

Power quality in distribution systems affects all the connected electrical and electronic equipment. It is a measure of deviations in the voltage, current and frequency of a particular system and its associated components. In recent years, the use of power converters in adjustable speed drives, power supplies etc. has been continuously increasing. This equipment draws harmonic currents from the AC mains and increases the supply demands. These loads can be grouped as linear (lagging power factor loads), nonlinear (current or voltage source type harmonic generating loads), unbalanced and mixed types of loads. Some of the power quality problems associated with these loads include harmonics, high reactive power burden, load unbalancing, voltage variation etc.

A variety of custom power devices have been developed and successfully implemented to compensate various power quality problems in a distribution system. These custom power devices are classified as the DSTATCOM (Distribution Static Compensator), DVR (Dynamic Voltage Restorer) and UPQC (Unified Power Quality Conditioner). The DSTATCOM is a shunt-connected device, which can mitigate the current related power quality problems. The power quality at the PCC is governed by standards such as IEEE 519-1992, IEEE 1531-2003, IEC 61000 and IEC SC77A.

The effectiveness of a DSTATCOM depends upon the control algorithm used for generating the switching signals for the voltage source converter and the value of the interfacing inductors. For the control of DSTATCOM, many control algorithms are reported in the literature, based on the instantaneous reactive power theory, deadbeat or predictive control, instantaneous symmetrical component theory, nonlinear control techniques, modified power balance theory, the enhanced phase locked loop technique, the Adaline control technique, synchronous reference frame control, ANN and fuzzy based controllers, SVM based controllers, correlation and cross-correlation coefficient based control algorithms, etc.

In this paper, the power quality problem of voltage sag is detected by an artificial neural network; the trained data and the neural network output are simulated in the neural network blockset, and the sag is then mitigated using a DSTATCOM with a neural network control block. A feed-forward Artificial Neural Network (ANN) has been trained off-line to detect the initial time, the final time and the magnitude of voltage sags and swells. Besides, the designed system will be applied to detect transient voltages in electrical power systems. The performance of the designed measurement method will be tested on a simulation platform built in MATLAB/Simulink through the analysis of some practical cases.

II. BLOCK DIAGRAM OF THE PROPOSED SYSTEM

The block diagram of the proposed system consists of the three phase supply feeding the nonlinear load, the DSTATCOM block, the interfacing inductor, and the DSTATCOM controller. The DSTATCOM controller used in this project implements the Back Propagation method, a neural network control algorithm.

Figure: Block diagram of the proposed System

III. DSTATCOM

The D-STATCOM is a three phase, shunt connected, power electronics based reactive power compensation equipment, which generates and/or absorbs reactive power and whose output can be varied so as to maintain control of specific parameters of the electric power system. The D-STATCOM basically consists of a coupling transformer with a leakage reactance, a three phase GTO/IGBT voltage source inverter (VSI), and a dc capacitor. DSTATCOM topologies can be classified based on the switching devices, the use of transformers for isolation, and the use of transformers for neutral current compensation.

The ac voltage difference across the leakage reactance governs the power exchange between the D-STATCOM and the power system, such that the AC voltages at the bus bar can be regulated to improve the voltage profile of the power system, which is the primary duty of the D-STATCOM. However, a secondary damping function can be added to the D-STATCOM for enhancing power system oscillation stability. The D-STATCOM provides operating characteristics similar to a rotating synchronous compensator without the mechanical inertia. The D-STATCOM employs solid state power switching devices and provides rapid controllability of the three phase voltages, both in magnitude and phase angle. The D-STATCOM employs an inverter to convert the DC link voltage Vdc on the capacitor to a voltage source of adjustable magnitude and phase. Therefore the D-STATCOM can be treated as a voltage controlled source. The D-STATCOM can also be seen as a current controlled source.

Figure: Circuit Diagram of VSC- Based DSTATCOM

A voltage source converter (VSC)-based DSTATCOM is connected to a three phase ac mains feeding three phase linear/nonlinear loads with internal grid impedance. The device is realized using six IGBT (insulated gate bipolar transistor) switches with anti-parallel diodes. The three phase loads may be a lagging power factor load, an unbalanced load or a nonlinear load. For reducing ripples in the compensating currents, interfacing inductors are used at the AC side of the VSC. An RC filter is connected to the system in parallel with the load and the compensator to reduce the switching ripples in the PCC voltage injected by the switching of the DSTATCOM. The performance of the DSTATCOM depends upon the accuracy of harmonic current detection. For controlling the DSTATCOM, the Back Propagation algorithm, a neural network based control algorithm, is used.

The DSTATCOM is operated for the compensation of lagging power factor balanced load to correct the power factor at source side or to regulate the voltage at PCC. In ZVR mode, DSTATCOM injects currents to regulate the PCC voltage at the desired reference value of the voltage and the source currents may be leading or lagging currents depending on the reference value of PCC voltage.

The D-STATCOM has three basic operation modes, defined by its output current, I, which varies depending upon Vi.

Figure 7: Operation of DSTATCOM (a) No load mode (Vs=Vi),

(b) Capacitive mode, (c) Inductive mode

The DSTATCOM currents (iCabc) are injected as the required compensating currents to cancel the reactive power components and harmonics of the load currents, so that the loading due to reactive power components/harmonics on the distribution system is reduced. The controller of the D-STATCOM operates the inverter in such a way that the phase angle between the inverter voltage and the line voltage is dynamically adjusted, so that the D-STATCOM generates or absorbs the desired VAR at the point of connection. The phase of the output voltage of the inverter, Vi, is controlled in the same way as the distribution system voltage.


IV. DSTATCOM VOLTAGE CONTROLLER

The aim of the control scheme is to maintain a constant voltage magnitude at the point where a sensitive load is connected, under system disturbances. The control system only measures the r.m.s. voltage at the load point; i.e., no reactive power measurements are required. The VSC switching strategy of the existing system is based on a sinusoidal PWM technique, which offers simplicity and a good response, and since custom power is a relatively low-power application, PWM methods offer a flexible option. Thus the new approach of an artificial neural network based controller for the DSTATCOM is proposed in this project.

A. EXISTING SYSTEM: Sinusoidal Pulse Width Modulation control

In pulse width modulation control, the converter switches are turned on and off several times during a half cycle and the output voltage is controlled by varying the width of the pulses. The gate signals are generated by comparing a triangular wave with a DC signal. The lower order harmonics can be eliminated or reduced by selecting the number of pulses per half cycle. However, increasing the number of pulses would also increase the magnitude of the higher order harmonics, which could easily be filtered out.

The width of the pulses can be varied to control the output voltage, and the widths of the individual pulses can differ. It is possible to choose the widths of the pulses in such a way that certain harmonics are eliminated. The most common way of varying the width of the pulses is Sinusoidal Pulse Width Modulation. In SPWM the displacement factor is unity and the power factor is improved. The lower order harmonics are eliminated or reduced. The SPWM pulses are generated and the DSTATCOM is controlled in open loop.

B. PROPOSED SYSTEM: Artificial Neural architecture

A BP algorithm is implemented in a three phase shunt connected custom power device known as a DSTATCOM for the extraction of the weighted values of the load active and reactive power current components under nonlinear loads. The proposed control algorithm is used for harmonic suppression and load balancing in PFC and zero voltage regulation (ZVR) modes with dc voltage regulation of the DSTATCOM.

In this BP algorithm, the training of weights has three stages.

- Feed forward of the input signal training,

- Calculation and back propagation of the error signals,

- Upgrading of the training weights.

Figure: Standard model of BP algorithm

The network may have one or more layers. Continuity, differentiability and non-decreasing monotony are the main characteristics of this algorithm. It is based on a mathematical formula and does not need special features of the function in the learning process. It also has smooth variation in weight correction due to the batch updating of weights. The training process is slow due to the large number of learning steps, but once the weights are trained, the algorithm produces the output response very fast. In this application, the proposed control algorithm is implemented on a DSTATCOM for the compensation of nonlinear loads.

The training method most commonly used is the back propagation algorithm. The initial output pattern is compared with the desired output pattern, and the weights are adjusted by the algorithm to minimize the error. The iterative process finishes when the error becomes near zero.
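As a minimal sketch of this three-stage cycle, the example below trains a single sigmoid unit with the delta rule until the error is near zero. The input vector, target and initial weights are illustrative assumptions; only the learning rate η = 0.6 is taken from the simulation parameters in Table 2.

```python
import numpy as np

def sigmoid(x):
    # Activation used throughout this paper: f(x) = 1/(1 + e^(-x))
    return 1.0 / (1.0 + np.exp(-x))

def train_step(w, x, target, eta):
    """One feed-forward / error-backpropagation / weight-upgrade cycle."""
    z = sigmoid(np.dot(w, x))        # stage 1: feed forward the input signal
    err = target - z                 # stage 2: compute the output error
    grad = err * z * (1.0 - z) * x   # back-propagate through the sigmoid derivative
    return w + eta * grad, abs(err)  # stage 3: upgrade the training weights

w = np.zeros(3)                                    # illustrative initial weights
x, target, eta = np.array([0.2, 0.5, 0.1]), 0.7, 0.6
err = 1.0
while err > 1e-4:                                  # iterate until error is near zero
    w, err = train_step(w, x, target, eta)
```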

V. REFERENCE CURRENT GENERATION

A BP training algorithm is used to estimate the three phase weighted values of the load active power current components (wap, wbp and wcp) and reactive power current components (waq, wbq and wcq) from the polluted load currents, using the feed-forward and supervised learning principle.

A. DERIVATION OF REFERENCE CURRENTS

Figure: Proposed modeling of BP algorithm

In this estimation, the input layer for three phases (a, b, and c) is expressed as

ILap=wo+ iLauap+ iLbubp+ iLcucp (1)

ILbp=wo+ iLbubp+ iLcucp+ iLauap (2)

ILcp=wo+ iLcucp+ iLauap+ iLbubp (3)

Where wo is the selected value of the initial weight and uap, ubp and ucp are the in-phase unit templates.

In-phase unit templates are estimated using the sensed PCC phase voltages (vsa, vsb and vsc). Each template is the ratio of the phase voltage to the amplitude of the PCC voltage (vt). The amplitude of the sensed PCC voltages is estimated as

vt='[2 (vsa2 + vsb2 + vsc2)/3] (4)

The in-phase unit templates of PCC voltages (uap ,ubp , and ucp) are estimated as [13]

uap = vsa/vt, ubp = vsb/vt, ucp = vsc/vt (5)
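As a small worked example of equations (4) and (5), the sketch below computes the PCC voltage amplitude and the in-phase unit templates from one set of sensed phase voltages; the sampled values assume a balanced 415 V (L-L) system at the instant phase 'a' peaks, and are purely illustrative.

```python
import math

def inphase_unit_templates(vsa, vsb, vsc):
    """Eq. (4): amplitude vt; eq. (5): in-phase unit templates."""
    vt = math.sqrt(2.0 * (vsa**2 + vsb**2 + vsc**2) / 3.0)
    return vt, vsa / vt, vsb / vt, vsc / vt

# Balanced 415 V (L-L) system sampled when phase 'a' is at its peak:
vt, uap, ubp, ucp = inphase_unit_templates(338.8, -169.4, -169.4)
print(round(vt, 1), round(uap, 3))   # ~338.8 and ~1.0
```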

The extracted values of ILap ,ILbp and ILcp are passed through a sigmoid function as an activation function, and the output signals (Zap , Zbp , and Zcp) of the feed forward section are expressed as

Zap = f(ILap) = 1/(1 + e^(−ILap)) (6)

Zbp = f(ILbp) = 1/(1 + e^(−ILbp)) (7)

Zcp = f(ILcp) = 1/(1 + e^(−ILcp)) (8)

The estimated values of Zap ,Zbp and Zcp are fed to a hidden layer as input signals. The three phase outputs of this layer (Iap1 , Ibp1 and Icp1 ) before the activation function are expressed as

Iap1 =wo1 + wapZap+ wbpZbp+ wcpZcp (9)

Ibp1 =wo1 + wbpZbp+ wcpZcp+ wapZap (10)

Icp1 =wo1 + wcpZcp+ wapZap+ wbpZbp (11)

Where wo1 is the selected value of the initial weight in the hidden layer, and wap, wbp and wcp are the updated values of the three phase weights, using the average weighted value (wp) of the active power current component as a feedback signal.

The updated weight of phase ‘a’ active power current components of load current ‘wap’ at the nth sampling instant is expressed as

wap(n) = wp(n) + η{wp(n) − wap1(n)} f′(Iap1) zap(n) (12)

Where wp(n) and wap(n) are the average weighted value of the active power component of the load currents and the updated weighted value of phase ‘a’ at the nth sampling instant, respectively, and wap1(n) and zap(n) are the phase ‘a’ fundamental weighted amplitude of the active power component of the load current and the output of the feed-forward section of the algorithm at the nth instant, respectively. f′(Iap1) and η denote the derivative of the Iap1 components and the learning rate.

Similarly, for phase ‘b’ and phase ‘c,’ the updated weighted values of the active power current components of the load current are expressed as

wbp(n) = wp(n) + η{wp(n) − wbp1(n)} f′(Ibp1) zbp(n) (13)

wcp(n) = wp(n) + η{wp(n) − wcp1(n)} f′(Icp1) zcp(n) (14)

The extracted values of Iap1, Ibp1, and Icp1 are passed through a sigmoid function as an activation function to the estimation of the fundamental active components in terms of three phase weights wap1, wbp1 , and wcp1 as

wap1 = f(Iap1) = 1/(1 + e^(−Iap1)) (15)

wbp1 = f(Ibp1) = 1/(1 + e^(−Ibp1)) (16)

wcp1 = f(Icp1) = 1/(1 + e^(−Icp1)) (17)

The average weighted amplitude of the fundamental active power components (wp) is estimated using the amplitude sum of three phase load active power components (wap1 ,wbp1 and wcp1 ) divided by three. It is required to realize load balancing features of DSTATCOM. Mathematically, it is expressed as

wp= (wap1 + wbp1 + wcp1)/3 (18)
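A minimal sketch stringing equations (1) to (18) together for one sampling instant. Note that, exactly as printed above, the three phase sums in (1)-(3) and (9)-(11) contain the same terms, so the per-phase feed-forward outputs coincide here. The initial offsets wo and wo1 are illustrative assumptions; η = 0.6 follows Table 2.

```python
import math

def f(x):
    # Sigmoid activation, as in eqs. (6)-(8) and (15)-(17)
    return 1.0 / (1.0 + math.exp(-x))

def active_weight_update(iL, u, w, wo=0.1, wo1=0.1, eta=0.6):
    """One sampling instant of the active-power channel, eqs. (1)-(18).

    iL: (iLa, iLb, iLc) sensed load currents
    u:  (uap, ubp, ucp) in-phase unit templates from eq. (5)
    w:  dict with weights 'wap', 'wbp', 'wcp' and the average 'wp'
    """
    # Input layer, eqs. (1)-(3), then feed-forward outputs, eqs. (6)-(8)
    Z = f(wo + iL[0] * u[0] + iL[1] * u[1] + iL[2] * u[2])

    # Hidden layer, eqs. (9)-(11), then activation, eqs. (15)-(17)
    I1 = wo1 + (w['wap'] + w['wbp'] + w['wcp']) * Z
    w1 = f(I1)

    # Weight updates, eqs. (12)-(14); for a sigmoid, f'(x) = f(x)(1 - f(x))
    df = w1 * (1.0 - w1)
    for k in ('wap', 'wbp', 'wcp'):
        w[k] = w['wp'] + eta * (w['wp'] - w1) * df * Z

    w['wp'] = w1   # eq. (18): (wap1 + wbp1 + wcp1)/3, identical here
    return w

w = {'wap': 0.0, 'wbp': 0.0, 'wcp': 0.0, 'wp': 0.0}
w = active_weight_update(iL=(12.0, -6.0, -6.0), u=(1.0, -0.5, -0.5), w=w)
```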

First-order low-pass filters are used to separate the low frequency components. ‘k’ denotes the scale factor of the extracted active power components of current in the algorithm. After separating the low-frequency components and scaling to the actual value (because the output of the activation function is between 0 and 1), the result is represented as wLpA. Similarly, the weighted amplitudes of the reactive power components of the fundamental load currents (waq, wbq, and wcq) are extracted as

ILaq=wo+ iLauaq+ iLbubq+ iLcucq (19)

ILbq= wo+ iLauaq+ iLbubq+ iLcucq (20)

ILcq= wo+ iLauaq+ iLbubq+ iLcucq (21)

Where wo is the selected value of the initial weight and uaq, ubq and ucq are the quadrature components of the unit template.

The quadrature unit templates (uaq, ubq, and ucq) of the phase PCC voltage are estimated using (5) as

uaq = (−ubp + ucp)/√3, ubq = (3uap + ubp − ucp)/(2√3), ucq = (−3uap + ubp − ucp)/(2√3) (22)
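A direct transcription of equation (22), assuming the in-phase templates of equation (5) have already been computed:

```python
import math

def quadrature_unit_templates(uap, ubp, ucp):
    """Eq. (22): quadrature unit templates from the in-phase ones."""
    s3 = math.sqrt(3.0)
    uaq = (-ubp + ucp) / s3
    ubq = (3.0 * uap + ubp - ucp) / (2.0 * s3)
    ucq = (-3.0 * uap + ubp - ucp) / (2.0 * s3)
    return uaq, ubq, ucq
```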

The extracted values of ILaq, ILbq, and ILcq are passed through a sigmoid function as an activation function to the estimation of Zaq, Zbq, and Zcq

Zaq = f(ILaq) = 1/(1 + e^(−ILaq)) (23)

Zbq = f(ILbq) = 1/(1 + e^(−ILbq)) (24)

Zcq = f(ILcq) = 1/(1 + e^(−ILcq)) (25)

The estimated values of Zaq, Zbq, and Zcq are fed to the hidden layer as input signals. The three phase outputs of this layer (Iaq1, Ibq1, and Icq1) before the activation function can be represented as

Iaq1 =wo1 + waqZaq+ wbqZbq+ wcqZcq (26)

Ibq1 =wo1 + waqZaq+ wbqZbq+ wcqZcq (27)

Icq1 = wo1 + waqZaq+ wbqZbq+ wcqZcq (28)

Where wo1 is the selected value of the initial weight in the hidden layer, and waq, wbq and wcq are the three updated weights, using the average weighted value of the reactive power components of the currents (wq) as a feedback signal.

The updated weight of the phase ‘a’ reactive power components of load currents ‘waq’ at the nth sampling instant is expressed as

waq(n) = wq(n) + η{wq(n) − waq1(n)} f′(Iaq1) zaq(n) (29)

wq(n) and waq(n) are the average weighted value of the reactive power component of the load currents and the updated weight at the nth sampling instant, respectively, and waq1(n) and zaq(n) are the phase ‘a’ weighted amplitude of the reactive power current component of the load currents and the output of the feed-forward section of the algorithm at the nth instant, respectively. f′(Iaq1) and η denote the derivative of the Iaq1 components and the learning rate.

Similarly, for phase ‘b’ and phase ‘c,’ the updated weighted values of the reactive power current components of the load current are expressed as

wbq(n) = wq(n) + η{wq(n) − wbq1(n)} f′(Ibq1) zbq(n) (30)

wcq(n) = wq(n) + η{wq(n) − wcq1(n)} f′(Icq1) zcq(n) (31)

The extracted values of Iaq1, Ibq1, and Icq1 are passed through an activation function to the estimation of the fundamental reactive component in terms of three phase weights waq1, wbq1, and wcq1 as

waq1 = f(Iaq1) = 1/(1 + e^(−Iaq1)) (32)

wbq1 = f(Ibq1) = 1/(1 + e^(−Ibq1)) (33)

wcq1 = f(Icq1) = 1/(1 + e^(−Icq1)) (34)

The average weight of the amplitudes of the fundamental reactive power current components (wq) is estimated using the amplitude sum of the three phase load reactive power components of the load current (waq1, wbq1, and wcq1) divided by three. Mathematically, it is expressed as

wq= (waq1 + wbq1 + wcq1)/3 (35)

First-order low-pass filters are used to separate the low frequency components. ‘r’ denotes the scale factor of the extracted reactive power components in the algorithm. After separating the low-frequency components and scaling to the actual value (because the output of the activation function is between 0 and 1), the result is represented as wLqA.

B. Amplitude of Active Power Current Components of Reference Source Currents

An error in the dc bus voltage is obtained after comparing the reference dc bus voltage vdc* and the sensed dc bus voltage vdc of the VSC, and this error at the nth sampling instant is expressed as

vde(n) = vdc*(n) − vdc(n) (36)

This voltage error is fed to a proportional-integral (PI) controller whose output is required for maintaining the dc bus voltage of the DSTATCOM. At the nth sampling instant, the output of the PI controller is as follows

wdp(n) = wdp(n−1) + kpd{vde(n) − vde(n−1)} + kid·vde(n) (37)

Where kpd and kid are the proportional and integral gain constants of the dc bus PI controller, vde(n) and vde(n−1) are the dc bus voltage errors at the nth and (n−1)th instants, and wdp(n) and wdp(n−1) are the amplitudes of the active power component of the fundamental reference current at the nth and (n−1)th instants, respectively.

The amplitude of the active power current components of the reference source current (wspt) is estimated by the addition of the output of the dc bus PI controller (wdp) and the average magnitude of the load active currents (wLpA) as

wspt = wdp + wLpA (38)
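Equations (36)-(38), and equation (40) below with different gains, describe an incremental PI controller. A minimal sketch, using the dc bus gains and the 700 V reference from Table 2; the sensed voltage and the load weight wLpA are illustrative assumptions:

```python
class IncrementalPI:
    """Incremental PI controller of the form used in eqs. (37) and (40)."""
    def __init__(self, kp, ki):
        self.kp, self.ki = kp, ki
        self.out = 0.0        # previous output, e.g. wdp(n-1)
        self.err = 0.0        # previous error, e.g. vde(n-1)

    def step(self, err):
        # w(n) = w(n-1) + kp*{e(n) - e(n-1)} + ki*e(n)
        self.out += self.kp * (err - self.err) + self.ki * err
        self.err = err
        return self.out

dc_pi = IncrementalPI(kp=3.1, ki=0.9)   # kpd, kid from Table 2
vde = 700.0 - 698.5                     # eq. (36): vdc* - vdc (sensed value assumed)
wdp = dc_pi.step(vde)                   # eq. (37)
wLpA = 25.0                             # average load active weight (illustrative)
wspt = wdp + wLpA                       # eq. (38)
```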

C. Amplitude of Reactive Power Components of Reference Source Currents:

An error in the ac bus voltage is obtained by comparing the amplitude of the reference ac bus voltage vt* with the sensed ac bus voltage vt of the VSC. The extracted ac bus voltage error vte at the nth sampling instant is expressed as

vte(n) = vt*(n) − vt(n) (39)

The weighted output of the ac bus PI controller wqq for regulating the ac bus terminal voltage at the nth sampling instant is expressed as

wqq(n) = wqq(n−1) + kpt{vte(n) − vte(n−1)} + kit·vte(n) (40)

Where wqq(n) is part of the reactive power component of the source current, renamed wqq, and kpt and kit are the proportional and integral gain constants of the ac bus voltage PI controller.

The amplitude of the reactive power current components of the reference source current (wsqt) is calculated by subtracting the average load reactive currents (wLqA) from the output of the voltage PI controller (wqq) as

wsqt = wqq − wLqA (41)

D. Estimation of Reference Source Currents and Generation of IGBT Gating Pulses:

Three phase reference source active and reactive current components are estimated using the amplitude of three phase (a, b and c) load active power current components, PCC voltage in-phase unit templates, reactive power current components, and PCC quadrature voltage unit templates as

isap=wsptuap, isbp= wsptubp, iscp= wsptucp (42)

isaq=wsqtuaq, isbq= wsqtubq, iscq= wsqtucq. (43)

The sum of the reference active and reactive current components gives the reference source currents:

Isa*= isap+ isaq, Isb*= isbp+ isbq, Isc*= iscp+ iscq (44)

The sensed source currents (isa, isb, isc) and the reference source currents (isa*, isb*, isc*) are compared, and the current error signals are amplified through PI current regulators; their outputs are fed to a pulse width modulation (PWM) controller to generate the gating signals for the insulated-gate bipolar transistors (IGBTs) S1 to S6 of the VSC used as the DSTATCOM.
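Putting equations (42)-(44) together, the sketch below forms the reference source currents and the current error signals that feed the PWM current regulators; all numeric values are illustrative.

```python
def reference_source_currents(wspt, wsqt, up, uq):
    """Eqs. (42)-(44): reference source currents from the two amplitudes."""
    isp = [wspt * t for t in up]               # active components, eq. (42)
    isq = [wsqt * t for t in uq]               # reactive components, eq. (43)
    return [p + q for p, q in zip(isp, isq)]   # eq. (44)

i_ref = reference_source_currents(wspt=30.1, wsqt=0.0,
                                  up=(1.0, -0.5, -0.5),
                                  uq=(0.0, 0.866, -0.866))
i_sensed = (29.8, -15.2, -14.9)
errors = [r - s for r, s in zip(i_ref, i_sensed)]   # amplified by PI, then PWM
```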

VI. SIMULATION AND RESULTS

A. EXISTING SYSTEM: PWM CONTROLLED DSTATCOM

This shows the Simulink modeling of the DSTATCOM in which the gate signals are generated by the PWM controller. The VSC switching strategy is based on a sinusoidal PWM technique, which offers simplicity and a good response. Since custom power is a relatively low-power application, PWM methods offer a flexible option; this is the existing system.

Figure: Simulink modeling of PWM controlled DSTATCOM

B. RESULTS OF PWM CONTROLLED DSTATCOM

The figure below shows the waveforms of the source currents (isa, isb, isc), load currents (iLa, iLb, iLc) and compensating currents (iCa, iCb, iCc), together with the PCC line voltage (vab), under unbalanced nonlinear loads.

Figure: Dynamic performance of DSTATCOM with PWM controller

a)VSabc b)ISabc c)ILabc d)ICabc e)Vdc

C. THD ANALYSIS OF PWM CONTROLLED DSTATCOM

Harmonic spectra of the phase ‘a’ PCC voltage (vsa), source current (isa) and load current (iLa) are shown in the figures. The THDs of the phase ‘a’ PCC voltage, source current and load current are observed to be 0.01%, 18.61% and 14.25%, respectively.

Figure 14: Waveforms and harmonic spectra of PCC voltage of phase ‘a’

Figure 15: Waveforms and harmonic spectra of Source current of phase ‘a’

Figure 16: Waveforms and harmonic spectra of load current of phase ‘a’

D. PROPOSED SYSTEM: Neural Network Controlled DSTATCOM

The figure shows the modeling of the artificial neural network control algorithm in the MATLAB/Simulink environment.

Figure : Simulink Modeling of Neural Network Controlled DSTATCOM

E. UNIT TEMPLATE ESTIMATION

The figure shows the mathematical modeling of Unit template estimation which is essential for the reference current calculation in MATLAB/Simulink environment.

Figure: Mathematical modeling of Unit templates

F. REFERENCE CURRENT CALCULATION

The figure shows the mathematical modeling of reference current calculation in MATLAB/Simulink environment.

Figure 19: Mathematical modeling of Reference currents calculation

G. DSTATCOM CONTROLLER

Figure: Mathematical Modeling of DSTATCOM controller

H. RESULTS OF NEURAL NETWORK CONTROLLED DSTATCOM

The figure shows the waveforms of the source currents (isa, isb, isc), load currents (iLa, iLb, iLc) and compensating currents (iCa, iCb, iCc), together with the PCC line voltage (vab), under unbalanced nonlinear loads.

Figure : Dynamic Performance of DSTATCOM under Non Linear Load in PFC mode

a)VSabc b)ISabc c)ILa d) ILb e) ILc f)ICa g)ICb h)ICc i)ILabc j)Vdc

I. THD ANALYSIS OF NEURAL NETWORK CONTROLLED DSTATCOM

Harmonic spectra of the phase ‘a’ PCC voltage (vsa), source current (isa) and load current (iLa) are shown in the figures. The THDs of the phase ‘a’ PCC voltage, source current and load current are observed to be 0.02%, 2.46% and 11.50%, respectively.

Figure 22: Waveforms and harmonic spectra of PCC voltage of phase ‘a’ in PFC mode.

Figure 23: Waveforms and harmonic spectra of Source current of phase ‘a’ in PFC mode

Figure 24: Waveforms and harmonic spectra of load current of phase ‘a’ in PFC mode

VII. ANALYSIS ON THE PERFORMANCE OF DSTATCOM

Performance parameter        With PWM controller        With BP controller
(nonlinear load: 3 phase uncontrolled rectifier with RL load)

PCC voltage (V), %THD        338.8 V, 0.01%             338.5 V, 0.02%

Source current (A), %THD     12.55 A, 18.61%            30.1 A, 2.46%

Load current (A), %THD       40.06 A, 14.25%            36.97 A, 11.50%

DC bus voltage (V)           700 V                      700 V

Table 1: Comparative analysis on Performance of DSTATCOM in PFC mode

VIII. PARAMETERS USED IN SIMULATION:

This table shows the parameters that are considered to simulate the PWM controlled DSTATCOM and the Artificial Neural Network controlled DSTATCOM.

PARAMETER                                      ANN CONTROLLED DSTATCOM   PWM CONTROLLED DSTATCOM

AC supply source, three phase                  415 V (L-L), 50 Hz        415 V (L-L), 50 Hz

Source impedance                               Ls = 15 mH                Ls = 15 mH

Nonlinear load: three phase full bridge
uncontrolled rectifier                         R = 13 Ω, L = 200 mH      R = 13 Ω, L = 200 mH

Ripple filter                                  Rf = 5 Ω, Cf = 10 μF      Rf = 5 Ω, Cf = 10 μF

Switching frequency of inverter                10 kHz                    10 kHz

Reference dc bus voltage                       700 V                     700 V

Interfacing inductor (Lf)                      2.75 mH                   2.75 mH

Gains of PI controller for dc bus voltage      kpd = 3.1, kid = 0.9      kpd = 3.1, kid = 0.9

Gains of voltage PI controller                 kpt = 2.95, kit = 4       kpt = 2.95, kit = 4

Cut-off frequency of LPF for dc bus voltage    15 Hz                     15 Hz

Cut-off frequency of LPF for ac bus voltage    10 Hz                     10 Hz

Learning rate (η)                              0.6                       –

Table 2: Parameters of the PWM controlled DSTATCOM and ANN controlled DSTATCOM

IX. CONCLUSION

A VSC based DSTATCOM has been accepted as the most preferred solution for power quality improvement, for power factor correction and for maintaining the rated PCC voltage. A three phase DSTATCOM has been implemented for the compensation of nonlinear loads using the BP control algorithm to verify its effectiveness. The proposed BP control algorithm has been used for the extraction of the reference source currents that generate the switching pulses for the IGBTs of the VSC of the DSTATCOM. Various functions of the DSTATCOM, such as harmonic elimination and load balancing, have been demonstrated in PFC and ZVR modes with dc voltage regulation of the DSTATCOM.

From the simulation and implementation results, it is concluded that the DSTATCOM and its control algorithm are suitable for the compensation of nonlinear loads. The results show satisfactory performance of the BP control algorithm for harmonics elimination according to the IEEE-519 guideline of less than 5% THD. Its performance has been found satisfactory for this application because the extracted reference source currents exactly trace the sensed source currents during steady state as well as dynamic conditions. The dc bus voltage of the DSTATCOM has also been regulated to the rated value without any overshoot or undershoot during load variation. The large training time for complex systems and the selection of the number of hidden layers are the disadvantages of this algorithm.

REFERENCES

[1] Bhim Singh, P. Jayaprakash, D. P. Kothari, Ambrish Chandra and Kamal Al Haddad, "Comprehensive Study of DSTATCOM Configurations," IEEE Transactions on Industrial Informatics, vol. 10, no. 2, May 2014.

[2] Bhim Singh and Sabha Raj Arya, "Design and control of a DSTATCOM for power quality improvement using cross correlation function approach," International Journal of Engineering, Science and Technology, vol. 4, no. 1, pp. 74-86, April 2012.

[3] Alpesh Mahyavanshi, M. A. Mulla and R. Chudamani, "Reactive Power Compensation by Controlling the DSTATCOM," International Journal of Emerging Technology and Advanced Engineering, vol. 2, issue 11, November 2012.

[4] K. L. Sireesha and K. Bhushana Kumar, "Power Quality Improvement in Distribution System Using D-STATCOM," IJEAR, vol. 4, issue Spl-1, Jan-June 2014.

[5] S. L. Pinjare and Arun Kumar M, "Implementation of Neural Network Back Propagation Training Algorithm on FPGA," International Journal of Computer Applications, vol. 52, no. 6, August 2012.

[6] Anju Tiwari and Prof. Minal Tomar, "An Extensive Literature Review on Power Quality Improvement using DSTATCOM," International Journal of Emerging Technology and Advanced Engineering, vol. 4, issue 5, May 2014.

[7] Sujin P. R., T. Ruban Deva Prakash and L. Padma Suresh, "ANN Based Voltage Flicker Mitigation with DSTATCOM Using SRF Algorithm," International Journal of Current Engineering and Technology, vol. 2, no. 2, June 2012.

[8] R. C. Dugan, M. F. McGranaghan and H. W. Beaty, Electric Power Systems Quality, 2nd Ed., McGraw Hill, New York, 2006.

[9] Alfredo Ortiz, Cristina Gherasim, Mario Manana, Carlos J. Renedo, L. Ignacio Eguiluz and Ronnie J. M. Belmans, "Total harmonic distortion decomposition depending on distortion origin," IEEE Transactions on Power Delivery, vol. 20, no. 4, pp. 2651-2656, October 2005.

[10] Tzung Lin Lee and Shang Hung Hu, "Discrete frequency-tuning active filter to suppress harmonic resonances of closed-loop distribution power systems," IEEE Transactions on Power Electronics, vol. 26, no. 1, pp. 137-148, January 2011.

[11] K. R. Padiyar, FACTS Controllers in Power Transmission and Distribution, New Age International, New Delhi, 2008.

[12] IEEE Recommended Practices and Requirements for Harmonic Control in Electric Power Systems, IEEE Std. 519, 1992.

[13] Tzung-Lin Lee, Shang-Hung Hu and Yu-Hung Chan, "DSTATCOM with positive-sequence admittance and negative-sequence conductance to mitigate voltage fluctuations in high-level penetration of distributed generation systems," IEEE Transactions on Industrial Electronics, vol. 60, no. 4, pp. 1417-1428, April 2013.

[14] B. Singh, P. Jayaprakash and D. P. Kothari, "Power factor correction and power quality improvement in the distribution system," Journal of Electrical India, pp. 40-48, April 2008.

[15] Jinn-Chang Wu, Hurng Liahng Jou, Ya Tsung Feng, Wen Pin Hsu, Min Sheng Huang and Wen Jet Hou, "Novel circuit topology for three-phase active power filter," IEEE Transactions on Power Delivery, vol. 22, no. 1, pp. 444-449, January 2007.

Asthma

INTRODUCTION

Asthma is a common chronic respiratory disease with a global prevalence of more than 200 million. It is a heterogeneous disease identified by reversible airflow obstruction, bronchial hyperresponsiveness (BHR) and inflammation. Treatment with inhaled corticosteroids (ICS) and/or a combination with a long-acting β-agonist (LABA) may vary in dosage depending on the measured severity among patients. Asthma can also be classified into two categories: extrinsic (atopic) and intrinsic (non-atopic) asthma (Fahy, 2014). For this case study, I will discuss some characteristics of asthma, diagnosis and treatment recommendations, and current research in stratified medicine.

ETIOLOGY

Atopic asthma is triggered by environmental stimuli such as allergens (e.g. pollen, pet hair, dust mites etc.), air pollution, weather change and childhood exposure to tobacco smoke. Less than 15% of children with continuous wheezing develop asthma in adolescence, while those with eczema, obesity, atopic rhinitis and dermatitis are at higher risk of developing asthma. These comorbidities may complicate asthma management in adulthood (Subbarao, 2009). Furthermore, there is a higher prevalence of asthma among boys than girls, but a higher incidence among women than men, due to hormonal factors. Boys generally undergo asthma remission as a result of enhanced lung development and airway growth (Spahn, 2008), whereas hormonal influences could affect asthma control in pregnancy (Padmanabhan, 2014).

Other risk factors which could affect the immune system in new-onset asthma are exercise, emotional stress, occupational exposure to chemical substances such as paint, hair dyes and cleaning liquids, and the use of marijuana. Viral infections affecting the lungs in childhood (e.g. bronchiolitis) could also affect airway epithelial cells, resulting in the development of T-helper 2 (TH2)-related asthma (Fahy, 2014).

PATHOPHYSIOLOGY

Asthma is characterized by a cumulative loss of lung function over time. Changes to airway structure and composition, such as thickening of the basement membrane, increased bronchial vascularity, smooth muscle hyperplasia and hypertrophy, and goblet cell hyperplasia leading to mucous hypersecretion, also promote airflow obstruction. This is known as airway remodelling (Tschumperlin, 2011).

As a result of allergen exposure, inflammatory cells invade the airways and release mediators such as leukotrienes, histamine, cytokines and chemokines, triggering bronchoconstriction, airway remodelling and hyper-reactivity, as shown in Table 1 (Padmanabhan, 2014).

SIGNS AND SYMPTOMS

Patients experience wheezing, cough, dyspnea and chest tightness. Symptoms may vary in frequency if treatment is received, depending on their severity, and display hypersensitization to allergens that could trigger exacerbations. The difficulty with this disease is that its symptoms often overlap with other allergies (e.g. allergic rhinitis), making it strenuous to determine the primary cause and relieve symptoms (Padmanabhan, 2014).

DIAGNOSIS

Symptoms that are alleviated by bronchodilators indicate asthma as the underlying cause. Therefore, it is critical that tests are performed while patients are symptomatic, allowing accurate diagnosis.

The age of asthma onset should also be considered. Although asthma in children and adults shares similar characteristics, there are significant differences between them. For example, adult-onset asthma develops sensitization to occupational factors and is often misdiagnosed as COPD or chronic bronchitis (Holgate et al. 2006).

As it is a hereditary disorder, a detailed patient history is required to determine whether there are any signs of family history, atopy or long-term chemical exposure. Asthma displays a decrease in FEV1 (forced expiratory volume in 1 second) and a reduced FEV1/FVC ratio (below 0.75).

f_e = 1.88 × 10⁶ × (t/b)² × K [kg/cm²] (7)

f_y = Specified minimum yield point of the material or 72 [%] of the specified minimum ultimate strength whichever is less.

t = Thickness of the web plate reduced by 10 [%] as an allowance for corrosion.

b = Depth of plate panel.

K is a function of different types of loading, aspect ratio and boundary conditions. BigLift is using the following conditions for their buckling analyses.

Figure 2: Load condition Compression and Bending

K corresponding to axial Compression and Bending:

Where a/b ≥ 1.0: K = 4 (8)

If f_e/σ_y > 0.75, then f_t = σ_y × [1 − (3σ_y)/(16·f_e)] (17)

In this load case f_e/σ_y > 0.75, so f_t is calculated according to equation (17). These results are used to calculate f_crc:

f_crc = f_t (18)

Then the Compression area is calculated:

A_s = (a + b) × t (19)

The force that actually causes the Compression and Bending is calculated as:

Q = q × b (20)

This force is a distributed load on the structure being analysed. Dividing equation (20) by equation (19) gives the stress that occurs:

f_c = Q / A_s (21)

Shear Calculation

The K factor for shear is different from that for Compression and Bending. The K factor for Shear is determined as follows:

If a/b > 1, then K = √3 × [5.34 + 4(b/a)²] (22)

If f_e/σ_y > 0.75, then f_t = σ_y × [1 − (3σ_y)/(16·f_e)] (26)

In this load case f_e/σ_y > 0.75, so f_t is calculated according to equation (26). These results are used to calculate f_crs:

f_crs = f_t / √3 (27)

Then the shear area is calculated:

A_s = minimum of 2a × t or 2b × t (28)

The force that actually causes the shear stress is calculated as:

Q = q × b (29)

This force is a distributed load on the structure being analysed. Dividing equation (29) by equation (28) gives the stress that occurs:

f_s = Q / A_s (30)

Conclusion:

In this section the combined results of the Compression and Bending calculation and the Shear calculation determine whether buckling occurs.

No Buckling when: (f_c/f_crc)² + (f_s/f_crs)² ≤ 1.00 (31)

Buckling when: (f_c/f_crc)² + (f_s/f_crs)² > 1.00 (32)
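A minimal sketch of this buckling check, combining equation (7), the plasticity correction of equations (17)/(26), equation (27) and the criterion above. The branch for f_e/σ_y ≤ 0.75 (taking f_t = f_e) is an assumption, since the source only states the > 0.75 case; all numeric inputs are illustrative.

```python
import math

def elastic_buckling_stress(t, b, K):
    """Eq. (7): f_e = 1.88e6 * (t/b)^2 * K  [kg/cm^2] (t and b in cm)."""
    return 1.88e6 * (t / b) ** 2 * K

def corrected_stress(f_e, sigma_y):
    """Eqs. (17)/(26); the elastic branch below 0.75 is assumed."""
    if f_e / sigma_y <= 0.75:
        return f_e
    return sigma_y * (1.0 - 3.0 * sigma_y / (16.0 * f_e))

def no_buckling(f_c, f_crc, f_s, f_crs):
    """Combined criterion, eqs. (31)-(32)."""
    return (f_c / f_crc) ** 2 + (f_s / f_crs) ** 2 <= 1.00

# Example: 15 mm web reduced 10% for corrosion (1.35 cm), panel a = 120 cm, b = 60 cm
a, b, t, sigma_y = 120.0, 60.0, 1.35, 2400.0
f_crc = corrected_stress(elastic_buckling_stress(t, b, K=4.0), sigma_y)        # eqs. (8), (18)
K_shear = math.sqrt(3.0) * (5.34 + 4.0 * (b / a) ** 2)                         # eq. (22)
f_crs = corrected_stress(elastic_buckling_stress(t, b, K_shear), sigma_y) / math.sqrt(3.0)  # eq. (27)
print(no_buckling(f_c=1500.0, f_s=400.0, f_crc=f_crc, f_crs=f_crs))
```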

Finite Element Method

The Finite Element Method (FEM) is a numerical solution of systems of equations (matrices). In FEM a construction is divided into a finite number of elements. The elements are connected to each other at points called nodes. With the use of matrix equations, it is possible to calculate the displacements, forces and stresses at these nodes for certain load cases. By using FEM software it is possible to divide the model into small elements. The program then calculates the displacement of every node, which results in a very accurate analysis.

For the complete derivation of the Finite Element Method, see Hofman (1994).
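As a toy illustration of the matrix mechanics described above (not of the NX Nastran tanktop model itself), the sketch below assembles two-node bar elements into a global stiffness matrix and solves K·u = F for the nodal displacements; all dimensions and loads are illustrative.

```python
import numpy as np

# A cantilever bar split into 4 two-node elements; each element contributes
# the local stiffness k = EA/L * [[1, -1], [-1, 1]] to the global matrix.
E, A = 2.1e11, 1.0e-4                 # steel Young's modulus [N/m^2], area [m^2]
nodes = np.linspace(0.0, 1.0, 5)      # 5 nodes along a 1 m bar
n = len(nodes)
K = np.zeros((n, n))
for e in range(n - 1):
    L = nodes[e + 1] - nodes[e]
    ke = (E * A / L) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    K[e:e + 2, e:e + 2] += ke         # assemble element stiffness into K

F = np.zeros(n)
F[-1] = 1000.0                        # 1 kN axial force at the free end
u = np.zeros(n)                       # node 0 is fixed (constraint)
u[1:] = np.linalg.solve(K[1:, 1:], F[1:])   # displacements of the free nodes
```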

Chapter 5 describes the current buckling calculation according to the Euler equation, but this method neglects the surrounding structure. Therefore a more accurate analysis of the complete tanktop structure is necessary. The choice for these analyses is based on the Finite Element Method.

FEM-based tanktop model

In the FEM-based tanktop analyses the model is exposed to several loads. The analysis used is 'NX Nastran Static and Buckling'. This analysis calculates the static stresses that occur in the model during the load case, as well as buckling. The program calculates a so-called eigenvalue for the buckling situation. This eigenvalue is the factor by which the current load could be multiplied before the construction succumbs to buckling.

Step 1

To reproduce the calculation BigLift uses in the Excel sheets, it is necessary to model a plate in FEMAP with the same conditions as used in the Excel calculation. The FEM calculation reviews the construction based on the mesh. A mesh is an assembly of elements. If the elements are small the calculation is more accurate, but it will also result in a longer calculation time. For these analyses a mesh size of 50 [mm] by 50 [mm] is used.

Conditions:

Material: Steel
Load: Force per area on surface, 2 [N/mm2]
Constraints: Fixed on the shell
Analysis: NX Nastran Static and Buckling
Mesh size: 50 [mm]

This results in stresses from compression and bending, shear, and of course buckling.

The results of this analysis are shown in figure 212.

Step 2

Since FEM analysis of buckling situations is relatively new for BigLift Shipping, verifying the results is difficult. Therefore, splitting the modelling and analysing was necessary to ensure the accuracy of the model. After verifying that the previous step resulted in a good analysis, the model could be expanded: stiffeners and other girders were added and the nodes were connected to each other. This resulted in a very basic and local construction of the tanktop. The footprint of the saddle was added as a load case. Different values of the load were applied to ensure the accuracy of the model in different situations.

Conditions:

Material: Steel
Load: Force per area on surface, 2 [N/mm2]
Constraints: Fixed on the shell
Analysis: NX Nastran Static and Buckling
Mesh size: 50 [mm]

The results of these analyses are shown in figure 221

Step 3

After step 2, the construction of the tanktop is expanded. The construction is modelled from frame 52 to frame 58. This is the construction which, according to the distribution calculation sheet of BigLift Shipping, is exposed to the greatest forces and stresses during sea-going conditions. The footprint of the load case is modelled as a surface with a very high Young's modulus. BigLift Shipping assumes in their calculations that the load is infinitely rigid. With this assumption the interaction between the cargo and the construction of the vessel can be ignored. In this final step the analysis covers the complete construction that has been modelled. With this step it is also possible to review the construction that is actually exposed to the stresses.

Conditions:

Material: Steel
Load: Force per area on surface, 2 [N/mm2]
Constraints: Fixed on the shell
Analysis: NX Nastran Static and Buckling
Mesh size: 50 [mm]

Analysing calculation results

Implementation

Conclusion

Recommendations

Bibliography

Books

Asmus, K. (2001). Bijzondere Aspecten van de sterkte van Scheepsconstructies.

Hofman, G. E. (1994). Eindige elementen methode: HB Uitgevers.

Lambe, T. W., & Whitman, R. V. (1969). Soil Mechanics: Wiley.

Lewis, E. V. Principles of Naval Architecture (Second Revision), Volume I – Stability and Strength: Society of Naval Architects and Marine Engineers (SNAME).

Okumoto, Y., Takeda, Y., Mano, M., & Okada, T. (2009). Strength Evaluation. In Y. Okumoto, Y. Takeda, M. Mano & T. Okada (Eds.), Design of Ship Hull Structures (pp. 33-80): Springer Berlin Heidelberg.

Taggart, R., Architects, S. o. N., & Engineers, M. (1980). Ship design and construction: Society of Naval Architects and Marine Engineers.

Internet

Biglift. (2015). from http://www.bigliftshipping.com

Chevron. (2015). Wheatstone project. Retrieved 04-03-2015, from http://www.chevronaustralia.com/our-businesses/wheatstone

Rules and Regulations

Rules for Ships, Det Norske Veritas, Ch. 1 Sec. 1 (Det Norske Veritas, 2009).

Internet Addiction in the Students of Fiji School of Nursing and Its Impact on Academic Performances

1. Introduction

1.1 Background

The world is advancing every day, and with that advancement come new trends, inventions and creations that have been incorporated into the lifestyles of many people. One of the most astonishing creations of mankind has been the internet. Internet usage has increased drastically in recent years. The internet is widely used for work, communication, shopping, entertainment and information. However, despite the benefits, the vast increase in internet usage has led to internet addiction in many people. Kim et al (2004) describe internet addiction as a compulsive need to spend a lot of time on the internet, to the point where a person's relationships, work and health suffer. Internet addiction is not only prevalent in developed countries such as South Korea, Japan and the USA, where technology and the internet are readily available, but also in developing countries such as Malaysia, India, China and even some Pacific Island countries, where people, recognizing the works of the internet, possibly consider it a daily lifestyle necessity (Lee et al, 2014). The majority of people in developed countries have access to the internet; young or old, rich or poor, they are frequently online, and therefore internet addiction rates in developed countries are higher than in developing countries (Wellman & Hogan, 2004). Internet addiction is likely to be present in our society, as the number of internet users is increasing every day; however, the use of the internet is influenced by the socio-economic gap, because poorer folks are not increasing their usage rate in comparison to wealthier folks (Wellman & Hogan, 2004). It has been known that young adults, mostly college or university students, are more likely to go online than any other population. It has become common for students to be Googling or Facebooking as though it were a daily activity such as eating or sleeping. Young (1996) has stated that addiction to the internet is similar to being addicted to drugs or alcohol, which ultimately results in academic, social and occupational impairment. Students with internet addiction tend to face severe declines in academic performance, since study time is spent on 'surfing irrelevant websites, using social media and online gaming' (Kandell, 1998).

1.2 Problem Statement

Internet addiction may be a new concept for local societies, but that does not mean it is not present. College or university life is known to bring serious challenges into the lives of many students. Undergoing all these challenges leaves students exposed to environmental influences, one of them being internet addiction. According to Mercola (2014), internet addiction or internet use disorder may not yet be defined as a mental disorder under the Diagnostic and Statistical Manual of Mental Disorders (DSM-5); however, many researchers have argued that internet addiction may be a contributing factor towards, or be, a borderline addictive disorder. Various countries such as Korea, China and Taiwan have recognized the threat internet addiction brings as a public health problem and are trying to address it (Lee et al, 2014). The purpose of this research is to investigate the existence of internet addiction among the local population of college students, particularly students of the Fiji School of Nursing, and to explore the relationship between internet addiction and its impact on the academic life of the students. Another reason is that people should be aware of the psychological impairment that is caused by internet addiction, especially among college students.

1.3 Literature Review

According to Byun et al (2008), internet addiction in any individual is assessed through five dimensions: compulsive use, withdrawal, tolerance, interpersonal and health problems, and time management issues. The researchers (Byun et al, 2008) discovered that internet addiction is related to interpersonal skills, personality and intelligence. It was stated that as an individual's internet usage or internet addiction increases, attention deficit, hyperactivity and impulsiveness increase. The meta-analysis (Byun et al, 2008) discovered that increasing network capabilities contribute to social isolation and functional impairment of daily activities.

Moving on, Yen et al. (2007) state that family factors are one of the causes of internet addiction. Since families play a large role in adolescent development and socialization, family factors are among the major risk factors for internet addiction. In a quantitative study, Yen et al. (2007) demonstrated that negative family attitudes and behaviours, such as parent-adolescent conflict, lower family function, and sibling alcohol use and abuse, all contribute toward internet addiction in adolescents. The research suggested that internet addiction may be a form of problematic behaviour, and that ineffective discipline and supervision and poor intra-family relationships aid the initiation of problematic behaviours.

On the other hand, Fu et al. (2010) recognize internet addiction as a social problem in which younger people, being less self-regulated, coordinated and focused, become more susceptible to media influences. The study (Fu et al., 2010) also reports that, of 511 participants (students) aged 16-24 from 439 households in Hong Kong, 38% were categorized as internet addicted.

One argument, advanced by Fu et al. (2010), is that female students are more likely to be addicted to the internet. This is, however, contradicted by Young (1996), Yen et al. (2007), Liu & Kuo (2007) and Niemz et al. (2005), who state that male students are more likely to be internet addicted. Niemz et al. (2005) and Young (1996) argue that the internet addiction rate is higher among males because they use the internet to fuel addictions to online gaming, gambling and pornography, and because, unlike females, males find it difficult to admit that they are facing problems.

Turning to the consequences of internet addiction: in an overview, Murali & George (2006) state that internet addiction affects many aspects of an individual’s life, including the interpersonal, social, occupational, psychological and physical. The negative impacts fall particularly on family and social life, as internet addicts tend to neglect regular family and social activities and interests. Internet addiction also contributes to poor performance in schools and colleges. Psychosocial consequences include loneliness, frustration, depression and suicidal tendencies (Murali & George, 2006).

Further on, the main negative effect of internet addiction on college students lies in academic performance. According to Akhter (2013), the academic problems caused by internet addiction include a decline in study habits, a significant drop in grades, missed classes, placement on academic probation and poor management of extracurricular activities. Akhter (2013) suggested that university students are a high-risk group for internet addiction because of their available free time, lack of parental supervision and need to escape the pressures of university life. Akhter (2013) used Young’s Internet Addiction Test in a survey of undergraduates at the National University of Sciences and Technology in Pakistan and concluded that 1 in every 10 university students is internet addicted, and that, of the internet-addicted students, 65% are likely to fail or drop out of school.

Another study, conducted by Anderson (2001) on college students, concluded that students’ excessive internet use results in sleep problems and reduced everyday activities, which lead to academic failure. From the survey, Anderson (2001) found that the average student is online for 100 minutes a day. She also reported that 16% of all participants were confirmed as internet addicted, and that these students spent more than 400 minutes a day online. Similarly, Young (1996) states that internet availability does not improve the academic performance of students: 58% of the student participants in that research showed signs of declining study habits and low grades due to internet addiction, and 43% of those participants failed their annual examinations.

2. Objectives and Aim

Aim: To determine the impact of internet addiction on the academic performance of students of the Fiji School of Nursing.

Objectives:

– To find out the factors associated with internet addiction in students.

– To find out the direct and indirect impacts of internet addiction on the students.

3. Methodology

3.1 Study type

This research is a descriptive quantitative study that will be carried out at the Fiji School of Nursing. A quantitative approach will help quantify the relationship between internet addiction and the academic performance of the students, while the descriptive component will help describe the actual relationship between the variables, in this case internet addiction and academic performance. This study type was chosen because questionnaires were selected as the source of data, making data gathering more effective. Questionnaires are the best option for data collection here because students can complete the questions in their free time rather than having to make time for interviews, and because the focus group of this study is a limited number of students.

Study design

This is a prospective cross-sectional study to clarify how internet addiction affects the academic performance of students. A prospective design was chosen because it is an easy and efficient method of gathering data: it allows the occurrence of internet addiction among the students to be tracked over a certain period, and its impact on academic performance to be observed. Under the cross-sectional design, data will be collected from the students of the Fiji School of Nursing at one point in time and used as an overall picture of the population.

Variables

Some of the variables the study may include are:

– The demographic data of the students: age, gender and ethnicity.

– The number of students who use the internet daily.

– The availability of technology for internet use: laptops, tablets, phones, school ICT.

– The number of cases of internet addiction among the students of the Fiji School of Nursing.

– The academic performance/grades of the students of the Fiji School of Nursing.

– The amount of time students spend studying in a day.

– The amount of data (estimated) a student uses for internet access in a month.

3.2 Sampling

This study will be carried out at the Fiji School of Nursing in Tamavua, Suva, Fiji Islands. A sample of undergraduate students from each of the three year levels (year one, year two and year three) will be recruited. Participants will be recruited voluntarily, with both verbal and written consent obtained. A stratified sampling technique will be used: the student population across years 1, 2 and 3 will be divided into two groups. Group one (A) consists of students who have access to the internet on a daily basis, e.g. students with internet-enabled phones, laptops and tablets. Group two (B) consists of students who do not have continuous daily access to the internet, e.g. students without smartphones or laptops. The questionnaires will be administered only to group one (A), the students with daily internet access.

Inclusion criteria: students of the Fiji School of Nursing who use the internet daily through internet-enabled phones, laptops or tablets, or through the school library or ICT facilities (Group A).

Exclusion criteria: students who do not use the internet daily through smartphones, laptops, tablets, the school library or ICT facilities (Group B).
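As a minimal illustrative sketch of this screening step (assuming a hypothetical student roster with a self-reported daily-access flag; the record structure and field names are invented for illustration and are not part of the proposal), the Python snippet below splits students into Group A and Group B according to the criteria above.

# Illustrative sketch: screening a hypothetical roster into the proposal's
# Group A (daily internet access, surveyed) and Group B (excluded).
from dataclasses import dataclass

@dataclass
class Student:
    student_id: str
    year: int              # year level: 1, 2 or 3
    daily_internet: bool   # self-reported daily access (phone, laptop, tablet, school ICT)

def split_groups(roster):
    group_a = [s for s in roster if s.daily_internet]      # inclusion criteria met
    group_b = [s for s in roster if not s.daily_internet]  # excluded from the survey
    return group_a, group_b

roster = [Student("S001", 1, True), Student("S002", 2, False), Student("S003", 3, True)]
group_a, group_b = split_groups(roster)
print(f"Group A: {len(group_a)} students; Group B: {len(group_b)} students")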

3.3 Plan for data collection

A structured and pre-tested questionnaire will be prepared and distributed among students of all three year levels of the Fiji School of Nursing who fall under the group one (A) category, i.e. those with daily access to the internet. The questionnaire will cover the students’ lifestyle around the internet: how often they use the internet and through what means, how often they study, whether internet usage clashes with their study time, and what effect daily internet use has on their grades. The data will be collected from April to June 2015. Informed consent, both oral and written, will be obtained from the students when the questionnaires are distributed. The questionnaire will be typed and printed in English, and a sample questionnaire is attached in the annex of this proposal.

3.4 Plan for data processing and analysis

The data will be collected via the structured questionnaire distributed to the students, and the responses will be analyzed using the Statistical Package for the Social Sciences (SPSS) for Windows, version 21.0. SPSS is a statistical analysis program used for managing, analyzing and presenting data.

Confidentiality will be maintained throughout the research: all personal data of the participants will be accessible only to the researchers. The data will be analyzed numerically and quantified. Words will be transformed into quantitative categories through the process of coding, and the resulting numbers will then be analyzed statistically to determine, for instance, the percentage of students facing academic problems due to internet addiction. A minimal illustration of this coding step is sketched below.
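The sketch is shown in Python purely to make the coding idea concrete; the planned analysis itself is in SPSS, and the response labels, codes and cut-off below are assumptions rather than the study’s actual codebook.

# Illustrative sketch: coding categorical questionnaire answers into numbers
# and computing one percentage. Labels, codes and the cut-off are assumed.

CODES = {"never": 0, "rarely": 1, "sometimes": 2, "often": 3, "always": 4}

# Hypothetical answers to question 5 (grades/school work suffering):
responses = ["often", "always", "rarely", "often", "sometimes"]
coded = [CODES[r] for r in responses]

# Percentage answering "often" or "always" (coded >= 3):
affected = sum(1 for c in coded if c >= 3)
percentage = 100.0 * affected / len(coded)
print(f"{percentage:.1f}% of respondents report frequent academic impact")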

3.5 Ethical Considerations

Ethical approval for carrying out this research will be obtained from the local research committees. The proposal will first be sent to the Department Research Committee of the Fiji School of Nursing, then to the College of Health Research Ethics Committee, and then forwarded to the Fiji National Research Ethics Review Committee. Once approval for the research is given, written permission to start data collection in the school will be sought from the Head of School, Fiji School of Nursing.

Written and verbal consent will also be gathered from the participants upon distribution of the questionnaires. Confidentiality will be maintained, and participants will be assured that no personal data such as names and addresses will be collected. An information sheet will be given with the consent form, explaining what the research is about, what participants are actually asked to take part in, what the risks of participation are, and how they were selected to participate in the research.

4. Work Plan

Activity schedule, by month (March, April, May, June, July, Aug, Sept, Oct, Nov, Dec):

Submission of research proposal to DRC

Submission to CHREC

Data collection

Data analysis

Report writing

Dissemination seminar

5. Budget

Expenses and totals:

1. Data collection and data entry (research assistant): F$80.00

2. Supplies (printing questionnaires, stationery, packaging box): F$80.00

3. Telecommunications: F$10.00

4. Transport allowance: F$30.00

5. Data analysis: F$80.00

6. Publication: F$100.00

Grand total: F$380.00

6. Plan for administration, monitoring, and utilization of results

The work plan will be strictly followed in order to complete the research, data collection and data analysis on time. Expenses will be kept within the budget, and time and resources will be used wisely. The results will be presented in numeric and percentage form, using bar graphs, pie charts and line graphs. The final report with recommendations will be submitted to the Ministry of Health and disseminated as follows:

– Research Office, Ministry of Health

– Local and Pacific Regional Health research conferences

– Tutors and students of the Fiji School of Nursing

– Presentation at local health research conferences and symposia

Reference List

Akhter, N. (2013). Relationship between Internet Addiction and Academic Performance among University Undergraduates. Academic Journals, 8(19), 1793-1796. Retrieved from http://www.academicjournals.org/article/article1382342222_Akhter.pdf

Anderson, K. (2001). Internet Use Among College Students: An Exploratory Study. Journal of American College Health, 50(1), 20-28. Retrieved from http://faculty.mwsu.edu/psychology/dave.carlston/WritinginPsychology/Internet/2/i5.pdf

Byun, S., Ruffini, C., Mills, J. E., Douglas, A. C., Niang, M., Stepchenkova, S., Lee, S. K., … Blanton, M. (2009). Internet Addiction: Metasynthesis of 1996-2006 Quantitative Research. CyberPsychology & Behavior, 12(2), 203-207.

Fu, K., Chan, W. C., Wong, C. W. P., & Yip, P. W. (2010). Internet addiction: prevalence, discriminant validity and correlates among adolescents in Hong Kong. The British Journal of Psychiatry, 196(6), 486-492. Retrieved from http://bjp.rcpsych.org/content/196/6/486.full-text.pdf+html

Kandell, J. J. (1998). Internet Addiction on Campus: The Vulnerability of College Students. CyberPsychology & Behavior, 1(1), 11-17. Retrieved from http://online.liebertpub.com/doi/abs/10.1089/cpb.1998.1.11

Kim, H. S., Chae, K. C., Rhim, Y. J., & Shin, Y. M. (2004). Familial Characteristics of Internet Overuse Adolescents. Korean Association of Medical Journal Editors, 43(6), 733-739. Retrieved from http://www.koreamed.org/SearchBasic.php?RID=0055JKNA/2004.43.6.733&DT=1

Lee, J. Y., Shin, K. M., Cho, S., & Shin, Y. M. (2014). Psychosocial Risk Factors Associated with Internet Addiction in Korea. Psychiatry Investigation, 11(4), 380-386. Retrieved from http://synapse.koreamed.org/DOIx.php?id=10.4306/pi.2014.11.4.380

Liu, C., & Kuo, F. (2007). A Study of Internet Addiction through the Lens of the Interpersonal Theory. CyberPsychology & Behavior, 10(6), 801-804. Retrieved from http://www.encognitive.com/files/AStudyofInternetAddictionthroughtheLensoftheInterpersonalTheory.pdf

Mercola. (2014). Internet Addiction is the New Mental Health Disorder. Retrieved March 6, 2015, from http://articles.mercola.com/sites/articles/archive/2012/11/24/internet-addiction.aspx

Murali, V., & George, S. (2006). Lost online: an overview of internet addiction. Advances in Psychiatric Treatment, 13(1), 24-30. Retrieved from http://apt.rcpsych.org/content/13/1/24#sec-5

Niemz, K., Griffiths, M., & Banyard, P. (2005). Prevalence of Pathological Internet Use among University Students and Correlations with Self-Esteem, the General Health Questionnaire (GHQ), and Disinhibition. CyberPsychology & Behavior, 8(6), 562-568.

Wellman, B., & Hogan, B. (2004). The Immanent Internet. NetLab. Retrieved from http://groups.chass.utoronto.ca/netlab/wp-content/uploads/2012/05/The-Immanent-Internet.pdf

Yen, J., Yen, C., Chen, C., Chen, S., & Ko, C. (2007). Family Factors of Internet Addiction and Substance Use Experience in Taiwanese Adolescents. CyberPsychology & Behavior, 10(3), 323-329. Retrieved from http://ntur.lib.ntu.edu.tw/bitstream/246246/173340/1/16.pdf

Young, K. S. (1996). Internet addiction: the emergence of a new clinical disorder. CyberPsychology & Behavior, 1(3), 237-244. Retrieved from http://www.chabad4israel.org/tznius4israel/newdisorder.pdf

Annex

Questionnaires: Internet Addiction in Students of FSN and its Impact on Academic Achievements

Age:

Sex: M / F

Personal status:

Student: Yes / No

If yes, what are you studying?

1. How many hours do you spend surfing in one week?

2. How often do you find that you stay online longer than you intended?

3. How often do you neglect school work to spend more time online?

4. How often do others in your life complain to you about the amount of time you spend online?

5. How often do your grades or school work suffer because of the amount of time you spend online?

6. How often do you become defensive or secretive when anyone asks you what you do online?

7. How often do you block out disturbing thoughts about your life with soothing thoughts of the Internet?

8. How often do you find yourself anticipating when you will go online again?

9. How often do you fear that life without the Internet would be boring, empty, and joyless?

10. How often do you snap, yell, or act annoyed if someone bothers you while you are online?

11. How often do you lose sleep due to late-night log-ins?

12. How often do you try to cut down the amount of time you spend online and fail?

13. How often do you try to hide how long you’ve been online?

14. How often do you choose to spend more time online over going out with others?

15. How often do you feel depressed, moody, or nervous when you are off-line, which goes away once you are back online?

Multicultural competence

In the modern context, demographic changes are becoming more prominent across the globe, and this phenomenon suggests that multicultural counseling is inevitable. Hence, the importance of counselors’ multicultural competence, which refers to the ability to interact effectively with individuals who are culturally or socioeconomically different, cannot be overstated. In such instances, a counselor has to be multiculturally competent in order to communicate effectively with a client who comes from a different culture. With that in mind, this essay will discuss the relevance of multicultural competence to the effectiveness of multicultural counseling.

Due to the increasing cultural diversity of society, individuals seeking help from counselors may come from various cultural backgrounds. In order to adapt to this situation, counselors are required to understand the various ways in which culture can affect the counseling relationship. For example, a male counselor might casually greet his client, who happens to be a Muslim woman, with a handshake. Unaware of the client’s cultural background, in which males are not permitted to touch or shake hands with the opposite gender, the counselor is actually performing a forbidden action. This lack of sensitivity to an individual’s cultural background can have serious consequences, such as the individual’s refusal to participate, which in turn hinders the development of the counseling relationship (Ahmed, Wilson, Henriksen Jr., & Jones, 2011). Therefore, I believe that the counselor’s role has evolved with the diversity of culture, requiring counselors to be more culturally aware and to provide multicultural guidance; in other words, to be multiculturally competent.

To elaborate, multicultural competence is fluency in more than one culture, specifically whichever culture the individual is currently in. Sue and Sue (2012) defined the multiculturally competent counselor along three main dimensions. Firstly, being multiculturally competent means being actively aware of one’s own assumptions about behaviors, values and biases. Secondly, the multiculturally competent counselor attempts to understand the perspective of a culturally different client. Lastly, multicultural counselors actively develop and practice intervention strategies when guiding clients from different cultural backgrounds. All in all, multicultural competence enables counselors to realize that standard counseling methods might not benefit a client from a different cultural background, and to understand that culture is not to be held accountable for a client’s problems.

However, even for the multiculturally competent, cultural groups are not discrete; they overlap. In today’s multicultural context, individuals acculturate to different cultures, which blurs the differences between individuals. Furthermore, even if it were possible to classify every client into a different subgroup, it would be an insurmountable task to be prepared for every possible client. Therefore, instead of emphasizing clients’ differences, Patterson (2004) suggested employing a universal system known as the person-centered approach. Under the person-centered approach, the counselor’s role remains the same, to assist and guide clients to reach their goals and objectives; however, the focus shifts away from having the proper skill set or technique and being knowledgeable and informative, towards genuineness, empathy and unconditional positive regard (Raskin, Rogers, & Witty, 2008).

Even though it may be impossible to classify every client into a specific subgroup, multicultural training has been shown to decrease implicit racial prejudice and increase cultural self-awareness (Castillo, Brossart, Reyes, Conoley, & Phoummarath, 2007). By increasing cultural self-awareness and acknowledging how culture can affect the process of counseling, counselors can develop an empathic understanding of their clients. Moreover, by reducing racial prejudice, counselors refrain from judging clients based on their own values or cultural beliefs, and help clients reach goals and objectives without imposing their personal cultural values. In contrast, a typical counselor without multicultural competence is seen as less empathic and as lacking culture-specific knowledge, and may even be seen as holding racial stereotypes or biases (Chang & Berk, 2009). The resulting mistrust towards the counselor undermines the counselor’s credibility, which can affect the counseling relationship.

In conclusion, on the strength of numerous studies’ findings, I believe that multicultural competence can improve the effectiveness of counseling. As society becomes more interconnected through globalization, multiculturally competent counseling becomes increasingly important for addressing the issues that may surface from an array of cultural backgrounds. Although it may be difficult to identify which cultural group a particular client belongs to in order to practice a specific intervention, multicultural competence has been shown generally to increase the effectiveness of counseling by decreasing implicit prejudice and increasing counselors’ awareness of their own and their clients’ values. Furthermore, focusing only on the knowledge and skill set of the counselor encourages the counselor to be problem-focused rather than emotion-focused, which might cause clients to feel that the solutions provided are not tailored to their specific life context. Therefore, despite arguments to the contrary, counselors who are multiculturally competent will still yield benefits in today’s society.

Young people's perceptions of smoking

The World Health Organization (WHO 2014) recognises that engaging in risk behaviours puts a person at greater risk of mortality and morbidity. A risk behaviour has been defined as something that intentionally or unintentionally puts a person at greater risk of injury or disease. This essay will look at the risk of smoking in young people, including the health implications, epidemiology and prevalence. An age range of 12-21 year olds will be used when identifying literature. There will be a primary focus on policies and guidance for health improvement in Scotland; in addition, legislation and reports from the whole of the UK will support the discussion of health improvement in young people. The essay will aim to analyse literature to try to determine the reasons why young people smoke, and will also consider the rise of social media and electronic cigarettes. Furthermore, it will explore the context of care within schools and the community, and discuss health inequalities. Additionally, this essay will identify and critique a recent health improvement campaign video aimed at young people. The content and design of the video will be discussed in detail, to analyse its appropriateness for the target age group. Throughout the critique, reference will be made to underpinning models of behaviour change and health improvement within Scotland and the UK.

Health improvement is at the forefront of Scotland’s current policies and aims. The mission is to build a healthier Scotland, focus on inequalities and develop actions that will improve the overall population’s health (NHS Scotland 2014a). Policies aim to support everyone in Scotland to live healthier and longer lives, supported by quality healthcare. The government has set out national approaches to target underlying causes of poor health, such as smoking (The Scottish Government 2010). The Scottish Government recognises that smoking is still one of the leading causes of preventable death. It aims to make Scotland a smoke-free generation by targeting health promotion towards young people, in an effort to reduce poor health in later life (The Scottish Government 2008).

In Scotland, a quarter of all deaths are smoking-related, with 56,000 people being admitted to hospital each year for smoking-related illnesses (ASH 2014). This figure continues to put substantial strain on the national health service (ScotPHO 2012). Smoking increases the risk of cancers, heart attacks and strokes; it also worsens and prolongs conditions such as asthma and respiratory diseases (NHS 2013). Early exposure to the harmful toxins in tobacco brings a greater risk of related cancers. Young smokers are also prone to short- and long-term respiratory conditions such as wheezing, coughing and phlegm. Girls in particular who start smoking at a younger age are 79% more likely to develop bronchitis or emphysema in later life, compared with those who began smoking in adult life (Home Office 2002). The total annual cost of treating smoking-related illnesses in Scotland is estimated at around £409 million. Consequently, one of NHS Scotland’s HEAT targets for 2013/2014 was to deliver at least 80,000 successful quits before March 2014 (Gov 2014 Scotland HEAT Targets).

A young person is classified by the World Health Organisation (WHO 2015) and the NHS as someone aged between 10 and 24 years old (NHS Health Scotland 2014c). Around 15,000 young people in Scotland start smoking each year (NHS Health Scotland 2014b). Although this figure is high, the proportion of young people who have ever smoked has dropped dramatically, by half, in the last decade: from 42% of young people in 2003 to 22% in 2013 (ASH 2014). Evidence has shown that the younger a person begins to smoke, the more likely they are to continue during adulthood, putting them at increased risk of morbidity and mortality in later life (RCP 2010). It has been argued that risk behaviours can set life patterns and, similarly, have long-lasting negative effects on a person’s future health and wellbeing (WHO 2015). Most smokers begin smoking before the age of 18, which is why health improvement in young people is of high importance in the UK (The Information Centre 2006). UK law has changed considerably within the last decade in an effort to reduce smoking. In 2007, the legal age at which a person could purchase tobacco products was increased from 16 to 18 years old (The Secretary of State for Health 2007). One year prior, the UK parliament introduced the smoking ban, which prohibited smoking in any public premises (The Secretary of State for Health 2006). The main focus of this legislation was the protection and health of young people. Early intervention is said to be one of the key areas in reducing mortality and morbidity in young people (Department of Health 2013).

More recently, the government has highlighted new concerns about the rise in popularity of electronic cigarettes. There are fears that electronic cigarettes could normalise smoking, thus backtracking on the efforts of the past decade to de-normalise it (Britton and Bogdanovica 2014). There is a real debate about whether electronic cigarettes appeal to young people. Electronic cigarettes come in a variety of exciting flavours, such as bubblegum and banana, and are marketed in colourful, fun packaging that may appeal to young people (Public Health England 2015). Statistically, however, it has been shown that young people’s use of electronic cigarettes is primarily confined to those who are already experimenting with regular cigarettes (Office for National Statistics 2012). Electronic cigarette use is found to be rare amongst young people who have never smoked before (ASH 2014).

Although the statistics for the UK and Scotland show that smoking in young people is in decline, it is still clear that a sizable minority of young people continue to start smoking (ASH 2014). In order to campaign for a smoke-free nation, it is important to understand the reasons why young people smoke. It has been noted that young people are susceptible to whatever is attractive and risky. As with following fashion, media and the internet, young people want to be in with the crowd. Where you live plays a big role, alongside whether your parents or friends smoke. In addition, positive tobacco advertising paves the way for young people to see smoking as exciting and relatively normal (BMA Board of Science 2008). A recent report (Amos et al 2009) summarized the key reasons young people smoke: individual beliefs and self-image, social factors such as parents or friends smoking, community factors, and ease of access to tobacco. Gough et al (2009) conducted a focus group study, inviting 87 males and females aged between 16 and 24 years to talk about reasons for smoking. Although a relatively small study, it found that young people understood smoking to be a rational decision. The young people had a very clear awareness of health issues, yet the majority did not regard smoking at a young age and health as something to be worried about until they were ‘older’ (Gough et al 2009). A larger study in Romania found strong peer influence, alongside lower self-efficacy, to be the primary reasons for smoking in 13 to 14 year olds (Lotrean 2012). The age range of 13 to 14 year olds is not sufficient to support a valid argument about young people as a whole. The latter two studies also did not delve far into the connection between youth smoking and social deprivation. There continues to be a strong association between smoking and health inequalities: in Scotland it was found that smoking in the most deprived areas reached 36%, compared with only 10% in the least deprived areas (ASH Scotland 2014). Health inequality is at the heart of public health improvement. The overall health of the public seems to be improving, yet the inequalities of health have worsened and the gap has increased (Health Development Agency 2005). Other levels of influence noted were price, marketing and promotion, self-esteem, and values and beliefs (Edwards 2010). A person’s values and beliefs can also play a role in health behaviours.

When looking at health improvement in young people, it is important that everyone working in national and local government, healthcare, social care, and the school and education system contributes (Department of Health 2012). It has been recognised that school plays a vital role in the education and promotion of young people’s health, building knowledge of personal wellbeing. School nurses play an important role in health promotion and health education, and can be incredibly valuable members of staff for early intervention. It has been suggested that school nurses may have a lifelong impact on a young person’s health in adulthood through early intervention (RCN 2012). Current guidelines dictate that every school must have a no-smoking policy. These policies should be widely available and visible all over the school so that young people are aware of them. Schools and school nurses should also support smoking cessation information in partnership with NHS services, and offer help, information and health education to young people on smoking (NICE 2010). However, a systematic review of school interventions to stop young people smoking found no significant effect of school-based interventions to discourage smoking. There was, however, positive data for interventions which taught young people how to be socially competent and to resist social influences. The strength of this study is the size of the systematic review, which included 134 studies and 428,293 participants; two authors independently reviewed the data in order to compare and contrast the evidence. On the other hand, a low level of bias may have been introduced by the high variability of the outcome measures used. The review covered the age range of five to 18, which only partly addresses the focal age range of youths aged 12-21 (Thomas et al 2013).

Another systematic review and meta-analysis found strong associations between parental and sibling smoking and young people’s uptake of smoking themselves. The analyses confirmed that when young people are exposed to smoking in the household, the chances of them starting to smoke themselves are significantly increased (Leonardi-Bee et al 2011). It can therefore be argued that education on health promotion should also start at home and in communities. The earlier the intervention, the more effective it is in preventing health-damaging behaviours. Action needs to be taken at the social, environmental and economic levels, as well as through legislation (NICE 2007).

Current UK guidelines advise using a range of strategies to change young people’s perceptions of smoking and promote health improvement. Resources include posters, leaflets, campaigns and new opportunities arising from social media, all in an effort to alert young people to the dangers of smoking (NICE 2010). In November 2014, Cancer Research UK launched a UK-wide campaign via YouTube. The video urges young people to use social media to protest against the tobacco industry (Cancer Research UK 2014). The YouTube video features UK Olympic gold medallist Nicola Adams and music star Wretch 32. The video tells the tobacco industry that young people are no longer puppets on a string and will not be contributing to its profits, which exceed those of Coca-Cola, McDonald’s and Microsoft combined. It invites young people to take a ‘selfie’ giving two fingers up to the tobacco industry and to post it via Twitter and Facebook. The campaign is also supported by the UK tobacco control agency ASH. As of March 2015, the YouTube video had received almost a quarter of a million views in just 4 months.

We are currently in a new digital age where social media and technology are part of daily life for young people. If the government is serious about reaching out to young people, it needs to step into their new world of social media and technology and fully embrace it (Nicholson 2014). The YouTube video by Cancer Research aims to get to the very heart of young people by doing just that. This resource is accessible and approachable for young people, as users can view it in privacy, on their mobile phones or with friends. The language in the video is focused on connecting with young people: it uses words such as “selfie”, “Coca-Cola”, “McDonald’s” and “hashtag”, modern words and brands that most young people will recognise. The video is also empowering and revolutionary, with inspiring words such as “connected”, “informed” and “talk back”, creating a positive message that this generation is smarter and makes better choices. “Be a part of it” is a phrase near the end, which creates a feeling of wanting to be part of something, part of a group. Recent social media statistics for the UK (Social Media Today 2014) show that Facebook now has 31.5 million users and Twitter 15 million. Social media can provide health promotion opportunities for patients and be used as a communication tool for nurses. Social media can be incredibly powerful; however, as professional nurses we must also adhere to professional boundaries (Farrelly 2014), such as the NMC guidance on misuse of the internet (NMC 2009).

In order to change risk behaviours, it has been noted that key elements for success include using resources that are targeted and tailored to a specific age and gender. Similarly, alternatives to risk behaviours should be offered, rather than simply telling an individual what to do (Health Development Agency 2004). The video is encouraging to young people, as its content has connotations of choice.

The YouTube video is unlike current leaflets and posters promoting anti-smoking messages to young people. Instead of listing shock tactics and diseases associated with smoking, it focuses on making smoking sound ridiculous from a financial point of view, by expressing how much money the tobacco industry makes. The characters in the video are relatable, with varying genders and accents, which broadens the appeal of the campaign instead of focusing on just one target group. The video also asks viewers to upload a picture of themselves to social media, giving two fingers up to the tobacco industry and using the hashtag #smokethis. This appeals to young people, who nowadays use social media as a way of spreading messages and connecting with others. A leaflet or poster may not have the same effect, as there is no social interaction; a Twitter or Facebook post, however, can be shared and viewed worldwide. This enables young people to feel that they have a voice and a sense of empowerment. Social media is powerful: although the video is about promoting anti-smoking, it is also open to being shared worldwide in negative forms. The hashtag #smokethis has recently been used by young people uploading photos of themselves smoking both cigarettes and cannabis, in rebellion.

The World Health Organisation identified key underlying principles of health promotion in its Health for All and Health 21 movements (WHO 1999): equity, empowerment, participation, co-operation and primary care. The #smokethis campaign encompasses both empowerment and participation: the video encourages participation through social media, and empowerment by standing up for something. A revised health improvement model by Tannahill (2005) suggests that some of the biggest factors in health promotion are social and economic, and the video addresses both by showing how much money is wasted on the tobacco industry. It is also relevant to his earlier model of health improvement, in which he notes the importance of health education and prevention. This video is very much preventative, in that it is trying to prevent the uptake of young smokers.

Cancer Research has been clever in taking a new social media approach. A 2010 study (Jepson et al 2010) found that although some evidence suggests media interventions may be effective in reducing the uptake of smoking in young people, the overall evidence was not strong. It would be interesting to see the findings of a more recent study, given the rise of social media in the last 5 years.

The main theme of the video is the attempt to recruit 100,000 young people who will not start smoking this year. It is estimated that across the whole of the UK, 207,000 young people start smoking each year (Hopkinson 2013). Given this figure, halving the number of young people who start smoking each year would have a dramatic impact on clinical practice. Not only would it decrease the number of GP attendances for symptoms such as coughing and wheezing, it would also have an incredible effect on later-life hospital admissions for heart disease, strokes and cancers.

The one negative of this video is the possibility for young people to share pictures of themselves smoking online as a rebellious stance, which may influence the views of other young people. It also does not address the issue of health inequalities and community factors, which remain in the background as reasons for smoking. It has been well documented since the 1980 Black Report that those in a lower social class have a higher risk of illness and premature death than those in a higher class; rates of substance abuse are also higher (Department of Health 1980). As well as online health promotion and UK-wide campaigns, there still need to be community, social, school and family interventions to reach those who are most deprived. An example is a study by Bond et al (2011), which found that residents in disadvantaged areas of Glasgow had higher rates of smoking and a lower likelihood of quitting. The study found that areas with better housing had better quit rates, suggesting that environment plays a key part in health.

As a whole, the Cancer Research video is inventive, modern and appropriate for the target age range. It is easily accessible, creates discussion and offers the opportunity to be involved in something. Mass media campaigns can promote health improvement; however, approaches such as family, community and school interventions are still needed to address health inequalities and the social circumstances affecting behaviour. The statistics show a steady decline in young people smoking, which is encouraging. The UK is currently in the process of introducing plain packaging on all tobacco products, in a further effort to discourage people from smoking (Barber and Conway 2015).

Statistically, the UK and Scotland show that each year fewer young people begin smoking. Despite the efforts of the government, legislation and regulation may not always discourage young people from smoking. As the UK prepares further legislation to introduce plain tobacco packaging, it is evident that it is becoming increasingly difficult for young people to access tobacco. It is indisputable, however, that social factors, peer pressure and health inequalities continue to be underlying causes of risk behaviours. There is also some contrasting literature on whether health promotion in schools can discourage young people from smoking; despite this, best practice would suggest that early intervention is better than no intervention. Social media is on the rise and is quickly becoming a daily habit for young people, who use it to connect, talk, share and interact. It is constantly changing and requires healthcare professionals to stay up to date. More recent studies would be beneficial to determine the effect of health improvement interventions via YouTube and Facebook; these could potentially become some of the biggest communication tools for nurses in future practice when looking to get to the heart of young people.

A Report on E-Commerce Industry

A Study on Indian E-Commerce Industry

Executive Summary

The following report examines the Indian e-commerce industry. All data have been collected from the internet, research papers and surveys. E-commerce is one of the biggest forces to have taken Indian business by storm: it is creating an entirely new economy, one with huge potential, and is fundamentally changing the way business is done. It has advantages for both buyers and sellers, and this win-win situation is at the core of its phenomenal rise. Rising incomes and a greater variety of goods and services that can be bought over the internet are making buying online more attractive and convenient for consumers all over the country.

The report gives information on different aspects of the Indian e-commerce industry. For simplicity, it is divided into the following chapters.

Chapter 1 – Introduces the Indian e-commerce industry and its importance to the economy. The objectives of the study are set out in this chapter.

Chapter 2 – Covers the global and Indian scenarios and compares U.S. and Indian e-commerce.

Chapter 3 – Provides a brief insight into the structure of the industry using the HHI index (a short illustrative HHI computation follows this outline).

Chapter 4 – Presents the macro-environmental analysis of the e-commerce industry, including a PESTEL analysis.

Chapter 5 – Outlines the competitive analysis using Porter's Five Forces.

Chapter 6 – Analyses the performance of major players in the e-commerce industry using four key metrics, and compares internet user traffic across the top 10 e-commerce sites in India.
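For reference, the Herfindahl-Hirschman Index (HHI) used in Chapter 3 is the sum of the squared market shares of the firms in an industry, so higher values indicate heavier concentration. The short Python sketch below is purely illustrative; the example market shares are invented, not taken from the industry data in this report.

# Illustrative computation of the Herfindahl-Hirschman Index (HHI):
# the sum of squared market shares, expressed here in percent.

def hhi(shares_percent):
    # shares_percent: market shares of all firms, in %, summing to about 100
    return sum(s ** 2 for s in shares_percent)

example_shares = [40, 30, 20, 10]   # invented shares of four firms, in %
print(hhi(example_shares))          # 3000, indicating a concentrated market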


CHAPTER 1

INTRODUCTION


1.1 Introduction

In the past few years, the growth of the internet has changed the world in terms of consumer expectations and consumer behaviour. Online shopping websites concentrate on consumers’ drastically changing behaviour and shopping patterns. Companies are nowadays willing to change their marketing strategies, having understood that traditional selling practices will not work in a changing technological world.

‘Buying and selling products online’ was a new chapter in the internet world, introduced in 1991. Ebay.com played a significant role in creating a revolution in online e-commerce. Until then, nobody thought that purchasing all kinds of products online would become a worldwide trend, or that India would share in that success. In the late 1990s, RediffShopping.com and Ebay.com gave Indians their first-hand experience of e-retail.

Initially, when it came to the weekly shopping for groceries, clothes, shoes and cosmetics, the majority of people preferred to get into their cars and drive to the supermarket for groceries, or to shops and malls for other basic products. Now it is common news that people in India even buy their groceries online.

1.2 Business Models of the E-commerce Industry


Deals Websites

1) Flash Sales Sites: A flash sales e-commerce website is a B2C business model in which the website sells products directly to end customers. Such sites normally manage the entire e-commerce lifecycle on their own or through partners, and the consumer pays the business owning and managing the website directly. Consumers take advantage of huge discounts, at times ranging from 50% to 90%, and prefer to buy products they have always aspired to own. They are normally unaware, or do not really care, if the products they are buying from such websites are obsolete or no longer in fashion.

2) Daily Deals Sites: A daily deals website also operates as a B2C website and typically showcases a very lucrative sale on a single product or a set of products. Unlike on a flash sales website, such a deal is time-bound (usually for a day), which compels the user to make an immediate purchase decision.

3) Group Buying Sites: A group buying website is a unique B2C business model in which the website invites buyers to purchase products or services at a discounted or wholesale price. Like on the daily deals sites, the products advertised on group buying sites are time-bound, but usually not limited to a day.

Online Subscriptions

An online subscription website works in a manner similar to an offline subscription for any kind of service. Such websites showcase an entire catalogue of subscription options for users to choose from and subscribe to online.

E-Retailing

A number of B2C e-commerce websites offer a range of products and services to customers across different brands and categories. Such websites buy products from the brands or their distributors and sell them to end customers at market-competitive prices. Though their modus operandi is the same as a flash sales website’s, their business objective is to offer the latest products to end customers online at the best possible prices.


Marketplace

Another business model gaining traction in India recently is the online marketplace model, which enables buyers to get in touch with sellers and make a transaction. In this business model, the website owners do not buy products from the sellers but act as mediators facilitating the entire e-commerce transaction. They assist the sellers with various services such as payment collection and logistics, but prefer not to hold inventory in their own warehouses.

1) C2C Marketplace

A C2C (consumer-to-consumer) marketplace is an online marketplace where individual consumers can sell products to individual buyers. As a seller, even if you do not run a business, you are free to sell your products through such a marketplace to end customers.

2) B2C Marketplace

A B2C (business-to-consumer) marketplace is an online marketplace where only business owners can sell their products to the end customer. The process is more or less the same as in a C2C marketplace, with the exception that it does not allow individual users to sell their products online. The best example of this kind of website is SnapDeal.com, which has now become a B2C marketplace.

3) B2B Marketplace

In a business-to-business e-commerce environment, companies sell their goods online to other companies without engaging in sales to consumers. A B2B web shop usually contains customer-specific pricing, customer-specific assortments and customer-specific discounts. Indiamart.com and the TATA group’s tatab2b.com are popular sites in India.


Exclusive Brand Stores

This is the latest business model of its kind to be started in India. In this model, various brands set up their own exclusive online brand stores to enable consumers to buy directly from the brand. A few examples of brands operating through an exclusive online brand store are Lenovo, Canon, Timex, Sennheiser, HP, Samsung and Mobilestore.com.

1.3 Role of Internet in E-Commerce

Large and small companies alike use the internet for promotional activities. Some of the advantages of e-commerce are:

1. Availability: The internet is widespread these days and easily accessible, which has increased the online purchase of goods and commodities.

2. Open to All: The internet allows everyone to access and transact from any global location. Moreover, it offers customers an extended choice of products that is not possible at local retail stores.


3. Global Presence: With a global presence, it is very easy to buy from any location in the world with just a laptop and an internet connection. This has encouraged e-commerce, as any commodity is just a step away.

4. Professional Transactions: The internet allows professional transactions, including decision making, with just one click.

5. Low Cost, More Earnings: E-commerce can be run with higher earnings and lower expenses. The simplest approach is to replace salespeople with a well-designed, informative web page that helps customers have the items they like delivered to their doorsteps with the click of a mouse.

1.4 Changes due to online shopping in India

According to a report by BCG (the Boston Consulting Group), the Indian internet economy will reach INR 10.8 trillion by 2016; India ranks 8th in the global chart. In terms of exports through the internet, India ranks top in services and China occupies the top position in goods. As of June 2014, India has a user base of about 250.2 million. It is estimated that by 2024-26 the e-commerce market in India will be worth $260 billion.

E-commerce is growing in India at an exponential rate. According to a recent online retailing report by Forrester, retailers experienced year-on-year growth of twenty-eight percent in 2012. As per the study, digital consumers spend almost $1.46 billion at cyber cafes, which indicates that the number of online users will keep increasing year after year. Consumer behaviour is one of the biggest reasons for the e-commerce boom in India.

Major reasons for online shopping growth in India

– There is an increase in broadband connections and 3G penetration in India.

– A much wider range of products is available than in brick-and-mortar retail.

– Living standards are rising for the middle class, and disposable incomes are also getting higher.

– Online products are usually available at a discount, priced lower than the products available in normal shops.

– Lack of time and busy lifestyles leave little room for offline shopping.

– The online marketplace model has evolved, with websites like eBay, Flipkart, Snapdeal, etc.

– Purchasing takes less time, as there is no need to stand in queues as is usual in offline shopping.

1.5 Advantages of E-Commerce

– With e-commerce as an alternative, shopping is no longer constrained by time, distance or place, as customers can shop at any time and from anywhere they prefer.

– E-commerce provides certified branded products, so that even typical Indian customers are ready to buy products such as clothes and shoes without touching them or trying them on for fit.

– Extra expenditure that previously went on labour and similar costs is avoided, and substantial funds are saved.

– Shopping through the web is the most feasible option for metro city residents.

– Transaction time has reduced.

– Alternative products from different brands are offered to customers if the current product from a particular brand is unavailable.

– One-day delivery along with door-to-door delivery has boomed in the market with the evolution of e-commerce.

1.6 Disadvantages of E-Commerce

– Unprofessionalism has increased, as any company can set up a business portal without earning trust.

– Customer interaction is minimal, so product quality and customer satisfaction remain matters of concern.

– Hackers and crackers are constantly searching for a chance to steal the personal details customers share when making online payments.

– E-commerce can be discouraging for the purchase of precious items such as jewellery, which buyers can only glance at rather than wear or check for quality.

– The e-commerce space is in disarray, as many online sites do not deliver the services promised at the time the order is placed.

– The authenticity of products is hard to trust.

1.7 OBJECTIVES OF THE STUDY

– To study the global and Indian scenario with respect to the e-commerce industry

– To study the structure of the Indian e-commerce industry

– To study the macro-environmental factors affecting the Indian e-commerce industry

– To analyze the industry using Porter's Five Forces

– To study the performance of major players in the industry

– To compare the U.S. and Indian e-commerce industries

– To study the future opportunities and prospective growth of the industry

CHAPTER 2

GLOBAL AND INDIAN SCENARIO

2.1 Status of the global e-commerce industry

The middle class in many developing countries, including India, is rapidly embracing online shopping. However, India falls behind not only the US, China and Australia in terms of internet density, but also countries like Sri Lanka and Pakistan. Sri Lanka has an internet penetration of 15 percent; better internet connectivity and the presence of an internet-savvy customer segment have led to growth of e-commerce there, with an existing market size of USD 2 billion. Pakistan, also with an internet penetration of 15 percent, has an existing consumer e-commerce market of USD 4 billion. Incidentally, FDI in inventory-based consumer e-commerce is allowed in both these countries (IAMAI-KPMG report, September 2013).

A.T. Kearney’s 2012 E-Commerce Index examined the top 30 countries in the 2012 Global Retail Development Index (GRDI). Using 18 infrastructure, regulatory, and retail-specific variables, the Index ranks the top 10 countries by their e-commerce potential.

Following are some of the major findings of the Index:

i) China occupies first place in the Index. The G8 countries (Japan, United States, United Kingdom, Germany, France, Canada, Russia, and Italy) all fall within the top 15.


ii) Developing countries feature prominently in the Index. Developing countries hold 10 of

the 30 spots, including first-placed China. These markets have been able to shortcut the

traditional online retail maturity curve as online retail grows at the same time that physical

retail becomes more organized. Consumers in these markets are fast adopting behaviors

similar to those in more developed countries.

iii) Several “small gems” are making an impact. The rankings include 10 countries with

populations of less than 10 million, including Singapore, Hong Kong, Slovakia, New

Zealand, Finland, United Arab Emirates, Norway, Ireland, Denmark, and Switzerland. These

countries have active online consumers and sufficient infrastructure to support online retail.

iv) India is not ranked. India, the world’s second most populous country at 1.2 billion, does

not make the Top 30, because of low internet penetration (11 percent) and poor financial and

logistical infrastructure compared to other countries.

Countries at the top of the e-commerce table are seen to have the required technologies, coupled with higher internet density, high-class infrastructure and a suitable regulatory framework. India needs to work on these areas to realize the true potential of e-commerce business in the country.

2.2 Comparison of E-Commerce in US and India

The Indian e-commerce market is expected to reach as high as USD 6 billion by the year 2015. Compared with the statistics from the year 2014, this implies substantial growth of 70%. It is evident that India is slowly becoming like the US in the area of e-commerce.
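
As a quick sanity check on these figures, dividing the 2015 projection by the growth factor gives the implied 2014 base. The following is a minimal sketch in Python; the derivation is ours, and only the USD 6 billion and 70% figures come from the text:

    # Implied 2014 market size from the figures quoted above:
    # USD 6 billion projected for 2015, representing 70% growth over 2014.
    market_2015 = 6.0           # USD billion (projection)
    growth_rate = 0.70          # 70% year-on-year growth
    market_2014 = market_2015 / (1 + growth_rate)
    print(f"Implied 2014 market size: USD {market_2014:.2f} billion")  # ~3.53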

It is stated that India has started doing well in the e-commerce market because of the growing number of people who have access to the mobile Internet. Broadband usage has increased roughly threefold over the last two years (2013 and 2014).


2.2.1 Cash on Delivery (COD) system in India

Cash on Delivery is a distinctly Indian phenomenon. US consumers mainly transact online using credit cards or PayPal. Indians are known to be price sensitive. Even though e-retailers are trying to lure them to buy online with heavy discounting, consumers are still wary of making prepaid orders.

The situation is different in the US. Even though the discounting games continue, users generally go with prepayment. This is also because the penetration of credit cards and electronic payments is far more evolved in that market. In India, despite an estimated population of 1.252 billion, there were only 18.8 million credit cards in the country as of last year, along with approximately 331 million debit cards. The popularity of CoD is directly dependent on the trust issues consumers have with online retailers.

2.2.2 Offers and excitement created among customers

The trend of mega online shopping festivals started in the West, which took offline traditions such as the Macy's Day Parade and Black Friday online and created new trends like Cyber Monday.

India is still naive: even three editions of the Great Online Shopping Festival have not been able to make the impact expected of them. The festive season of Diwali is a time when Indians indulge in spending, and while Indian retailers were busy creating their respective versions of 'mega sales', US-based Amazon was the first to seize the opportunity with its 'Diwali Dhamaka Sale'. Consumer shopping behaviour also differs between the two countries.

Indian online retail does generate offers and excitement among consumers, but not on the scale seen in the US.


2.2.3 Type of Product that People Buy Online

Nowadays, buying real estate and even automobiles online is becoming common, even in India. In general, though, apparel and electronics remain the most popular categories bought online, in India and the US alike; these two categories dominate both markets.

An area where India is lacking is online grocery retail. Grocery retail and logistics are highly evolved in the US. In India the segment was long treated as untouchable, even though the sentiment was strong. Players are now emerging in this segment, with start-ups like BigBasket and LocalBanya, and mature players like Reliance Fresh, taking the lead.

The local Indian 'banya' is still stronger than online stores. Unorganised retail in India is an area where e-commerce could make a difference and eventually take over.

2.2.4 Logistics and Regulations

The logistics infrastructure is still not up to the mark in India. While India Post is doing a great job of helping the e-commerce players, no dedicated e-commerce company has been able to scale up its operations to reach all postal codes. Internet penetration in India, at 18% as against 87% in the US, is another big hindrance.

When it comes to technology, site crashes and ERP system issues are commonly heard of in India, whereas in the US the technology is very robust.

Government bodies in India have not yet matured in dealing with the online businesses that are operational. There were no clear guidelines for FDI until recently, and even there the Government has decided to treat online and offline retail alike. Some online marketplaces keep running into warehousing policy issues. Heavy discounting is another questionable area: while the Government has clear guidelines for Maximum Retail Price, no lower bar has been set.


CHAPTER 3

STRUCTURE OF THE

INDUSTRY


3.1 HERFINDAHL – HIRSCHMAN INDEX

The Herfindahl-Hirschman Index (HHI) is a commonly accepted measure of market concentration. It is calculated by squaring the market share of each firm competing in a market and then summing the resulting numbers. The HHI can range from close to zero to 10,000.

The HHI is expressed as HHI = s1^2 + s2^2 + s3^2 + ... + sn^2, where si is the market share (in percent) of the i-th firm. The closer a market is to being a monopoly, the higher the market's concentration (and the lower its competition).

For the e-commerce industry, the HHI based on market share is calculated as follows:

Company    Sales (Rs. cr)   Market share (%)   HHI contribution
Flipkart   2846.13          53.90              2904.87
Jabong     202.00           3.83               14.63
Myntra     441.58           8.36               69.93
Snapdeal   830.00           15.72              247.04
Amazon     168.99           3.20               10.24
eBay       107.00           2.03               4.11
Naaptol    460.00           8.71               75.88
Yebhi      120.00           2.27               5.16
Yepme      80.00            1.51               2.30
Bewakoof   25.00            0.47               0.22
Total      5280.70          100.00             3334.38

An HHI above 1,800 indicates a highly concentrated market.

Source: capitaline.com

The HHI can have a theoretical value ranging from close to zero to 10,000. If there exists

only a single market participant with 100% of the market share, the HHI would be

10,000. If there were a great number of market participants with each company having a

market share of almost 0% then the HHI could be close to zero.

• When the HHI value is less than 100, the market is highly competitive.
• When the HHI value is between 100 and 1,000, the market is said to be not concentrated.
• When the HHI value is between 1,000 and 1,800, the market is said to be moderately concentrated.
• When the HHI value is above 1,800, the market is said to be highly concentrated.

So the Herfindahl-Hirschman Index of the e-commerce industry is greater than 1,800; hence the industry is highly concentrated.
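
The index computation above is easy to reproduce. The following is a minimal sketch in Python, using the sales figures from the table; the function name hhi is our own choice, not from any source:

    # Herfindahl-Hirschman Index from the company sales figures above (Rs. crore).
    sales = {
        "Flipkart": 2846.13, "Jabong": 202.0, "Myntra": 441.58,
        "Snapdeal": 830.0, "Amazon": 168.99, "eBay": 107.0,
        "Naaptol": 460.0, "Yebhi": 120.0, "Yepme": 80.0, "Bewakoof": 25.0,
    }

    def hhi(sales_by_firm):
        # Sum of squared market shares, with shares expressed in percent (0-100).
        total = sum(sales_by_firm.values())
        return sum((s / total * 100) ** 2 for s in sales_by_firm.values())

    index = hhi(sales)
    print(f"HHI = {index:.2f}")            # ~3334.38
    if index > 1800:
        print("Highly concentrated")       # this branch is taken here
    elif index > 1000:
        print("Moderately concentrated")
    elif index > 100:
        print("Not concentrated")
    else:
        print("Highly competitive")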


CHAPTER 4

COMPETITIVE ANALYSIS


4.1 PORTER’S FIVE FORCES ANALYSIS



4.1.1 Bargaining Power of Buyers

The huge competition in the e-commerce market works in buyers' favour, as companies have to keep their prices in check to attract buyers. Customers can choose from a wide range of offline as well as online players, and can always buy from another website or store if they are not satisfied with any one player.

Buyers in this industry are the customers who purchase products online. Since the industry is flooded with so many players, buyers have plenty of options to choose from. Switching costs are also low, since customers can easily switch from one online retail company to another, and the same products are displayed on several online retail websites, so product differentiation is low. All these factors give customers more power relative to the online retail companies.

E-commerce aggregator sites also make it transparent to the consumer which site offers a product at the lowest price.

4.1.2 Bargaining Power of Suppliers

Tens of millions of sellers list their products on e-commerce marketplaces; hence their individual bargaining power is limited. However, sellers can also list their products on multiple platforms and sites, including Amazon, Etsy.com and various international e-commerce sites. Hence, if eBay introduces policy and pricing changes that are unsatisfactory to sellers, this could result in a lower number of product listings on its marketplace.

There are relatively few postal and delivery services and shipping carriers; hence any pricing change or disruption in their services could hamper eBay's ability to deliver products on time. These carriers therefore hold some bargaining power.


The sources that generate traffic on an e-commerce site can also be classified as suppliers. Search engines hold significant leverage, as they account for over 20% of traffic on eBay counting both organic and paid search (according to SimilarWeb estimates). Changes in Google's SEO (search engine algorithm) have a negative impact on traffic, and eBay's seller marketplace model leads to large amounts of unstructured data on the site, which is detrimental to its SEO efforts. Additionally, several referring sites such as slickdeals.net and dealnews.com, as well as social networks, bring considerable traffic to e-commerce, and any changes in their policies could adversely affect a company's top line and profitability.

In this industry, suppliers are the manufacturers of finished products, like Nike, Dell and Apple. Online retail companies sell products ranging from books to computer accessories, apparel and footwear. Since there are many suppliers in any particular category, they cannot exert power over online retail companies. In the computers category, for example, many suppliers such as Dell, Apple, Lenovo and Toshiba want to sell their products through these online retail companies, so no one of them can control the retailers. Online customers can select products on their own, and switching costs are effectively zero. It is very difficult for manufacturers of finished products to enter this industry themselves because of the challenges in logistics. The online retail industry is important to suppliers because it acts as one of their sales channels, and with most Indian customers now purchasing through online retail companies, suppliers cannot afford to lose this channel. They therefore cannot dictate terms to online retail companies. In this industry, supplier power is low.

4.1.3 Competitive Rivalry within the Industry

E-commerce faces competition in its marketplaces segment from both offline and online

players. Customers can buy products from a wide range of retailers, distributors,

auctioneers, directories, search engines, etc., and hence the competition is intense.


Various factors such as price, product selection and services influence the purchasing

decision of customers. E-commerce companies frequently engage in price-based

competition to woo buyers, which limits their ability to raise prices.

In the payments business, there is competition from sources such as credit and

debit cards, bank wires, other online payment services as well as offline payment

mechanisms including cash, check, money order or mobile phones.

Considering the entry of newer players such as Apple Pay and Alibaba, the

competition is expected to heighten in the online payments space.

Competition is very high in this industry, with so many players like Flipkart, Myntra, Jabong, Snapdeal, Amazon, Indiaplaza, Homeshop18, etc.

4.1.4 Threat of New Entrants

Given the nature of the business, there is always a threat of new entrants, as it is relatively inexpensive to enter the market and set up operations: no additional cost is incurred to set up physical stores and locations. In addition, traditional established physical stores can easily move into online retailing and bring their substantial consumer bases with them. Stores such as Target or Wal-Mart already enjoy economies of scale, have recognizable brands and strong supply chains, so they do pose strong competition to Amazon.

That said, the threat from brand-new entrants remains low, as it would be nearly impossible for a new company to match the cost advantages, economies of scale and variety of offerings of Amazon.com. These advantages will deter most brand-new entrants to the market.

Substantial Economies of Scale

An e-commerce player like Amazon works with over 10,000 vendors and boasts an impressive 75 percent share of repeat purchasers. Its market capitalization is substantially ahead of its nearest competitors.


First Mover Advantage

As pioneer online retailers, Amazon and Flipkart have the necessary brand awareness and credibility of a strong, reliable presence in the market.

Massive Product Variety

Way beyond bookstores now, Amazon.com and Flipkart provide virtually any type of product in their online stores. This reflects strong supplier-base relationships that cannot easily be replicated. In addition, as booksellers and providers of other entertainment channels such as movies, videos and music, these e-commerce players have established relationships with publishers, producers, movie studios and music producers which are not easy to form or replicate.

The e-commerce market is characterized by low barriers to entry. It is relatively easy for newer players to enter the market and start selling products. Having said that, it is difficult for newer players to gain brand recognition and attain high rankings on search engines. They also require significant marketing budgets to compete on a large scale, and this restricts entry to an extent.

The online payments market has relatively higher barriers to entry, as there is intense competition between established players; additionally, security is paramount during online payments, and newer players without the necessary brand recognition will find it difficult to attract new customers.

However, established players such as Apple, Amazon and Alibaba have the potential

to make a dent in PayPal’s strong market position.

• The Indian government is going to allow 51% FDI in multi-brand online retail and 100% FDI in single-brand online retail sooner or later, which means foreign companies will be able to come in and start their own online retail operations.
• There are very few barriers to entry: little money and little infrastructure are required to start a business. All you need is to tie up with suppliers of products, develop a website to display


the products so that customers can order them, and tie up with an online payment gateway provider like BillDesk.

• The industry is also going to grow at a rapid rate: it is expected to touch USD 76 billion by 2021, experiencing exponential growth. Obviously, no one wants to miss this big opportunity.

4.1.5 Threat of substitute products

The threat of substitutes for e-commerce is high. The unique characteristic Amazon has is its patented technology (such as 1-Click Ordering), which differentiates it from other possible substitutes. However, there are many alternatives providing the same products and services, which could reduce Amazon's competitive advantage. Therefore, Amazon does not have an absolute competitive advantage in its product offerings, but it definitely has the advantage when it comes to the quality of customer service and convenience provided.

So far there is no technology in the market that can substitute for the Internet; even the analog signals used to transmit television and radio are not a real threat.

The main substitute is the brick-and-mortar store, and these stores are themselves moving onto the Internet. Therefore, the e-commerce industry faces a low threat of substitution. Comparing the relative quality and price of a product bought online with one bought in a physical store, the two are almost the same, and in some cases online discounts are available, which encourages customers to buy online.


[Figure: Porter’s Five Forces model of the e-commerce industry]


CHAPTER 5

MACRO ENVIRONMENTAL

ANALYSIS


PESTLE ANALYSIS

This industry analysis, also known as macro environmental analysis, is done to determine the conditions in which the industry operates. Analysing these factors is very important for the growth and sustenance of a company operating in that industry.

5.1 POLITICAL AND LEGAL FACTORS:

E-commerce has introduced many changes for Indian consumers and customers. However, e-commerce in India has also given rise to many disputes from consumers purchasing products on e-commerce websites. In fact, many e-commerce websites do not follow Indian laws at all, and they are not always fair in dealing with their consumers. Allegations of predatory pricing, tax avoidance, anti-competitive practices, etc. have been levelled against big e-commerce players in India. As a result, disputes are common in India and are not satisfactorily redressed. This reduces confidence in the e-commerce segment, and dissatisfied consumers have little recourse against the big e-commerce players. At a

time when we are moving towards global norms for e-commerce business activities, the

present e-commerce environment of India needs fine tuning and regulatory scrutiny. In fact,

India is exploring the possibility of regulation of e-commerce through either Telecom

Regulatory Authority of India (TRAI) or through different Ministries/Departments of Central

Government in a collective manner.

It is obvious that e-commerce related issues are not easy to manage. E-commerce dispute resolution is even more difficult and challenging, especially when Indian courts are already overburdened with cases. Of course, the establishment of e-courts in India and the use of online dispute resolution (ODR) are very viable and convincing options before the Indian Government.

Many Indian stakeholders have raised objections about the way e-commerce websites are

operating in India. These websites are providing deep discounts that have been labeled as

predatory by offline traders and businesses. Further, Myntra, Flipkart, Amazon, Uber, etc. have already been questioned by the regulatory authorities of India for violating Indian laws.


5.1.1 NEED FOR HARMONIZED TAXATION LAWS:

Laws regulating e-commerce in India are still evolving and lack clarity. A favourable regulatory environment would be key to unleashing the potential of e-commerce, and would help in operational efficiency, job creation, industry growth, and investment in back-end infrastructure. Furthermore, the interpretation of intricate tax norms and complex inter-state taxation rules makes e-commerce operations difficult to manage and to keep compliant with the law. With the wide variety of audiences that e-commerce companies cater to, compliance becomes a serious concern. Companies will need strong anti-corruption programs for sourcing and vendor management, as well as robust compliance frameworks. It is important for e-commerce companies to keep a check at every stage and adhere to the relevant laws so as to avoid fines. Myntra, Flipkart and many more e-commerce websites are under the regulatory scanner of the Enforcement Directorate (ED) of India for violating Indian laws and policies. There are no specific taxation laws for these websites, which is one reason products are sold at huge discounts on these sites. A major cause of the absence of taxation laws for e-commerce is that the government lacks proper knowledge of the structure of the industry, the limits that should be placed on it, and the rights it should be given for selling products.

Security of the information provided during online transactions is another major concern. Under Section 43A of the IT Act, the 'Reasonable practices and procedures and sensitive personal data or information Rules, 2011' have been proposed, which provide a framework for the protection of data in India.


5.2 ECONOMIC FACTORS:

Mass usage of internet:

The usage of the internet is increasing rapidly in India, which is said to be the third largest internet population country after the USA and China. India currently has 205 million internet users, and the total number is expected to reach 330-370 million in just three years. Internet usage in cities has increased especially rapidly.

Increased aspiration levels and availability

Driven by the aspirations of Indian youth and the middle class, the coming year will be even more promising both for consumers and for entrepreneurs, with average annual spending on online purchases projected to increase by 67 per cent, from Rs 6,000 to Rs 10,000 per person. In 2014, about 40 million consumers purchased something online, and this number is expected to grow to 65 million by 2015 as better infrastructure in terms of logistics, broadband and Internet-ready devices fuels demand. Smartphone and tablet shoppers will be strong growth drivers: mobile already accounts for 11% of e-commerce sales, and its share is expected to jump to 25% by 2017. Computers and consumer electronics, along with apparel and accessories, account for the bulk of Indian retail e-commerce sales, contributing 42% of the total.


Liberal policies (FDI in Retail and Insurance)

The E-commerce Association of India (ECAI) is looking for a positive response from the government on critical reforms like permitting FDI for the B2C inventory-led model. This has been the industry's demand for a long time, especially as many small and medium-sized e-commerce players face obstacles in accessing capital and technology. The industry has been hoping that the government would at least consider a partial opening of the sector to FDI. In the 2015 budget, FDI was increased to 51%, so there has been a positive response from the international e-commerce industry trying to enter India; even Alibaba.com is attempting to enter India with the help of Snapdeal.com. On the other side, small e-commerce companies may face difficulty when the MNC e-commerce companies enter India.

Supply chain and productivity growth

The most important impact of e-commerce is on the supply chain and the availability of products. The buying and selling of goods continues to undergo changes that will have a profound impact on the way companies manage their supply chains. Simply put, e-commerce has altered the practice, timing, and technology of business-to-business and business-to-consumer commerce. It has affected pricing, product availability, transportation patterns, and consumer behavior in India's developing economy. Business-to-business electronic commerce accounts for the vast majority of total e-commerce sales and plays a leading role in supply chain networks. In 2014, approximately 21 percent of manufacturing sales and 14.6 percent of wholesale sales in India were transacted this way.


From the moment the online order is placed to when it is picked, packed, and shipped, every step in the process must be handled efficiently, consistently, and cost-effectively. In e-commerce, the distribution center provides much of the customer experience. Simply delivering the goods is no longer an adequate mission for the fulfillment center: customer satisfaction has to be a critical priority. The typical e-commerce consumer expects a wide selection of SKU offerings, mobile-site ordering capability, order accuracy, fast and free delivery, and free returns. Understanding how online consumers shop and purchase across channels is critical to the success of online fulfillment. More consumers are browsing the Internet for features and selection, testing products at brick-and-mortar stores, acquiring discounts through social media, and then purchasing the product online through the convenience of their mobile device. Some retailers, including those that also sell through catalogs, have been in the direct-to-consumer marketplace for some time. These companies have fulfillment facilities established and information technologies in place to manage orders with speed and efficiency, doing it well and profitably. But to many distribution executives, online fulfillment poses a significant challenge to their existing knowledge, experience, and resources.

5.3 SOCIAL FACTORS:


Better comfort level and trust in online shopping:

• Consumers find it easy to access e-commerce sites, with 24x7 support. A customer can transact or enquire about any product or service provided by a company anytime, anywhere, from any location. Here 24x7 refers to 24 hours on each of the seven days of a week.
• E-Commerce applications provide users more options and quicker delivery of products.
• E-Commerce applications give users more options to compare and select the cheaper and better one.
• A customer can post review comments about a product, and can see what others are buying or read other customers' reviews before making a final purchase.
• E-Commerce provides the option of virtual auctions.
• Information is readily available: a customer can see relevant detailed information within seconds rather than waiting for days or weeks.

Advantages to society:

• Customers need not travel to shop for a product, meaning less road traffic and lower air pollution.
• E-Commerce helps reduce the cost of products, so less affluent people can also afford them.
• E-Commerce has enabled rural areas to access services and products that are otherwise not available to them.
• E-Commerce helps the government deliver public services like healthcare, education and social services at reduced cost and in an improved way.
• E-Commerce increases competition among organizations, and as a result organizations provide substantial discounts to customers.


5.4 TECHNOLOGICAL FACTORS:

Cloud computing in e-commerce:

According to analysts, within 10 years' time 80% of all computer usage worldwide, including data storage and e-commerce, will be in the cloud. This is called the third phase of the internet. During the first phase, software and operating systems were combined to create a simple flow of communication through, for instance, email. The second phase brought the user to the World Wide Web, with access to millions of websites; this increased internet usage a hundredfold in only two years. In the third phase everything is in the cloud, both data and software.

There are several types of cloud computing, of which Software-as-a-Service (SaaS) is probably the best-known. The others are Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS).

The ability to lower costs, accelerate deployments and respond quickly to market opportunities and challenges is just one of the reasons why so many IT leaders are leveraging cloud-based e-commerce applications. Given the variety of solutions, IT leaders must research their options carefully in order to select the one that best meets their needs. The following are major impacts of cloud computing on e-commerce applications, and steps IT leaders should take during their evaluation process.

It’s easy for business leaders to focus on the benefits of cloud computing without considering

the time and effort involved in implementing a viable solution. However, whatever cloud

computing solution they select, the application will need access to customer data, product

data, fulfilment systems and other operational systems in order to support e-commerce. Cue

the IT team.


Consumerization of the online customer experience requires closer scrutiny of e-commerce:

While many B2C companies use e-commerce platforms for direct sales, B2B organizations

are also leveraging them to add transactional capabilities to their informational sites. In

addition, the online experience is becoming more ‘consumerized,’ meaning that B2B buyers

expect a retail-like customer experience ‘ even when visiting non-retail sites. Cloud solution

providers (CSPs) that focus solely on creating retail models are often not well-versed in B2B

requirements which can be more complex. As a result, their offerings don’t include B2B

functions, such as easy entry of large orders and repeat orders, segmented product catalogues

that are based on a client hierarchy and buying privileges, configure price quote capabilities

and extended payment terms. IT leaders have an unprecedented number of CSPs from which

to choose. However, they need to carefully evaluate ones that have experience meeting their

industry-specific needs, whether it’s B2B, B2C, or a combination of both.

Usage of bandwidth for E-commerce:

The transmission capacity of a communication channel is a major barrier for products that require a lot of graphical and video data, so e-commerce companies need higher bandwidth than usual. Requirements depend on the number of customers visiting the website, the type of products being sold, and the locations from which online users mostly visit the site. Web processing capacity is also a key factor in running an e-commerce operation. Another key factor is cost: the high cost of developing or purchasing new software, licensing software, integrating it into existing systems, and buying costly e-business solutions for optimization.

Benefits of using cloud computing for E-commerce:

• Trust. Cloud computing enables online store owners to use the same platform and the same functionality. That means new features can be made available to everyone with a simple modification. Moreover, maintenance is taken care of centrally, which means store owners can rely on a stable platform.
• Cost saving. In many cases this is the most important reason for companies to choose cloud computing. Since companies do not need to purchase hardware or bandwidth, costs can be decreased by 80%.
• Speed. A company can activate an e-commerce application five times faster and sell directly through a platform that is managed remotely.
• Scalability. Cloud computing makes a company more elastic and able to respond to seasonal changes or sudden increases in demand due to special promotions.
• Security. Many cloud computing suppliers have been certified, so more security can be guaranteed to customers.
• Data exchange. The explosive growth of cloud e-commerce will lead to more data exchange between clouds. Suppliers will offer more and more possibilities for users, partners and others to add features to their clouds.


CHAPTER 6

PERFORMANCE ANALYSIS


6. FIRMS UNDER STUDY

• Flipkart
• Snapdeal
• Amazon

Four key metrics have been used to evaluate the performance of

E-Retailers in India.

1. Gross Margin

2. Subscriber Growth Rate

3. Average Order Size

4. Percentage of Mobile Visits

6.1 Gross Margin (Financial year 2013-2014)

Online shopping in India is growing at a very fast clip. At the same time, there is intense competition in the e-commerce space, especially among the top three players. Pricing is aggressive, and the discounts are effectively being paid out of venture capitalists' pockets.

Flipkart, Amazon and Snapdeal have all raised investments, or have commitments, of USD 1 billion or more. This money is being burned to acquire new customers, offer discounts and promote products on offer. At the same time, these sites are losing money; the quantum of loss these e-commerce players have incurred for every rupee earned is shown in the figure below.


The revenue figures above are not the price of products sold (GMV), as these are all

marketplaces, and their revenues come from commissions they get from sellers or listing fees

that they charge to list the products on their site.

GMV, or Gross Merchandise Value, represents the price of products sold; net revenues are just a fraction of that.

Flipkart leads the race with net revenue of Rs. 179 crore, followed by Amazon at Rs. 168.9 crore and Snapdeal at Rs. 154.11 crore.

However, when it comes to losses, Flipkart leads by a much bigger margin: its loss for 2013-14 stands at Rs. 400 crore. Comparatively, Amazon's losses are pegged at Rs. 321.3 crore, and Snapdeal had the least losses of the three, at Rs. 264.6 crore.

The figure below shows the loss each player incurs for every rupee in net revenues.


Flipkart leads the race, losing Rs. 2.23 for every rupee of revenue; Amazon loses Rs. 1.90, and Snapdeal has the least losses at Rs. 1.72.

This cannot be judged as poor performance by the players, because once they gain a major part of the market, every product they sell will turn a profit.
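
The loss-per-rupee figures follow directly from the revenue and loss numbers quoted above. A minimal sketch in Python (figures from the text; the variable names are ours):

    # Net revenue and losses for FY 2013-14 (Rs. crore), as quoted above.
    players = {
        "Flipkart": (179.0, 400.0),
        "Amazon":   (168.9, 321.3),
        "Snapdeal": (154.11, 264.6),
    }
    for name, (revenue, loss) in players.items():
        # Loss incurred for every rupee of net revenue.
        print(f"{name}: Rs. {loss / revenue:.2f} lost per rupee of revenue")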

6.2 Subscriber Growth Rate

Flipkart was founded in 2007; by the end of 2013 it had acquired 22 million registered users and was handling 5 million shipments every month. Snapdeal was founded in 2010 and had 20 million registered users by the end of 2013. Snapdeal has thus acquired customers at a quicker pace than Flipkart, but this should not be read as poor performance by Flipkart: when it pioneered e-retail in India, people were not yet familiar with e-commerce and online purchasing.
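
One rough way to quantify the 'quicker pace' is the average number of registered users acquired per year since founding. A minimal sketch in Python (the derivation is ours; only the user counts and founding years come from the text):

    # Average registered-user acquisition rate implied by the figures above.
    flipkart_users, flipkart_years = 22_000_000, 2013 - 2007   # founded 2007
    snapdeal_users, snapdeal_years = 20_000_000, 2013 - 2010   # founded 2010
    print(f"Flipkart: {flipkart_users / flipkart_years / 1e6:.1f} million users/year")  # ~3.7
    print(f"Snapdeal: {snapdeal_users / snapdeal_years / 1e6:.1f} million users/year")  # ~6.7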

6.3 Average Order Size of Flipkart for Financial Year 2011-2012

Flipkart hit a milestone, clocking Rs 100 crore in gross merchandise value shipped in a month for the first time in June 2012, up from a monthly average of around Rs 42 crore in financial year 2010-11. Flipkart had clocked Rs 500 crore for the 12 months ended March 31, 2012.

In 2012 the number of daily orders hit the 25,000 mark (or about seventeen orders per minute), a five-fold rise after the company clocked 5,000 orders a day for the first time in May 2011. Flipkart had first clocked 1,000 orders a day in March 2010.

Average order size = Total revenue / Number of orders
= Rs. 5,000,000,000 / (25,000 x 365)
= Rs. 548 (approximately)
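
The same arithmetic as a minimal Python sketch (figures from the text):

    # Average order size = total revenue / number of orders.
    # Rs 500 crore of GMV over 12 months, at 25,000 orders per day.
    total_revenue = 5_000_000_000       # Rs (Rs 500 crore)
    orders_per_year = 25_000 * 365
    print(f"Average order size: Rs. {total_revenue / orders_per_year:.0f}")  # ~548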


6.4 Percentage of Mobile Visits

Mobile is now one of the most strategic channels for driving revenue and customer acquisition. E-retailers are investing in strong technology and marketing platforms that will allow them to accelerate their growth on mobile.

Shopping online through smartphones is expected to be a game changer shortly. In the year 2014

there were nearly 123 million smartphone users in India. Affordability of smartphones is

leading to growth in mobile Internet users, generating a fresh consumer base for online players. Mobile Internet users in India are estimated at 120 million, compared to 100 million users accessing the Internet on personal computers.

Snapdeal

By the end of 2014, 60 percent of Snapdeal's orders were coming over mobile, and the share is growing fast. Snapdeal now gets more traffic from mobile than from personal computers.

Flipkart

A year ago, less than 10 percent of Flipkart's orders, transactions and visits came from mobile commerce. Now those numbers are greater than 50 percent, and accelerating at a very rapid pace. Flipkart is seeing two to three times more growth on the mobile front than on desktop: the company is growing overall, but mobile is growing much faster.


6.5 Top 10 Indian E-Commerce Sites Traffic Comparison

Stats are for the month of April 2014.

Flipkart topped the charts with over 62 million visits in the month of April, with Myntra coming in a shade lower at 59.5 million. Given that the two have now come together, purely on traffic they clock more than the remaining eight players combined.

While we expected either Amazon or Snapdeal to grab third place in terms of traffic, Jabong took the bronze position with 42.5 million visitors, followed by Snapdeal (31.4 million).

Amazon.in clocked a respectable 27.6 million visits in the month of April (remember, they had not even completed a year since launch). If you add Junglee, which is owned by Amazon, their traffic swells to close to 40 million visits.

Infibeam and Tradus have both not been doing too well in terms of traffic, clocking 3.4 million and 3 million visits respectively. According to SimilarWeb, their traffic has been steadily dropping over the last six months; both had close to 5 million visits at the start of the year.


Stats are for the month of April 2014.

When it came to user engagement, Flipkart again reigned supreme, with each visitor spending an average of 8:35 minutes per visit. eBay also had very high engagement at 8:15 minutes, followed by Snapdeal (7:49 minutes).

Myntra had surprisingly low visitor time spent, in fact the lowest of all at 3:04 minutes. Junglee and Jabong were the other two sites with low visitor time spent.

Given that Flipkart had the highest time spent by visitors, it also got the maximum pageviews per visitor (8.53), followed by eBay (8.04). Surprisingly, Tradus did quite well in terms of page views, with an average of 7.57 views per visit.

6.6 Conclusion

The e-commerce industry in India is in its blooming stage now. E-commerce, including online retail, constitutes a small fraction of total sales in India, but is set to grow substantially owing to factors such as rising disposable incomes, rapid urbanization, increasing adoption and penetration of technology such as the internet and mobile phones, a rising youth population, and the increasing cost of running offline stores across the country.


REFERENCES

• https://www.drivingbusinessonline.com.au/articles/5-examples-of-great-e-commerce-sites/
• http://www.jeffbullas.com/2009/09/01/5-case-studies-on-companies-that-win-at-social-media-and-ecommerce/
• http://www.atuljain7.com/consumer-centric-e-commerce-business-models-in-india

MACRO ENVIRONMENTAL ANALYSIS

• https://www.academia.edu/3832983/Cloud_Computing_and_E-commerce
• http://www.bertramwelink.com/index.php/cloud-computing-taking-over-ecommerce-market/
• http://www.maaspros.com/blog/much-bandwidth-need-ecommerce-website
• http://www.netmagicsolutions.com/resources/case-study-flipkart
• http://www.shopify.in/tour/ecommerce-hosting
• http://www.tutorialspoint.com/e_commerce/e_commerce_advantages.htm
• http://www.supplychainquarterly.com/columns/scq201102monetarymatters/
• http://www.inboundlogistics.com/cms/article/maximizing-productivity-in-e-commerce-warehousing-and-distribution-operations/
• http://www.business-standard.com/article/news-cm/e-commerce-to-fire-consumer-aspiration-higher-in-2015-assocham-pwc-study-114122900231_1.html
• http://indiamicrofinance.com/ecommerce-business-india-2014-2015-report-pdf.html
• http://www.studymode.com/subjects/political-issues-in-e-commerce-page1.html
• http://ecommercelawsinindia.blogspot.in/

COMPETITIVE ANALYSIS

• http://www.entrepreneurial-insights.com/threat-of-new-entrants-porters-five-forces-model/
• http://www.forbes.com/sites/greatspeculations/2014/11/24/ebay-through-the-lens-of-porters-five-forces/


MARKET SHARE VALUE

• http://www.business-standard.com/article/companies/jabongs-revenue-rose-50-times-in-fy13-114112001047_1.html
• http://www.business-standard.com/article/companies/snapdeal-raises-100-mn-eyes-1bn-revenue-this-year-114052100665_1.html
• https://www.google.co.in/search?q=naaptol+sales+revenue&rlz=1C1GGGE_enIN618IN618&oq=naaptol+sales+revenue&aqs=chrome..69i57.8943j0j4
• http://articles.economictimes.indiatimes.com/2014-12-16/news/57112180_1_amazon-india-ebay-india-latif-nathani

PERFORMANCE ANALYSIS

• http://blog.bigcommerce.com/7-key-ecommerce-metrics/
• http://trak.in/tags/business/2014/11/06/flipkart-amazon-snapdeal-revenues-losses-comparison/
• http://www.flipkart.com/
• http://www.snapdeal.com/
• http://trak.in/tags/business/2014/06/04/top-10-indian-e-commerce-sites-comparison/
• http://techcircle.vccircle.com/2012/07/03/excl-flipkart-hits-rs-100cr-monthly-sales-mark-now-serving-seven-orders-per-minute/
• http://www.medianama.com/2014/05/223-snapdeal-mobile-transactions/
• http://www.iamwire.com/2014/12/myntra-set-mobile-only-company-2015/107014
• http://gadgets.ndtv.com/mobiles/news/m-commerce-to-contribute-up-to-70-percent-of-online-shopping-experts-628106

GLOBAL AND INDIAN SCENARIO

• http://dipp.nic.in/English/Discuss_paper/Discussion_paper_ecommerce_07012014.pdf
• http://www.pwc.in/assets/pdfs/publications/2014/evolution-of-e-commerce-in-india.pdf
• http://www.iamwire.com/2015/01/e-commerce-vs-indian-e-commerce-identifying-missing-pieces/108066

Electronic banking

DEFINITION OF ELECTRONIC BANKING

The term electronic banking means all-day access to cash through an automated teller machine (ATM), or direct deposit of pay cheques into a checking or savings account. But electronic banking involves many different types of transactions, rights, responsibilities and, sometimes, fees. In its simplest form, electronic banking can mean the provision of information or services by a bank to its customers via a computer, television, telephone, or mobile phone.

ORIGIN OF ELECTRONIC BANKING IN NIGERIA

The Structural Adjustment Programme (SAP) of 1986, under the Babangida regime, brought to an end the kind of banking services rendered by the first generation of banks in Nigeria. The SAP changed the content of the banking business. As the number of banks increased from 40 in 1985 to 125 in 1991, the SAP licensed more banks, which posed a greater threat to existing ones and made the marketing techniques they adopted more aggressive. In the process of competing with one another, banks adopted electronic banking in order to maintain a good competitive position.

EVOLUTION OF ELECTRONIC BANKING IN NIGERIA

Banking has come a very long way, from the period of ledger cards and other manual filing systems to the computer age. Computerization in the banking industry was first introduced in the 1970s by Societe Generale Bank (Nigeria) Limited. Until the mid-1990s, the few banks that were computerised made use of Local Area Networks (LANs) within the banks. The more sophisticated banks then implemented WANs by linking branches within cities, while one or two implemented intercity connectivity using leased lines (Salawu and Salawu, 2007).

Banks have adapted technology to their operations and have advanced from very simple and basic retail operations of deposits, cash withdrawal and cheque processing to the delivery of sophisticated products, a result of keen competition in view of unprecedented increases in the number of banks and branches. There was a need to modernize banking operations in the face of increased market pressure and customers' demand for improved service delivery and convenience. According to Sanusi (2002), as cited by Dogarawa (2005), the introduction of e-banking (e-payment) products in Nigeria commenced in 1996, when the CBN granted All States Trust Bank approval to introduce a closed-system electronic purse. This was followed in February 1997 by the introduction of a similar product, called 'Paycard', by Diamond Bank.

The CBN additionally gave permission to a number of banks to introduce international money transfer products, online banking via the internet, and telephone banking, though on a limited scale. Some banks also deployed automated teller machines (ATMs) to facilitate card usage and enhance their service delivery. Today, nearly all banks in Nigeria operate a website. The service of ordering bank drafts or certified cheques made payable to third parties has also been increasingly automated (Irechukwu, 2000).

CHANNELS OF ELECTRONIC BANKING PRODUCT IN NIGERIA

The revolution in the Nigerian banking system led to an increase in paid-up capital from N2 billion to N25 billion, effective from 1 January 2006. This resulted in the liquidation of weak banks in Nigeria that could not find merger partners. The revolution brought changes to banking operations in Nigeria, with aggressive competition among the various banks. Each bank came up with new products, repackaged old ones, and devised more efficient service delivery strategies. This more efficient service delivery was made possible through investment in information and communication technology (ICT) (Sanni, 2009). The huge investment in ICT has been the backbone of electronic banking, which uses different distribution channels. It should be noted that electronic banking is not just banking via the Internet; the term can be described in many ways.

PC BANKING

A personal computer allows the customer to use all e-banking facilities at home without going to the bank. It gives consumers a variety of services, letting them move money between accounts, pay their bills, check their account balances, and buy and sell goods.

MOBILE BANKING

Mobile phones are widely used for financial services in Nigeria. Banks enable their customers to conduct banking services such as account inquiries and funds transfers through the mobile telephone.

AUTOMATED TELLER MACHINE

This is an electronic device provided by banks which allows customers to withdraw cash and check their account balances at any time of day without the need for a human teller. Many ATMs also allow people to transfer money between their bank accounts or even buy postage stamps. To withdraw cash, make deposits, or transfer funds between accounts, you insert your ATM card and enter your personal identification number (PIN). Some ATMs impose a usage fee on consumers who are not members of their institution; ATMs must make customers aware of the fee to be charged, either on the terminal screen or on a sign next to it. If an ATM card is lost or stolen, the holder should notify the bank.

SMART CARDS

This involves conducting banking transactions through the use of electronic cards (value cards, ATM cards, debit cards, credit cards, etc.). Smart cards make it easy for bank customers to access cash, carry out transfers and make enquiries about their account balances without visiting the banking hall.

(i) Credit cards: These are plastic cards encoded with electromagnetic identification. Credit cards allow their holders to make purchases without any immediate cash payment; the credit limit is fixed by the issuing bank based on the financial history of the user.

(ii) Debit cards: In contrast with a credit card, a debit card is an instrument that enables an immediate charge or debit to the cardholder's account for goods and services sold to him or her; in other words, the holder is spending the balance standing in his or her deposit account.

POINT OF SALE TERMINAL (POS)

A Point of Sale (POS) terminal is a machine used to accept cards for payment of goods and services. A POS terminal allows a cardholder real-time online access to the funds and information in his or her bank account through the use of debit or cash cards.

TRANSACTION ALERT

Customers carry out debit and credit transactions on their accounts on a daily basis, and the need to keep track of those transactions prompted banks to create alert systems that notify customers of each transaction as it takes place. The alert system also reaches out to customers when necessary information needs to be communicated.

ELECTRONIC DATA INTERCHANGE (EDI)

This is the transfer of information between organizations in machine-readable form.

INTERNET BANKING

Internet banking permits bank customers to conduct transactions on their accounts from any location in the world, such as making enquiries and paying bills, with speedy responses through the web and online email systems.

ELECTRONIC CHEQUE

This allows internet users to pay their bills directly over the internet without having to send a paper cheque.

BENEFITS OF ELECTRONIC BANKING

Electronic banking is important to both customers and banks in various ways.

To banks:

(1) Improved customer service: Electronic banking allows banks to provide new, faster and better services to their customers, bringing the banks up to international standards and enhancing competition among banks.

(2) Reliability of transactions: When transactions are done manually, errors are prone to happen; electronic banking helps ensure accurate and timely transactions.

(3) Safety: Electronic banking ensures the safety of banks' dealings with their customers. Unsafe banking practices can cause huge losses to the bank, especially in cases of misrepresentation of account ownership.

(4) Reduction in workload: With the introduction of electronic banking, the bank's workload is reduced, as more people conduct their transactions electronically rather than coming to the bank.

(5) Information: Electronic banking makes it easy for banks to convey information about their services to customers through the internet; banks can also easily send information to customers by electronic mail.

These are some of the services banks provide to customers; banks can also easily provide statements of account by sending them to customers' e-mail, which makes things more convenient for customers.

To customers:

Electronic banking provides various benefits to customers, such as:

(1) Availability of cash: Electronic banking makes it easy for customers to get cash from their account any time they need it, through the Automated Teller Machine (ATM).

(2) Stress relief: Since transactions can be done anywhere through electronic banking, customers are more comfortable.

(3) Payment of bills: It is easy for customers to pay bills such as PHCN (Power Holding Company of Nigeria) bills or DSTV renewals when they expire. This is possible because banks provide various means for such payments, such as Quickteller.

(4) Access to information: Bank customers can easily get information from their banks about new products or about a problem that has occurred.

(5) General consumer benefits: Increased convenience, more service options, reduced risk of cash-related crimes, cheaper access to banking services, and access to credit.

REASONS FOR AUTOMATION OF BANKING OPERATIONS

According to Idowu (2005), the following are the reasons for the adoption of e-banking in Nigeria:

(a) To the bank

(1) Facilitation of easy decision making

(2) Availability of quality information

(3) Improvement in service delivery

(4) Development of new products

(5) Savings in space and running costs

(6) Relevance among the league of global financial institutions

(b) To the customer

(1) The quality of service they enjoy

(2) Reduction in time spent in banking halls

(3) Confidentiality

(4) Statements of account obtained easily

(5) 24-hour service delivery

(6) Accounts can be accessed almost anywhere in the world

(c) To the economy

(1) Creation of jobs

(2) Improvement in commerce

(3) Development in technology

(4) A data bank for national planning

CHALLENGES OF E-BANKING IN NIGERIA

Some of the problems facing electronic banking in Nigeria are:

(1) MONEY LAUNDERING: Money laundering is the legitimizing of money derived from illicit activities, especially drug trafficking, advance-fee fraud and other illegal activities. Developments in electronic banking make it possible to transact business electronically, and this can be used to launder money.

(2) FRAUD: Fraud means a conscious and deliberate action by a person or group of persons with the intention of altering the truth or facts for selfish personal gain. The system's high exposure to fraudsters and other criminally minded persons, who could access confidential information if security measures are too weak to protect personal files, is a challenge for electronic banking.

(3) CONSUMER PROTECTION: Another problem of electronic banking is the absence of a regulatory body to protect consumers of the products and services.

(4) SYSTEM OPERATIONAL RISKS: Banks rely on electronic banking to conduct business, and this exposes them to the risk of system failure.

(5) POOR NETWORK: A bad network is a major challenge facing electronic banking in Nigeria; poor connectivity can make it impossible to withdraw money from an Automated Teller Machine (ATM), or to send an alert to a customer when money has been deposited into or deducted from his or her account.

(6) LITERACY ISSUES: This refers to the situation in which not all targeted people are educated, and some do not know how to make use of electronic banking. For instance, a dubious businessman may see a customer struggling to operate a POS (point of sale) terminal and decide to deduct more than what the person actually purchased.

THREATS OF CYBER-CRIMES ON THE NIGERIAN BANKING PREMISES

Fraud known as '419', one of the most popular of all internet frauds, originated in Nigeria in the 1980s. Its development and spread tracked developments in information technology: in the early 1990s it became integrated with telecommunication facilities such as fax and telephone, and from the late 1990s, following the introduction of the internet and computers, 419 crimes became prevalently perpetrated through e-mail and other internet means (Amedu, 2005). The latest dimension taken by these fraudsters is the use of fake internet banking sites, which are used to encourage victims to open accounts with them. These issues cause problems for electronic banking in terms of confidentiality, integrity and availability.

Several factors are responsible for the above situation. They include the weakness of judicial institutions in making and enforcing laws on cyber-crime; inordinate tolerance for corruption among the Nigerian public and government agencies; unemployment among graduates; and the gap between rich and poor caused mainly by bad governance. In the main, the erosion of good value principles and corruption constitute the greatest causes of rising cyber-crime among Nigerians (Amedu, 2005, Domestic Electronic Payment in Nigeria).

CUSTOMER SATISFACTION

Jamal (2003) defined customer satisfaction as the meeting of one's expectations relating to the product used; these are the customer's sentiments and feelings about the product. Previous studies (Schultz and Good, 2000; Churchill and Surprenant, 1982; Patterson, 1993) agree that service performance has a direct impact on customer satisfaction. They hold that the interaction between the organization and the customer plays a key role in organizational success or failure, and that customer satisfaction is a critical performance indicator. File and Prince (1992) explained that satisfied customers will be loyal to the organization and will tell others about their favourable experience, leading to positive word-of-mouth advertising. Sahereh et al. (2013) identified ten (10) factors influencing satisfaction, as follows:

(1) Polite and friendly behaviour: Being polite and friendly to customers generates more custom and strengthens the customer's relationship with the bank. Friendly service is a necessary condition for the development of business and builds a good name for the bank.

(2) Speed in the delivery of services: Prompt service helps customers reach their goals earlier, and anything that does so contributes to their satisfaction.

(3) Accuracy in providing services: This factor aims to minimize the error rate and improve the quality of work to standards acceptable to the public, so as to earn the trust and confidence of customers and increase their satisfaction.

(4) Standard-oriented service: If customers can be assured that facilities are provided on the basis of standards and criteria rather than personal connections, their trust is preserved and they will not be disappointed.

(5) Interest on deposits: Without doubt, depositors pay attention to the real interest they earn, which should be set with careful consideration of inflation and other costs.

(6) Secrecy: Bank customers expect that personnel handling statements of account and other financial matters will not disclose their account details to anybody, even their closest relatives.

(7) Skills of personnel: Research indicates that the necessary qualifications for such posts include agility, speed of work, balance and similar abilities.

(8) Guidance and provision of necessary, helpful information: Correctly guiding customers in how to use services results in faster work and greater customer satisfaction.

(9) Discipline: Discipline is a very important feature in all aspects of human life; it leads to focus on the work and a higher level of service delivery.

(10) Ease of access to services: When customers can easily reach most services, the result is greater customer satisfaction.

BANK CUSTOMER RELATIONSHIP

A bank-customer relationship is a special contract in which a person (the customer) entrusts valuable items to another (the bank) with the intention that those items shall be retrieved from the keeper on demand. The banker is the party entrusted with the valuables, while the person who entrusts them with a view to retrieving them on demand is called the customer.

The relationship between the bank and the customer is based on contract, under certain terms and conditions. For instance, the customer has the right to collect his money on demand, personally or by proxy, and the banker is under an obligation to pay, so long as the proxy is duly authorized by the customer. The terms and conditions governing the relationship should not be leaked to a third party, particularly by the banker, and the items kept should not be released to a third party without the customer's authorization.

A key issue here is how to handle the rising level of fraud prevalent in the entire banking system, and how to make Internet banking fit well into the banking structure of a country.

GUIDELINES ON ELECTRONIC BANKING IN NIGERIA

TECHNOLOGY AND SECURITY STANDARDS

The CBN will monitor the technology acquisitions of banks, and all investments in technology exceeding 10% of free funds will henceforth be subject to approval. Where banks use third parties or outsource technology, they are required to comply with the CBN guidelines.

STANDARDS FOR COMPUTER NETWORKS & INTERNET

(a) Networks used for transmission of financial data must be demonstrated to meet the requirements specified for data confidentiality and integrity.

(b) Banks are required to deploy a proxy-type firewall to prevent a direct connection between the bank's back-end systems and the Internet.

(c) Banks are required to ensure that the implementation of the firewalls addresses the security concerns for which they are deployed.

(d) For dial-up services, banks must ensure that the modems do not circumvent the firewalls, so as to prevent a direct connection to the bank's back-end systems.

(e) External devices such as Automated Teller Machines (ATMs), Personal Computers (PCs) at remote branches, kiosks, etc., permanently connected to the bank's network and passing through the firewall must at a minimum address issues relating to non-repudiation, data integrity and confidentiality. Banks may consider authentication via Media Access Control (MAC) address in addition to other methods.

(f) Banks are required to implement proper physical access controls over all network infrastructures both internal and external.

STANDARDS ON PROTOCOLS

Banks must take additional steps to ensure that, whilst the web provides global access to data and real-time connectivity to the bank's back-end systems, adequate measures are in place to identify and authenticate authorized users and to limit access to data as defined by the Access Control List.

Banks are required to ensure that unnecessary services and ports are disabled.

STANDARDS ON APPLICATION AND SYSTEM SOFTWARE

(a) Electronic banking applications must support centralized (bank-wide) operations or branch-level automation. They may have a distributed, client/server or three-tier architecture based on a file system or a Database Management System (DBMS) package. Moreover, the product may run on computer systems of various types, ranging from PCs and open systems to proprietary mainframes.

(b) Banks must be mindful of the limitations of communications for server/client-based architecture in an environment where multiple servers may be more appropriate.

(c) Banks must ensure that their banking applications interface with a number of external sources. Banks must ensure that applications deployed can support these external sources (interface specification or other CBN provided interfaces) or provide the option to incorporate these interfaces at a later date.

(d) A schedule of minimum data interchange specifications will be provided by the CBN.

(e) Banks must ensure continued support for their banking application in the event the supplier goes out of business or is unable to provide service. Banks should ensure that at a minimum, the purchase agreement makes provision for this possibility.

(f) The bank’s information system (IS) infrastructure must be properly physically secured. Banks are required to develop policies setting out minimum standards of physical security.

(g) Banks are required to identify an ICT compliance officer whose responsibilities should include compliance with standards contained in these guidelines as well as the bank’s policies on ICT.

(h) Banks should segregate the responsibilities of the Information Technology (IT) security officer/group, which deals with information systems security, from those of the IT division, which implements the computer systems.

STANDARDS ON DELIVERY CHANNELS

Mobile Telephony: Mobile phones are increasingly being used for financial services in Nigeria. Banks are enabling customers to conduct some banking services, such as account inquiry and funds transfer, by phone. Therefore the following guidelines apply:

(a) Networks used for transmission of financial data must be demonstrated to meet the requirements specified for data confidentiality, integrity and non- repudiation.

(b) An audit trail of individual transactions must be kept.

Automated Teller Machines (ATM): In addition to guidelines on e-banking in general, the following specific guidelines apply to ATMs:

(a) Networks used for transmission of ATM transactions must be demonstrated to meet the guidelines specified for data confidentiality and integrity.

(b) In view of the demonstrated weaknesses in the magnetic stripe technology, banks should adopt the chip (smart card) technology as the standard, within 5 years. For banks that have not deployed ATMs, the expectation is that chip based ATMs would be deployed. However, in view of the fact that most countries are still in the magnetic stripe conversion process, banks may deploy hybrid (both chip and magnetic stripe) card readers to enable the international cards that are still primarily magnetic stripe to be used on the ATMs.

(c) Banks will be considered liable for fraud arising from card skimming and counterfeiting except where it is proven that the merchant is negligent. However, the cardholder will be liable for frauds arising from PIN misuse.

(d) Banks are encouraged to join shared ATM networks.

(e) Banks are required to display clearly on the ATM machines, the Acceptance Mark of the cards usable on the machine.

(f) All ATMs not located within bank premises must be located in a manner to assure the safety of the customer using the ATM. Appropriate lighting must be available at all times and a mirror may be placed around the ATM to enable the individual using the ATM to determine the locations of persons in their immediate vicinity.

(g) ATMs must be situated in such a manner that passers-by cannot see the key entry of the individual at the ATM directly or using the security devices.

(h) ATMs may not be placed outside buildings unless such ATM is bolted to the floor and surrounded by structures to prevent removal.

(i) Additional precautions must be taken to ensure that any network connectivity from the ATM to the bank or switch is protected, to prevent the connection of other devices to the network point.

(j) Non-bank institutions may own ATMs; however, such institutions must enter into an agreement with a bank for the processing of all transactions at the ATM. If an ATM is owned by a non-bank institution, the processing bank must ensure that the card readers, as well as other devices that capture or store information on the ATM, do not expose information such as the PIN or other information classified as confidential. The funding (cash in the ATM) and operation of the ATM should be the sole responsibility of the bank.

(k) Where the owner of the ATM is a financial institution, the owner must likewise ensure that the card reader and other devices that capture information on the ATM do not expose or store information such as the PIN or other information classified as confidential to the owner of the ATM.

(l) ATMs at bank branches should be situated in such a manner as to permit access at reasonable times. Access to these ATMs should be controlled and secured so that customers can safely use them within the hours of operation. Deployers are to take adequate security steps according to each situation, subject to adequate observance of standard security policies.

(m) Banks are encouraged to install cameras at ATM locations. However, such cameras should not be able to record the keystrokes of such customers.

(n) At the minimum, a telephone line should be dedicated for fault reporting, and such a number shall be made known to users to report any incident at the ATM. Such facility must be manned at all times the ATM is operational.

INTERNET BANKING

Banks should put in place procedures for maintaining the bank's Web site, which should ensure the following:

(a) Only authorized staff should be allowed to update or change information on the Web site.

(b) Updates of critical information (e.g. interest rates) should be subject to dual verification.

(c) Web site information and links to other Web sites should be verified for accuracy and functionality.

(d) Management should implement procedures to verify the accuracy and content of any financial planning software, calculators, and other interactive programs available to customers on an Internet Web site or other electronic banking service.

(e) Links to external Web sites should include a disclaimer that the customer is leaving the bank’s site and provide appropriate disclosures, such as noting the extent, if any, of the bank’s liability for transactions or information provided at other sites.

(f) Banks must ensure that the Internet Service Provider (ISP) has implemented a firewall to protect the bank’s Web site where outsourced.

(g) Banks should ensure that installed firewalls are properly configured, institute procedures for continued monitoring, and ensure that maintenance arrangements are in place.

(h) Banks should ensure that summary-level reports showing web-site usage, transaction volume, system problem logs, and transaction exception reports are made available to the bank by the Web administrator.

LEGAL ISSUES

(a) Banks are obliged not only to establish the identity of their Customers (KYC principle) but also enquire about their integrity and reputation. To this end, accounts should be opened only after proper introduction and physical verification of the identity of the customer.

(b) Digital signatures should not be relied on solely as evidence in e-banking transactions, as there is presently no legislation on electronic banking in Nigeria.

(c) There is an obligation on banks to maintain the secrecy and confidentiality of customers' accounts. In an e-banking scenario, there is a risk of banks not meeting this obligation: banks may be exposed to an enhanced risk of liability to customers on account of breaches of secrecy, denial of service, etc., arising from hacking or other technological failures. Banks should therefore institute adequate risk-control measures to manage such risks.

(d) Banks should protect the privacy of the customer’s data by ensuring:

(1) Customers' personal data are used only for the purpose for which they were compiled.

(2) The consent of the customer must be sought before the data are used for any other purpose.

(3) The customer may request, free of cost, the blocking or rectification of inaccurate data, or enforce a remedy against breach of confidentiality.

(4) Processing of children's data must have the consent of their parents, and there must be verification via regular mail.

(5) Strict criminal and pecuniary sanctions are imposed in the event of default.

(e) In e-banking, there is very little scope for the banks to act on stop payment instructions from the customers. Hence, banks should clearly notify the customers the time frame and the circumstances in which any stop-payment instructions could be accepted.

(f) While recognizing the rights of consumers under the Nigerian Consumer Protection Council Act, which also apply to consumers of banking services generally, banks engaged in e-banking should endeavour to insure themselves against the risks of unauthorized transfers from customers' accounts through hacking, denial of service on account of technological failure, and the like, so as to adequately insulate themselves from liability to customers.

(g) Agreements reached between providers and users of e-banking products and services should clearly state the responsibilities and liabilities of all parties involved in the transactions.

12 Years a Slave

According to Drew Faust, author of Culture, Conflict, and Community, there was a slave owner named James Henry Hammond who had no real idea of how to control slaves; he had married into the position and did not know what to do or how to command them. He began to listen to his friends, who had suggested: 'Be kind to them make them feel an obligation and by all means keep all other Negros away from the place, and make yours stay at home- and raise their church to the ground-keep them from fanaticism for God's sake and your own.' So he did just that. He began to tear down their churches so that they would assimilate into the white churches, hoping that taking their own churches away would make them comply. They became less religious for quite some time, but then they began to rise up again, acting lazy and defiant because of the lack of authority. 'The slaves, accustomed to a far less rigorous system of management, resented his attempts and tried to undermine his drive for efficiency.' Because of this disobedience he began to punish them severely, constantly beating them senseless if they did not follow orders. That was the norm among most slave owners: they would casually beat their slaves for disobeying their master, and sometimes simply for the hell of it. This is clearly evident in 12 Years a Slave.

In 12 Years a Slave, an African American man named Solomon Northup is a free man in the North, living in New York with his wife and two children. He is a skilled violinist and is approached by two individuals asking if he wants to perform for a circus they are opening in Washington, which would pay greatly. He agrees, but is drugged and sent to the South under the name Platt, a runaway slave from Georgia. He is sold to one plantation but is later sent to another; the reason will be stated later. At this second plantation his owner, Epps, is known as anything but a kind slave owner: he was actually known for being incredibly cruel to those who disobeyed his orders. He interprets the Bible as saying that slaves who disobey their master should receive 100 lashes if necessary. Epps had the slaves pick cotton; the average picked was 200 pounds, and whoever failed to meet the average would be whipped. Northup would usually not meet the quota, so he usually received his share of those lashings. Epps would lash out at slaves whenever he did not get what he wanted, and his wife would also beat one of the slaves out of jealousy towards her.

Now, of course, most slave owners were not always that cruel in the way they treated their slaves. According to Faust, before Hammond took the course of beating the slaves for their disobedience, he began to give them some of what they asked for. After he took away their churches and they failed to join the white churches, he became more lenient and allowed a travelling minister to hold services just for slaves: 'For a number of years he hired itinerant ministers for Sunday afternoon slave services.' He also implemented a system of rewards for those who did well in their tasks, instead of offering no gratitude at all. Of course he would still punish those who failed in their duties, but I suppose it was a start. 'Hammond seemed not so much to master as to manipulate his slaves, offering a system not just of punishments, but of positive inducements, ranging from contests to single out the most diligent hands, to occasional rituals of rewards for all, such as Christmas holidays; rations of sugar, tobacco, and coffee; or even pipes sent to all adult slaves from Europe when Hammond departed on the Grand Tour.' So, as you can see, some slave owners would at times be kinder to their slaves than most others.

In 12 Years a Slave, this is evident as well. During the slave auction, Northup is sold to a plantation owner named William Ford. Ford tries to convince the seller to sell him the daughter of a woman he is buying, just to keep their family together, but the man will not budge even after Ford practically begs for her. Once they arrive at the farm, Northup shows his ingenuity by impressing Ford with a waterway that transports logs quickly and more cheaply. Ford's carpenter, John Tibeats, said it couldn't work, and when it did he quickly came to resent Northup for it. One day Tibeats began to harass Northup and the two got into a scuffle, which Northup won, but Tibeats threatened him. Ford's overseer, Chapin, told Northup to stay on the plantation, because if he left Chapin would not be able to protect him. Tibeats came back later with two of his friends and tried to lynch Northup. Chapin rescued him from the three men, warning them at gunpoint that Ford held a mortgage on Platt and that if they hanged him Ford would lose that money; he then ordered them to release Northup and leave. Chapin left Northup on his tiptoes, so that the noose would not wring his neck, all day until Ford came and rescued him. That night Ford kept Northup in the house to protect him, and told him that in order to save his life he had sold his debt to Epps. This shows how tender and kind some slave owners were compared to cruel ones like Epps.

Elizabeth Keckley

Elizabeth Keckley's life was an eventful one. Born a slave in Dinwiddie Court-House, Virginia, to slave parents, she did not have it easy, as her early years were crowded with incidents.

She was only four years old when her mistress, Mrs. Burwell, delivered a beautiful black-eyed baby, whose care was assigned to Elizabeth, a child herself. The task did not seem very hard to her, as she had been educated to serve others and to rely much on herself. If she met Mrs. Burwell's expectations, it would be her passport into the plantation house, where she could work alongside her mother, who did most of the cooking and sewing in the family. Rocking the cradle as hard as she could, she dropped the baby on the floor and immediately panicked, attempting to pick it up with the fire-shovel until her mistress came into the room and started screaming at her. It was then that she received her first lashing, but it would not be her last punishment. It was, however, the one she would remember most.

At seven years old, Elizabeth saw a slave sale for the first time. Her master had just acquired the hogs he needed for the winter but did not have enough money for the purchase. To avoid the shame of not being able to pay, he decided to sell one of his slaves: little Joe, the cook's son. His mother was kept in the dark, in spite of her suspicions. She was told little Joe was coming back the next morning, but mornings passed and she never saw him again.

By the time she was eight, the Burwell family had four daughters and six sons, with a large number of servants. Elizabeth did not see much of her father, as he served a different master; the family was able to be together only twice a year, at Christmas and during the Easter holidays. Her mother, Agnes, was thrilled when Mr. Burwell made arrangements for her husband to come and live with them, and little Lizzie, as her father used to call her, was ecstatic finally to have her family together. That lasted only until Mr. Burwell arrived one fine day bringing a letter saying that her father had to leave for the West with his master, who had decided to relocate there. That was the last time she ever saw her dad.

Another memory that Elizabeth could not shake was the death of one of her uncles, another slave of Mr. Burwell's. One day he lost his pair of plough-lines, but Colonel Burwell offered him a new pair and told him he would be punished if he lost those too. A couple of weeks later the new pair was stolen, and he hanged himself for fear of his master's reaction. It was Lizzie's mother who found him the next morning, suspended from one of the willow's solid branches down by the river. He chose taking his own life over his master's punishment.

Because the young couple had no slaves of their own, at 14 Lizzie was separated from her mother and given as a chore girl to her master's oldest son, who lived in Virginia. His wife was a helpless, morbidly sensitive girl with little parenting skill. Reverend Robert Burwell was earning very little money, so he could not afford to buy Elizabeth, only to benefit from her services thanks to his father. Living with the minister, she had to do the work of three people, and still they did not find her trustworthy. By the time she was 18, Elizabeth had grown into a proud, beautiful young woman. Around that time the family moved to Hillsboro, North Carolina, where the minister was assigned a church of his own.

Mr. Bingham, the school principal, was an active member of the church and a frequent visitor to the house. He was a harsh, pitiless man who became the mistress's tool for punishing Lizzie, as Mrs. Burwell was always looking for vengeance against her for one reason or another. Mr. Burwell was a kind man, but he was highly influenced by his wife and often took after her behaviour. One night, after Elizabeth had just put the baby to sleep, Mr. Bingham told her to follow him to his office, where she was asked to take her clothes off because he was going to whip her. Then she did something that no slave had ever done: she refused. She dared him to give her a reason, telling him that otherwise he would have to force her, which he did. She was too proud to give him the pleasure of seeing her suffer, so she just stood there like a statue, her lips firmly closed, until it was over. When he finally let her go, she went straight to her master and asked for an explanation, but Mr. Burwell did not react in any way and only told her to leave. When she refused to go, the minister hit her with a chair. Lizzie could not sleep that night, and it was not from the physical pain but from the mental torture she had suffered. Her spirit stoically refused this unjust treatment, and as much as she tried, she could not forgive those who had inflicted it upon her.

The next day all she wanted was a kind word from those who had made her suffer, but that did not happen. Instead, she continued to be lashed regularly by Mr. Bingham, who convinced Mrs. Burwell it was the right thing to do to cure her pride and stubbornness. Lizzie continued to resist him, more proud and defiant every time, until one day he started crying in front of her, telling her she did not deserve it and he could not do it anymore. He even asked Lizzie for forgiveness, and from that day on he never whipped one of his slaves again.

When Mr. Bingham refused to perform this duty any longer, it became Mr. Burwell's turn, urged on by his jealous wife. Elizabeth continued to resist, though, and eventually her attitude softened their hearts; they promised never to beat her again, and they kept their promise.

Sadly, this kind of event was not the only thing that caused her pain during her residence at Hillsboro. Because she was considered fair-looking for one of her race, she was abused by a white man for more than four years, until she became pregnant and gave birth to a boy, the only child she ever had. It was not a child she had wanted, because of the society she was part of: a child of two races would always be frowned upon, and she did not want him to suffer as she had.

The years passed and many things happened during that time. One of Elizabeth's old mistress's daughters, Ann, married Mr. Garland, and Lizzie went to live with them in Virginia, where she was reunited with her mother. The family was poor and could not afford a living in Virginia, so Mr. Garland decided to move away from his home to the banks of the Mississippi in search of better luck. Unfortunately, moving did not change anything, and the family still lacked the resources to make a living. It got to the point where they considered putting Agnes, her mother, out to service. Lizzie was outraged by the idea that her mother, who had been raised in this family and had grown up to raise their children years later and love them as her own, would have to go and work for strangers. She would have done anything to prevent this from happening. And she did. She convinced Mr. Garland to let her find work to help the family and to keep her mother close to her. It was not hard to find work, and she soon had quite a reputation as a seamstress and dressmaker. All the ladies came to her for dresses, and she never lacked clients; she was doing so well that she managed to support a seventeen-member family for almost two and a half years. Around that time Mr. Keckley, whom she had met earlier in Virginia and regarded with a little more than friendship, came to St. Louis and proposed to her. She refused at first, saying that she had to think about his offer; what scared her was the thought of giving birth to another child that would live in slavery. She loved her son enormously, but she always felt it was unfair for the free side of him, the Anglo-Saxon blood that he had, to be silenced by the slave side he was born with. She wanted him to have the freedom he deserved. After thinking about it for a long time, she decided to go to Mr. Garland and ask what price she should pay for her freedom and her son's. He dismissed her immediately and told her never to say such a thing again, but she could not stop thinking about it. With all the respect she had for her master, she went to see him again and asked what price she had to pay for herself and her son to be free.

He finally gave in to her requests and told her that $1,200 was the price of her freedom. This gave a silver lining to the dark cloud of her life, and with the prospect of freedom she agreed to marry Mr. Keckley and start a family with him. But years passed and she could not manage to save that amount of money, because her duties with the family were overwhelming and left little time for anything else. Her husband, Mr. Keckley, also proved to be more of a burden than a support for her and the boy. Meanwhile Mr. Garland died and Elizabeth was given to another master, a Mississippi planter, Mr. Burwell, a compassionate man who told her she should be free and that he would help with anything she needed to raise the amount required to pay for that freedom.

Several plans were thought through, until Lizzie decided she should go to New York and appeal to people's generosity to help her carry out her plan. All was set; all she needed now was six men to vouch with their money for her return. She had many friends in St. Louis and did not think it would be a problem, and she easily gathered the first five signatures. The sixth was Mr. Farrow, an old friend of hers, and she did not think he would refuse her. He did not, but neither did he believe she would come back. Elizabeth was puzzled that he did not believe in her cause, and she could not accept his signature if he really thought it was her final goodbye. She went home and started to cry, looking at her ready-to-go trunk and at the luncheon her mother had prepared for her, believing that her dream of freedom was nothing but a dream, and that she and her son would die slaves, the same way they were born.

And then something happened that she never expected. Mrs. Le Bourgois, one of her patrons, walked in and turned her world around. She said it would not be fair for Elizabeth to beg strangers for money; it was the people who knew her who should help. She would give her $200 from herself and her mother, and she would ask all her friends to help Elizabeth. She was successful, and rapidly managed to raise the $1,200 Elizabeth needed. And that was it. Lizzie and her sixteen-year-old son, George, were finally free. Free to go anywhere they wanted. Free to start over and to have the life they always wanted. Free by the laws of men and by the smile of God.

Critical review II: The Dependency theory

Latin American countries have always been exposed to Western influence. With its neo-liberalist stance, the West encourages Latin America to open up its trade and cooperate with the West. During the 1960s many countries wanted to keep Western influence out because they were convinced that it would negatively affect their development. A consequence of this attitude was the development of a theory that criticized the Western liberalist stance: dependency theory. Dependency theory criticizes Western modernity. This critical stance can be explained by the fact that Latin America has historically been exposed to the political, economic, cultural and intellectual influence of the US, and by the region's recurrent attempts to diminish US domination (Tickner 2008: 736). The region, being part of the non-core, wants its understanding of global realities to be explored (Tickner 2003b). However, the theory has itself been influenced by the West and has thereby lost strength. Moreover, its empirical validity has been questioned. For these reasons the theory is hardly used anymore. It should not be forgotten, however, that the theory has some qualities which do contribute to the field of international relations. This essay will first discuss the post-colonial argument that dependency theory is not critical enough and explain why the theory is Eurocentric; it will then discuss the theory's empirical validity, and finally it will address the theory's contribution to the field of international relations.

Dependency theory originated in the 1960s in Latin America. Frank, the leading theorist, argues that because of the capitalist system, developing countries are underdeveloped, and that development is impossible as long as they remain within that system (Frank 1969). Opposed to Frank, Cardoso and Faletto argue that development is possible despite structural determination and that the local state has an important role; they call this associated dependent development (1979: xi). Yet Frank as well as Cardoso and Faletto ultimately argue that dependency paths need to be broken by constructing paths toward socialism. The dependency theorists are thus critical of Western liberalism.

As mentioned above, dependency theory attempts to criticize Western modernity. Post-colonialism, however, argues that dependency theory is not counter-modernist and not critical enough. Post-colonial theorists, in contrast to many conventional theorists, state that attention to colonial origins is needed to gain a better understanding of the expansion of the world order (Seth 2011). Dependency theory does pay attention to colonial origins. For example, Frank argues that dependency arose historically through slavery and colonialism and continues today through Western dominance of the international trading system, the practices of multinational companies and the LDCs' reliance on Western aid (Frank 1969). However, post-colonialism criticizes the way dependency theorists address these colonial origins. The first critique is that the homogenizing and incorporating world-historical scheme of dependency ignores, domesticates, or transcends difference.
It does not take into account the differences in histories, cultures and peoples (Said 1985: 22). The second critique is that, to gain a sufficient understanding of the emergence of the modern international system, one should examine not how an international society that developed in the West radiated outwards, but rather the ways in which international society was shaped by interactions between Europe and those it colonized (Seth 2011: 174).

Post-colonialists further argue that dependency theory is Eurocentric. Dependency theorists are not aware of the way in which culture frames their own analysis: while trying to look at imperialism from the perspective of the periphery, dependistas fail to do so (Kapoor 2002: 654). For dependistas the 'centre' continues to be central and dominant, so that the West ends up being consolidated as sovereign subject. Dependency's ethnocentrism appears in its historical analysis as well. Dependistas use the way capitalism developed in Europe as a universal model that stands for history, and see developing countries as examples of failed or dependent capitalism (Kapoor 2002: 654). Post-colonialism would thus argue that, while challenging the current capitalist system, dependency theory is not critical enough, because it does not adequately address history and culture and is Eurocentric.

Tickner (2008) may provide an explanation of why dependency theory is not critical enough and why it is described in terms of adherence to the capitalist system dominated by the West. IR thinking in Latin America is influenced by, among other things, US intellectual knowledge. As Tickner argues, dependency theory is not a genuinely non-core theory; it has been influenced by US analysts. According to Cardoso (1977), this led to severe distortions of its original contents, because the local internal problems of greatest concern to social scientists in Latin America became invisible while external factors such as US intervention and multinational corporations were prioritized. Furthermore, IR thinking in the region has been influenced by conventional theories. For example, through the influence of realism in Latin America much attention was paid to the role of the elite; theorists were concerned with the concept of power but replaced it with the more suitable concept of autonomy (Tickner 2008: 742). Thus the influence of US IR knowledge and conventional theories may have contributed to the fact that dependency theory is not critical enough and has lost its influence.

Although post-colonialism addresses one of dependency's problems (that it does not sufficiently address culture because of its sole focus on capitalism), it should be noted that not only culture but a whole range of other factors that could help explain underdevelopment are left out of the theory as well. This shortcoming clarifies why dependency theory is empirically invalid. For example, dependency does not address the local physical, social or political forces that might have played a role in the failure to generate industrial development, and it does not acknowledge that imperialism is only partly responsible for underdevelopment (Smith 1981). As Smith argues, "dependency theory exaggerates the explanatory power of economic imperialism as a concept to make sense of historical change in the south" (1981: 757).

Smith rightly points out that in some instances dependistas do recognize the influence of local circumstances, but only in order ultimately to reaffirm the overriding power of economic imperialism. Moreover, the theory pays no attention to the positive effects that contact with the international system can have for developing countries. Dependistas look solely at economic power and state that, as long as countries remain part of the international capitalist system, development is impossible; the only escape is to isolate oneself from the system, or for the colonizer to relinquish political power (Kapoor 2002: 656). However, South Korea's development shows the invalidity of this argument: South Korea experienced rapid growth during the 1970s despite its dependency on the US. This case shows that development is possible despite dependency and that dependency can have positive effects. Another case showing dependency's invalidity is Ghana. During the 1980s Ghana adopted dependency policies consistent with a denial of the relevance of Western economic principles and tried to keep Western influence out. However, instead of bringing prosperity and greater independence for the Ghanaian economy, these policies caused poverty and greater dependence on international aid and charity (Ahiakpor 1985: 13). The dependency policies did not support Ghana's development because they focused only on capitalism without taking other factors into account. The theory thus has significant shortcomings, which explains its loss of influence.

However, it should be noted that the theory has some qualities as well. Despite its homogenizing and Eurocentric treatment of history, the theory recognizes that history is an important factor. The advantage of dependency's structural-historical perspective is that broad patterns and trends can be recognized; moreover, it allows one to learn from past mistakes to change the future (Kapoor 2002: 660). On top of that, as Wallerstein argues, "One of the crucial insights and contributions of dependency is the conceptualization of the unicity of the world system" (1974: 3). The theory clearly describes how the world is incorporated into the capitalist system, and it shows the importance of economic considerations in political issues. Furthermore, it addresses the importance of local elites and foreign companies in the internal affairs of weak states, whereby it provides an analytical framework (Smith 1981: 756). For example, Cardoso and Faletto argue that if there is no strong local state, the ruling elite may ally themselves with foreign companies to pursue their own interests, thereby upholding dependency and underdevelopment. This analysis is applicable to Congo and shows, for example, that Congo's development is restrained because a strong local state is absent and the local elite allies with foreign companies in pursuit of its own interests (Reno 2006).

In short, dependency theory has lost influence because, from a post-colonial perspective, it is not counter-modernist and not critical enough: it does not adequately address history and culture and is Eurocentric. Moreover, it is empirically invalid because it focuses solely on capitalism without taking other factors into account. In contrast to what dependistas argue, dependent development is possible within the capitalist system, and the proposed isolation from this system may even worsen an economy rather than bring prosperity. However, the theory did influence foreign policymakers and analysts, not only in Latin America but also in Africa, between the 1960s and the 1980s, and it remains useful because it provides an analytical framework on which other analysts can build.

Data Acquisition and Analysis – Curve Fitting and Data Modelling

1 Part 1: Track B: Linear Fitting Scheme to Find Best-Fit Values

Introduction

In a linear regression problem, a mathematical model is used to examine the relationship between two variables, i.e. an independent variable and a dependent variable (the independent variable is used to predict the dependent variable).

This is achieved by applying the least-squares method in Excel: the data values are plotted and a straight line is fitted to derive the best-fit values. Plotting the data points provided gives a nonlinear graph, but after linearization the least-squares method yields a straight line that minimizes the sum of squared differences between the data values and the model values.

Aim

The aim of this coursework (Part 1, Track B) is to carry out a data-analysis assessment of linear model fitting, obtaining the best-fit values by manual calculation (implemented in Excel) from the decay-transient data provided in Table 1.1 below.

Time (sec)    Response (V)

0.01 0.812392

0.02 0.618284

0.03 0.425669

0.04 0.328861

0.05 0.260562

0.06 0.18126

0.07 0.1510454

0.08 0.11254

0.09 0.060903

0.1 0.070437

Table 1.1: data for decay transient

Methodology

The data in Table 1.1 above represent a decay transient, which can be modelled as an exponential function of time, as shown below:

V(t) = V0 exp(−t/τ)    (1.1)

The equation above is nonlinear; to make it linear the natural logarithm method is applied

logₑ x = ln x    (1.2)

From the mathematical equation of a straight line:

y = mx + c    (1.3)

y = a0 + a1x + e    (1.4)

Y = ln x    (1.5)

In this case:

Y = ln V    (1.6)

So,

ln V = ln V0 + ln e^(−t/τ)    (1.7)

But ln eˣ = x and e^(ln x) = x, so

ln V = ln V0 − t/τ    (1.8)

Applying the natural logarithm therefore yields two coefficients, ln V0 and −1/τ, which represent a0 and a1 respectively in equation (1.4).

The normal equations for a straight line can be written in matrix form as:

[ n     Σxᵢ  ] {a0}   { Σyᵢ   }
[ Σxᵢ   Σxᵢ² ] {a1} = { Σxᵢyᵢ }    (1.9)

This can be separated to give:

[ 1   1   …  1  ] [ 1  x₁ ]          [ 1   1   …  1  ] [ y₁ ]
[ x₁  x₂  …  xₙ ] [ 1  x₂ ] {a0}  =  [ x₁  x₂  …  xₙ ] [ y₂ ]
                  [ ⋮  ⋮  ] {a1}                       [ ⋮  ]
                  [ 1  xₙ ]                            [ yₙ ]    (1.10)

From (1.10) the general linear least-squares fit equation is given as:

[Zᵀ][Z]{A} = [Zᵀ]{Y}

The main purpose is to calculate {A} (that is, ln V0 and −1/τ), the coefficients of the linear equation in (1.8). The matrix method in Excel is used to achieve this by finding values for [Z], [Zᵀ], [Zᵀ][Z], [Zᵀ][Y] and [[Zᵀ][Z]]⁻¹.

The table below shows the values of ln V calculated in Excel by applying the natural logarithm to the response V.

Table 1.2: Excel table showing calculated values of ln V

[Z] = [ 1  0.01 ]
      [ 1  0.02 ]
      [ 1  0.03 ]
      [ 1  0.04 ]
      [ 1  0.05 ]
      [ 1  0.06 ]
      [ 1  0.07 ]
      [ 1  0.08 ]
      [ 1  0.09 ]
      [ 1  0.10 ]

The transpose of [Z] is given as:

[Zᵀ] = [ 1    1    1    1    1    1    1    1    1    1    ]
       [ 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09 0.10 ]

The product of [Z^T ][Z] is given as

[Zᵀ][Z] = [ 10.0000  0.5500 ]
          [ 0.5500   0.0385 ]

The inverse [[Z^T ] [Z]]^(-1) of the matrix [Z^T ][Z] is given as

[[Zᵀ][Z]]⁻¹ = [ 0.4667    −6.6667  ]
              [ −6.6667   121.2121 ]

The product of the transpose [Zᵀ] and [Y] (the vector of ln V values) is given as:

[Zᵀ][Y] = [ −15.2337 ]
          [ −1.0758  ]

To obtain {A}, the product of [[Zᵀ][Z]]⁻¹ and [Zᵀ][Y] was calculated, giving:

{A} = [ a0 ] = [ ln V0 ]
      [ a1 ]   [ −1/τ  ]

Where;

ln V0 = a0 and −1/τ = a1

{A} = [ 0.0626   ]
      [ −28.8434 ]

So,

ln V0 = 0.0626 and −1/τ = −28.8434

V0 = exp(0.0626) = 1.0646 and τ = −1/(−28.8434) = 0.03467

Then,

V(t) = 1.0646 exp(−t/0.03467)

Since Y = ln V(t), the fitted straight line is Y = 0.0626 − 28.8434t.
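As a cross-check on the Excel matrix arithmetic, the same normal-equation solution can be reproduced in a few lines of code. This is a minimal sketch assuming Python with NumPy, which were not part of the original coursework toolchain:

```python
import numpy as np

# Decay-transient data from Table 1.1
t = np.array([0.01, 0.02, 0.03, 0.04, 0.05,
              0.06, 0.07, 0.08, 0.09, 0.10])
v = np.array([0.812392, 0.618284, 0.425669, 0.328861, 0.260562,
              0.18126, 0.1510454, 0.11254, 0.060903, 0.070437])

# Linearize V(t) = V0*exp(-t/tau) by taking natural logs: ln V = ln V0 - t/tau
y = np.log(v)

# Design matrix [Z]: a column of ones and a column of t values
Z = np.column_stack([np.ones_like(t), t])

# Solve the normal equations [Z^T Z]{A} = [Z^T]{Y} for A = (a0, a1)
a0, a1 = np.linalg.solve(Z.T @ Z, Z.T @ y)

V0 = np.exp(a0)    # since a0 = ln V0
tau = -1.0 / a1    # since a1 = -1/tau
print(f"a0 = {a0:.4f}, a1 = {a1:.4f}")    # approx. 0.0626 and -28.84
print(f"V0 = {V0:.4f}, tau = {tau:.5f}")  # approx. 1.0646 and 0.0347
```

Up to rounding of the logarithms, this reproduces the values a0 = 0.0626 and a1 ≈ −28.84 obtained above.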

Table 1.3: Excel table showing calculated values of ln V [Y] and of the fitted model ln V(t)

Table 1.4: Diagram of Excel calculation for curve fitting

Figure 1.1: Diagram of transient decay for response V and response V(t)

Figure 1.2: Diagram of transient decay for response V and response V(t)

Conclusion

The solution to this linear regression exercise was achieved by manually calculating the generalized normal equations for the least-squares fit, using the matrix method and the Microsoft Excel program, to obtain the unknown coefficients V0 and τ. The method provides accurate results, and the best-fit values obtained show the close agreement between the fitted straight-line response and the transient response shown in the figures above.

2 Part 2: Track B: Type K thermocouple

INTRODUCTION:

The thermocouple is a sensor used to measure temperature and is common in many industries. For this lab work, a Type K thermocouple is used to acquire first-order transient (non-linear) response data for a temperature change. A signal-conditioning element, the AD595 thermocouple amplifier, is used to improve the thermocouple signal, since the thermocouple produces only a low output voltage proportional to the input temperature. A data-acquisition device, the NI-USB 6008, acquires signals from the signal-conditioning circuit, and a resistor-capacitor (RC) low-pass filter is built to reduce the high-frequency noise in the generated signal; further investigations and analyses are then carried out. In this part, non-linear regression is used to fit the transient response of the thermocouple signal using a LabVIEW program.
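The RC low-pass filter mentioned above attenuates signal content above its cutoff frequency f_c = 1/(2πRC). The following is a minimal sketch of that calculation in Python; the component values are hypothetical, since the report does not specify the ones used in the lab:

```python
import math

# Hypothetical RC component values (not taken from the report)
R = 10_000   # resistance in ohms
C = 100e-9   # capacitance in farads

# Cutoff (corner) frequency of a first-order RC low-pass filter
f_c = 1.0 / (2.0 * math.pi * R * C)
print(f"f_c = {f_c:.1f} Hz")  # ~159.2 Hz for these values
```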

Aim

The aim of the assignment is to produce a LabVIEW program that can acquire transient real data from a Type K thermocouple, a sensor that produces a voltage according to the temperature difference its conductors sense (i.e. a first-order response). This is followed by a non-linear model-fitting procedure that allows the user to capture the thermocouple's initial first-order response to a rising input temperature and to fit an appropriate response-function model to the transient. The program displays the transient data, the fitted model response and the calculated model parameters.

Reason for the choice of model response

The transient response obtained from the Type K thermocouple input (temperature) is first order. A first-order system has only s to the power of one in the denominator of its transfer function (the order of any system is determined by the highest power of s in the transfer-function denominator) and is characterised by no overshoot. Its transfer function is 1/(τs + 1), so for a unit step input 1/s in the Laplace transform (s) domain the output is given as:

Y(s) = 1/(s(τs + 1))    (2.1)

The partial-fraction method is used to find the output signal Y(t):

Y(s) = A/s + B/(τs + 1)    (2.2)

A = [s · Y(s)] evaluated at s = 0, so A = 1

B = [(τs + 1) · Y(s)] evaluated at s = −1/τ, so B = −τ

Y(s) = 1/s − τ/(τs + 1) = 1/s − 1/(s + 1/τ)    (2.3)

Therefore the output signal in the time domain is given as:

Y(t) = L⁻¹[1/s − 1/(s + 1/τ)]    (2.4)

Y(t) = U(t) − e^(−t/τ)    (2.5)

Y(t) = 1 − e^(−t/τ), where t ≥ 0    (2.6)

Substituting the output response V(t) for Y(t), equation (2.6) can also be re-written as:

V(t) = 1 − e^(−t/τ)    (2.7)
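The inverse Laplace transform above can also be sanity-checked symbolically. This is a minimal sketch assuming Python with the sympy library, which was not part of the original Excel/LabVIEW toolchain:

```python
import sympy as sp

s = sp.Symbol('s')
t = sp.Symbol('t', positive=True)
tau = sp.Symbol('tau', positive=True)

# Unit-step response of a first-order system: Y(s) = 1/(s*(tau*s + 1))
Y = 1 / (s * (tau * s + 1))

# Inverse Laplace transform should recover Y(t) = 1 - exp(-t/tau) for t >= 0,
# i.e. U(t) - exp(-t/tau) as in equation (2.5)
y_t = sp.inverse_laplace_transform(Y, s, t)
print(sp.simplify(y_t))
```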

Assume that for a given input temperature T0 the output voltage is V0, and that for an increased temperature T1 the output voltage is V1. The output voltage response to the change in temperature is then:

V(t) = (V1 − V0)(1 − e^(−t/τ))    (2.8)

V(t) = V0 + (V1 − V0)(1 − e^(−t/τ))  (thermocouple voltage response)    (2.9)

Where (V1 − V0) = ΔV:

V(t) = ΔV(1 − e^(−t/τ)) + V0    (2.10)

T(t) = ΔT(1 − e^(−t/τ)) + T0    (2.11)

T(t) = a(1 − e^(−t/b)) + c

Equation (2.11) therefore has the same form as the general nonlinear model:

F(x) = a(1 − e^(−x/b)) + c    (2.12)

Where,

F(x) = V(t) (or T(t)); a = ΔV; b = τ; and c = V0

The thermocouple's voltage output is nonlinear, giving the first-order response curve sketched below (Digilent Inc., 2010).

Figure 2.1: Thermocouple output voltage first-order response curve

Explanation on the principles of non-linear regression analysis

Non-linear regression is a method used to show how the response and the unknown parameters relate to each other through a functional form, i.e. Y is expressed as a function of x (or of several variables) and of unknown parameters. In other words, the equation being fitted depends non-linearly on one or more of its parameters. The Gauss-Newton method is used to solve non-linear regression problems by applying a Taylor-series expansion to express the non-linear model in a linear form at each iteration.

Unlike a linear regression, a non-linear regression cannot be manipulated or solved directly for its coefficients; an iterative approach is required, which can be exhausting to calculate by hand. A non-linear regression model is given as:

Yᵢ = f(xᵢ, a0, a1) + eᵢ

Where,

Yᵢ = responses

f = function of (xᵢ, a0, a1)

eᵢ = errors

For this assignment the non-linear regression model is given as

f(x) = a0(1 − e^(−a1·x)) + e

Where,

f(x) = V(t); a0 = ΔV; a1 = 1/τ; and e = V0

T(t) = (T1 − T0)(1 − e^(−t/τ)) + T0    (3.3)

V(t) = (V1 − V0)(1 − e^(−t/τ)) + V0    (3.4)

Where (V1 − V0) = ΔV:

V(t) = ΔV(1 − e^(−t/τ)) + V0    (3.5)
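To make the iterative procedure concrete, the sketch below implements a bare-bones Gauss-Newton fit of the first-order model ΔV(1 − e^(−t/τ)) + V0 in Python with NumPy. Python, NumPy and the synthetic data are assumptions for illustration only (the real data came from the thermocouple via LabVIEW), and a production fit would add a convergence test and damping of the step (as in Levenberg-Marquardt):

```python
import numpy as np

# First-order rise model, equation (3.5): f(t) = a*(1 - exp(-t/b)) + c,
# where a = delta-V (step size), b = tau (time constant), c = V0.
def model(t, a, b, c):
    return a * (1.0 - np.exp(-t / b)) + c

def gauss_newton(t, y, p0, n_iter=20):
    """Fit the first-order model to (t, y) by Gauss-Newton iteration."""
    a, b, c = p0
    for _ in range(n_iter):
        e = np.exp(-t / b)
        r = y - model(t, a, b, c)            # residuals
        # Jacobian of the model with respect to (a, b, c)
        J = np.column_stack([
            1.0 - e,                         # df/da
            -(a * t / b**2) * e,             # df/db
            np.ones_like(t),                 # df/dc
        ])
        # Linearized normal equations: (J^T J) delta = J^T r
        delta = np.linalg.solve(J.T @ J, J.T @ r)
        a, b, c = a + delta[0], b + delta[1], c + delta[2]
    return a, b, c

# Hypothetical noisy step-response data, for illustration only
# ("true" parameters: delta-V = 2.0 V, tau = 0.5 s, V0 = 0.1 V).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 3.0, 200)
y = model(t, 2.0, 0.5, 0.1) + rng.normal(0.0, 0.02, t.size)

a_fit, b_fit, c_fit = gauss_newton(t, y, p0=(1.0, 1.0, 0.0))
print(f"delta-V = {a_fit:.3f} V, tau = {b_fit:.3f} s, V0 = {c_fit:.3f} V")
```

LabVIEW's non-linear curve-fitting VIs perform a similar damped iteration internally, so the model parameters reported by the program can be interpreted in exactly these terms.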

Description of measurement task

The intent of the measurement experiment was to identify and analyse the components that make up the measurement task and to examine the transient response of the Type K thermocouple. It covers the method, the instruments and the sequence of actions involved in obtaining the measurements. Equipment such as the Type K thermocouple, the NI PXI-6070E 12-bit I/O card, the AD595 thermocouple amplifier and a LabVIEW software program (used for calculating the model parameters) was employed to carry out this task.

The measurement task is to immerse the Type K thermocouple's sensing junction in hot water, so that the temperature change can be analysed and the corresponding voltage response generated and observed in a LabVIEW program. This activity is repeated several times to acquire the best-fitted model response and parameters.

Choice of signal conditioning elements

The choice of signal-conditioning element used in a measurement is important because it can either enhance the quality and efficiency of the measurement system or reduce its performance.

The AD595 thermocouple amplifier is the signal-conditioning element chosen for use with the Type K thermocouple in this experiment because it has a built-in ice-point compensator (cold-junction compensation) and amplifier, which serve as a reference junction against which the thermocouple's small output voltage, corresponding to the input temperature, is compared and amplified. The AD595AQ chip is pre-calibrated by laser trimming to match the Type K thermocouple characteristic with an accuracy of ±3 °C, operates between −55 °C and 125 °C, and is available in a low-cost 14-pin cerdip package. The AD595 thus provides amplification (gain) of the low output voltage, linearization of the thermocouple's nonlinear output response so that it maps to the equivalent input temperature, and cold-junction compensation, improving the performance and accuracy of thermocouple measurements.

Equipment provided for measurement

Three pieces of equipment were provided for the measurement exercise:

Type K thermocouple

The Type K thermocouple (chromel/alumel) is the most commonly used temperature transducer, with an electromotive force (e.m.f.) sensitivity of about 41 microvolts per degree (μV/°C). Its output is nonlinear: the voltage produced between its two dissimilar alloys changes with temperature, so the input temperature corresponds to the output voltage generated. It is cheap to buy, performs well in rugged environmental conditions, and is calibrated to operate over a wide temperature range of about −250 °C to 1370 °C. One of its constituents, nickel, is magnetic; the magnetic properties of such components can change direction or deviate when subjected to a sufficiently high temperature, which can affect accuracy.

Figure 2.2: Circuit diagram of a thermocouple, signal connector and signal conditioner (cold junction)

NI-USB 6008

The NI-USB 6008 is a National Instruments device that provides DAQ functionality for applications such as portable measurements, data logging and lab experiments. It is affordable enough for academic purposes yet capable of more complicated measurement tasks. It ships with ready-to-run data-logger software that allows the user to perform quick measurements, and it can be configured using National Instruments LabVIEW software. It provides 8 single-ended analog input channels (AI), 2 analog output channels (AO), 12 digital input/output (I/O) lines and a 32-bit counter, with a full-speed USB interface, and it is compatible with LabVIEW 7.x, LabWindows(TM)/CVI, and Measurement Studio DAQ modules for Visual Studio.

Figure 2.3: NI-USB 6008 pin out

AD595 thermocouple amplifier

The AD595 is a thermocouple amplifier and cold-junction compensator on a single chip of semiconductor material (a microchip or IC). It produces a high-level output of 10 mV/°C from the thermocouple's input signal by combining a cold-junction reference with a pre-calibrated amplifier. It has an accuracy of ±1 °C for the A performance grade and ±3 °C for the C performance grade, and it can be powered from a single +5 V supply, with a negative supply added if temperatures below 0 °C are to be measured. Its laser-trimmed calibration conforms to the Type K thermocouple specification, and it is available in a 14-pin side-brazed ceramic DIP (Analog Devices, 1999).

Figure 2.4: Block diagram showing AD595 in a functional circuit

Configuration of the I/O channel(s)

The I/O channel(s) provide a way (path) for communication between the input device (thermocouple sensor) and the output device (DAQ). The thermocouple senses temperature as input and sends the data to the DAQ which receives the data and displays the information through a computer on the Labview front panel graph.

The following explains the configuration of the DAQ for the thermocouple measurement:

Channel Settings: used to select an I/O channel on the DAQ device; either AI0 or AI1 can be chosen and renamed to suit the user

Signal Input Range: used to set the minimum and maximum voltage expected from the AD595 thermocouple amplifier; matching this range to the signal helps achieve better resolution from the NI-USB 6008 data acquisition device

Scale Units: used to select the scale unit of the analog signal generated; since the thermocouple output signal measured, corresponding to temperature, is a voltage, 'Volts' was chosen

Terminal Configuration: used to choose the terminal on the DAQ to which the signal conditioning circuit is connected (differential in this case, using +AI0 and −AI0)

Custom Scaling: 'No Scale' was chosen, since no custom scale was adopted

Acquisition Mode: used to select how the samples are acquired; 'Continuous Samples' was chosen because it allows the DAQ to collect data signals continuously from the circuit until the user decides to stop

Samples to Read: allows the user to choose how many samples to read, depending on the frequency of the input signal; the sampling rate should be at least twice the highest signal frequency (the Nyquist criterion) to capture all the desired signal content. Two thousand (2k) samples were chosen

Rate (Hz): allows the user to choose the rate at which samples are acquired; a rate of 1 kHz was chosen. (A script-based sketch of this configuration is shown below.)
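For readers working outside LabVIEW, the sketch below mirrors the DAQ Assistant settings listed above using NI's nidaqmx Python API. It is a minimal sketch, not the procedure used in the lab: the device name "Dev1" is an assumption, while the channel, input range, continuous acquisition mode, 1 kHz rate and 2k samples follow the configuration described above.

# A sketch of the DAQ Assistant configuration using the nidaqmx Python
# API; "Dev1" is an assumed device name.
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    # Channel settings and signal input range (AD595 output on AI0)
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0", min_val=0.0, max_val=5.0)
    # Acquisition mode (continuous) at a rate of 1 kHz
    task.timing.cfg_samp_clk_timing(rate=1000,
                                    sample_mode=AcquisitionType.CONTINUOUS)
    # Samples to read: 2k, i.e. two seconds of data at this rate
    voltages = task.read(number_of_samples_per_channel=2000)
    temps_c = [v * 100.0 for v in voltages]  # 10 mV/°C, so x100 gives °C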

Connection of Circuit to DAQ and Configuration of I/O channel(s)

The connection of the circuit to the NI-USB 6008 data acquisition device was carried out by connecting two wires from the output voltage and ground of the signal conditioning unit i.e. the AD595 device.

The red wire from the signal conditioning unit was connected to the positive port of the analog input channel 0 (+AI0) of the DAQ device and the black wire from the ground was connected to the negative port of the analog input channel 0 (-AI0) of the DAQ. The diagrams below show the connections between the signal conditioning circuit and the connector block (DAQ).

Figure 2.5: Picture showing the connection of the signal conditioning circuit with the DAQ

Description of the Labview VI

LabVIEW is a National Instruments system design software that is well suited to control and measurement applications, providing engineers with tools to test and solve practical problems in a short amount of time and to design control systems. It is less complex and easier to use than many other programming and simulation applications. A LabVIEW Virtual Instrument (VI) program comprises a Front Panel and a Block Diagram; for this lab experiment it is used to examine and determine the thermocouple frequency response and to analyse the noise present in the filtered and unfiltered thermocouple voltage signals, displaying the results on its graph indicators. The Front Panel and Block Diagram are described as follows:

Figure 2.6: Block diagram of the Labview design

Block diagram:

The block diagram is where a user creates the underlying code for the LabVIEW program. When the block diagram is active, the program can be built using the Functions palette, which contains objects such as structures, numeric, Boolean, string, array, cluster, time and dialog, file I/O, and advanced functions that can be added to the block diagram.

Front Panel:

The front panel is the graphical user interface that allows the user to interact with the LabVIEW program. It can appear in silver, classic, modern or system style. Controls and indicators are located on the Controls palette and are used to build and add objects such as numeric displays, graphs and charts, and Boolean controls and indicators to the front panel.

DAQ Assistant:

The DAQ Assistant allows a user to configure, generate and acquire analog input signals from any of the data acquisition (DAQ) input channels. For this experiment the signal conditioning circuit was connected to analog input channel 0 of the NI-USB 6008 data acquisition device, and the DAQ Assistant was used to configure the DAQ so that it could acquire signals from the AD595 thermocouple amplifier.

For Loop:

The For loop, like the While loop, allows code to be executed repeatedly by running a subdiagram a set number of times (N). The For loop is found on the Structures palette and can be placed on the block diagram. It is made up of the count (N) and iteration (i) terminals.

Trigger and Gate VI:

The Trigger and Gate VI is used to extract a segment of a signal; its mode of operation is either based on a start or stop condition or can be static.

Curve Fitting VI:

The Curve Fitting VI is used to calculate and determine the best-fit parameter values that best describe an input signal. It can be used for linear, non-linear, spline, quadratic and polynomial model types. It minimizes the weighted mean squared error between the initial signal and the best-fit response signal generated. For this experiment, initial guesses were made for the coefficients of the non-linear model used.
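As a point of comparison with the Curve Fitting VI, the sketch below performs the same first-order fit with SciPy's curve_fit. The synthetic data are an assumption standing in for the triggered segment of the acquired signal; the model and the initial guesses (15, 0.04, 29) are those used in the experiment.

# First-order fit equivalent to the Curve Fitting VI, using SciPy.
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, a, b, c):
    """First-order step response: a*(1 - exp(-t/b)) + c."""
    return a * (1.0 - np.exp(-t / b)) + c

# Synthetic stand-in for the triggered thermocouple segment (assumption).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.3, 300)                 # time base, seconds
v = first_order(t, 10.2, 0.04, 29.7) + rng.normal(scale=0.1, size=t.size)

p0 = [15.0, 0.04, 29.0]                        # initial guesses from the text
popt, _ = curve_fit(first_order, t, v, p0=p0)  # best-fit a, b, c
residuals = v - first_order(t, *popt)
mse = np.mean(residuals ** 2)                  # mean squared error, as in Table 2.1
print(popt, mse)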

Graphs:

A graph is a special type of indicator that accepts different data types and is used to display an array of input data or signals. In this case a waveform graph was used.

Numeric Function:

Numeric functions are used to carry out mathematical and arithmetic operations on numbers and to convert numbers from one data type to another. The Multiply numeric function was used to return the product of its inputs.

Figure 2.7: Configuration of Trigger and Gate

Figure 2.8: Curve fitting configuration

The window in figure 2.8 above shows the configuration for curve fitting; the configuration steps are as follows:

Model Type: Non-linear model was chosen because the signal observed is a first order response (non-linear) curve.

Independent variable: t was the independent variable chosen

Maximum iterations: the default maximum of 500 iterations was chosen.

Non-linear model: the equation for the non-linear model is a*(1 - exp(-t/b)) + c

Initial guesses: Values for a, b, and c were chosen to get the best fitting values for the curve. The values for a, b, and c are 15.000000, 0.040000, and 29.000000 respectively.

Figure 2.9: Transient response of the thermocouple for best fit 1st measurement

Figure 3.0: Transient response of the thermocouple for best fit 2nd measurement

Figure 3.1: Transient response of the thermocouple for best fit 3rd measurement

The diagrams in figures 2.9, 3.0 and 3.1 above show the transient response of the thermocouple after it was inserted in warm water, fitted to obtain the best-fit curve, i.e. to replicate the actual thermocouple response curve. With the Trigger and Gate Express VI, the initial delay seen in the three graphs was removed, making the thermocouple's signal response more representative and giving a better best-fit result. Carrying out multiple measurements to obtain the best-fit curve also reduces the uncertainty in the time constant and produces a better response curve than taking a single measurement. The table below shows the results from the three measurements with their residual and mean squared error values.

Model Parameters        First Measurement (1st)   Second Measurement (2nd)   Third Measurement (3rd)
a (°C)                  21.4671                   10.2373                    8.60708
b (sec)                 0.0232065                 0.039578                   0.0432934
c (°C)                  32.1068                   29.661                     29.4745
Residual                0.666461                  0.0357227                  0.124069
Mean Squared Error      0.431833                  0.0181012                  0.0227711

Table 2.1: Results from the three measurements, with best-fit parameters, residual and mean squared error for curve fitting.

From Table 2.1, the second measurement was observed to have the best-fitting curve, with the residual and mean squared error closest to zero compared to the 1st and 3rd measurements. The best-fit parameter results of the second measurement are therefore inserted in the non-linear model equation, which is given as:

y = a(1 - e^(-t/b)) + c    (3.6)

T(t) = ΔT(1 - e^(-t/τ)) + T0    (3.7)

Where,

a is the change in temperature of the thermocouple (ΔT)

b is the time constant (τ)

c is the initial temperature of the thermocouple (T0)

t is the time

Substituting the values of a, b, and c from the second measurement into the equation:

T(t) = 10.2373(1 - e^(-t/0.039578)) + 29.661    (3.8)

Equation 3.8 above is the non-linear model equation with best-fit parameters for the thermocouple response signal at every value of time (t).

To obtain the output voltage response, the change in temperature and the initial temperature values in equation 3.8 need to be converted to volts. This is done by dividing each temperature value by 100, since the AD595 output of 10 mV/°C means that 100 °C corresponds to 1 V. The resulting output voltage of the thermocouple is as follows:

V(t) = ΔV(1 - e^(-t/τ)) + V0    (3.9)

Where,

a = ΔV; b = τ; and c = V0

V(t) = 0.102373(1 - e^(-t/0.039578)) + 0.29661    (4.0)
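As a quick sanity check on equation (4.0), and purely as an illustration rather than part of the original analysis, the snippet below evaluates the fitted response at one time constant (t = b), where a first-order system should have completed about 63.2% of its rise.

# Numeric check of equation (4.0) at t = b (one time constant).
import math

a, b, c = 0.102373, 0.039578, 0.29661  # best-fit values from (4.0)
v_at_tau = a * (1 - math.exp(-1)) + c  # V(t) evaluated at t = b
print(round(v_at_tau, 4))              # ~0.3613 V: 63.2% of the 0.1024 V rise above V0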

Conclusion

The curve fitting experiment, using the Type K thermocouple, the AD595 signal conditioning device, and the NI-USB 6008 to acquire and display signals in LabVIEW, was carried out successfully. Curve fitting of the thermocouple's transient response was achieved by subjecting the thermocouple to a temperature step, acquiring and analysing the transient response, and using a non-linear model with best-fit values to replicate the response curve. This approach can be used to characterize the behaviour of an input response signal and to improve the efficiency of control systems.

References

Cimbala, J. M., 2013. Dynamic System Response. [Online] Available at: https://www.mne.psu.edu/me345/Lectures/Dynamic_systems.pdf [Accessed 10 March 2014].

Devices, A., 1999. Analog Devices. [Online] Available at: http://www.analog.com/media/en/technical-documentation/data-sheets/AD594_595.pdf [Accessed 2 March 2015].

Digilent Inc., 2010. Introduction to First Order Responses. [Online] Available at: http://www.digilentinc.com/Classroom/RealAnalog/text/Chapter_2p4p1.pdf [Accessed 6 March 2014].


Job evaluation

Job evaluation is defined as a method for determining the worth of a job in comparison to other jobs in the organization. To establish a justified pay structure for all employees of the organization, job evaluation provides a means to compare the relative worth of the work done in a particular job, in other words, the worth of a job.

Job evaluation is different from job analysis; rather, job evaluation is done after job analysis, which supplies information about the jobs concerned. Job analysis is defined as a systematic process of determining the skills, duties and responsibilities required for a particular job. Thus job evaluation is a method that starts from job analysis but ends when the worth of the job is determined in a way that ensures internal as well as external pay equity. In a competitive business environment, it is essential to maintain pay equity, otherwise the organization may lose its crucial talent.

Equity: a person perceives that the ratio of his/her outcomes to inputs equals the outcome-to-input ratio of a comparable other.

Overpayment Inequity (Positive Equity): the person's outcome-to-input ratio is greater than that of the comparable other.

Underpayment Inequity (Negative Equity): the person's outcome-to-input ratio is less than that of the comparable other.

Where,

Input: any value that a person brings to a job.

Outcome: any sort of benefit that an employee is awarded from his/her job.

Objectives of job evaluation

• To build a systematic, reasonable, deliberate structure of jobs based on their worth to the organization.

• To support the current pay rate structure, or to build one that provides internal equity.

• To help set pay rates that are comparable to those of similar jobs in other organizations, in order to compete in the market place for the best talent.

• To give a sound basis for negotiating pay rates when bargaining collectively with a recognized union.

• To guarantee the fair and equitable remuneration of workers in relation to their duties.

• To guarantee equal pay for jobs of comparable skill, effort, responsibility and working conditions, by using a framework that reliably and precisely assesses differences in relative worth among jobs.

• To create procedures for determining the grade levels, and the resulting pay ranges, for new jobs or jobs that have evolved and changed.

• To identify a ladder of progression for future development for all workers who are interested in improving their remuneration.

• To comply with equal pay legislation and regulations by determining pay differences according to job content.

• To provide a basis for merit or performance-related pay.

Characteristics of job evaluation

The essential goal of job evaluation is to determine the value of work; however, this is a value which varies from time to time and from place to place under economic pressures. The principal features of job evaluation are:

• It supplies a basis for compensation decisions founded on facts rather than on vague, subjective impressions.

• It endeavours to assess jobs, not individuals.

• Job evaluation builds on the output of job analysis.

• Job evaluation does not design the pay structure; it supports the framework by reducing the number of separate and divergent rates.

• Job evaluation is not done by a single person; it is carried out by a group of specialists.

• Job evaluation determines the value of the job. Further, the value of each of its aspects, such as skill and responsibility levels, is also studied in relation to the job.

• Job evaluation helps management maintain high levels of employee productivity and employee satisfaction.

Process of job evaluation

Job analysis describes the skills, duties and responsibilities required for a job. Job evaluation develops a plan for comparing jobs in terms of those things the organization considers important determinants of job worth. This procedure involves a number of steps, briefly stated here and then discussed more fully.

1. Job Analysis: The first step is a study of the jobs in the organization. Through job analysis, information on job content is obtained, together with an appreciation of the worker requirements for successful performance of the job. This information is recorded in the precise, consistent language of a job description.

2. Compensable Factors: The next step is deciding what the organization "is paying for", that is, what factor or factors place one job at a higher level in the job hierarchy than another. These compensable factors are the yardsticks used to determine the relative position of jobs. In effect, choosing compensable factors is the heart of job evaluation. Not only do these factors place jobs in the organization's job hierarchy, they also serve to inform job incumbents which contributions are rewarded.

3. Developing the Method: The third step in job evaluation is to select a method of appraising the organization's jobs according to the factor(s) chosen. The method should permit consistent placement of jobs containing more of the factors higher in the job hierarchy than those jobs containing less.

4. Job Structure: The fourth step is comparing jobs to develop a job structure. This involves choosing and assigning decision makers, reaching and recording decisions, and setting up the job hierarchy.

5. Pay Structure: The final step is pricing the job structure to arrive at a pay structure.

Merits of job evaluation

Job evaluation is a procedure for determining the relative worth of a job. In practice it is useful to the personnel manager for framing compensation plans. As a methodology, job evaluation benefits an organization in several ways:

• Reduction of disparities in the pay structure: it is found that people and their motivation depend on how well they are paid. Accordingly, the primary objective of job evaluation is external and internal consistency in the pay structure, so that inequalities in pay rates are reduced.

• Specialization: because of the division of labour and the resulting specialization, a large number of enterprises have hundreds of positions and many employees to perform them. An effort should therefore be made to define each job and accordingly set pay rates for it, which is possible only through job evaluation.

• Helps in the selection of employees: job evaluation data can be useful at the time of selection of candidates. The factors determined for job evaluation can be taken into account while selecting employees.

• Harmonious relationships between workers and management: through job evaluation, agreeable and harmonious relations can be maintained between employees and management, so that all kinds of pay-rate disputes can be minimized.

• Standardization: the process of determining pay differentials for different jobs becomes standardized through job evaluation. This helps bring consistency to the pay structure.

• Relevance of new jobs: through job evaluation, one can understand the relative value of new jobs in an organization.

Demerits of job evaluation

• Although there are many ways of applying job evaluation flexibly, rapid changes in technology and in the supply of and demand for particular skills create adjustment problems that may need further study.

• When job evaluation results in substantial changes to the existing pay structure, the possibility of implementing these changes in a relatively short time may be restricted by the financial limits within which the firm has to work.

• When there is a large proportion of incentive workers, it may be difficult to maintain a reasonable and acceptable structure of relative earnings.

• The process of job rating is, to some degree, inexact, because some of the factors and degrees cannot be measured with precision.

• Job evaluation takes a long time to complete, requires specialized technical staff, and is quite expensive.

Methods of job evaluation

Job Ranking:

Under this method, jobs are arranged from highest to lowest in order of their worth or merit to the organization. Jobs can also be arranged according to the relative difficulty of performing them. The jobs are examined as a whole rather than on the basis of the important factors in the job; the job at the top of the list has the highest value and, obviously, the job at the bottom of the list has the least value. Jobs are usually ranked in each department, and then the department rankings are combined to develop an organizational ranking. The variation in payment of salaries depends on the variation in the nature of the job performed by the employees. The ranking method is simple to understand and practise, and it is best suited to a small organization. Its simplicity, however, works to its disadvantage in large organizations, because rankings are difficult to develop in a large, complex organization. Moreover, this kind of ranking is highly subjective in nature and may offend many employees. Therefore, a more analytical and objective method of job evaluation is called for.

Job Classification:

According to this method, a predetermined number of job groups or job classes are established and jobs are assigned to these classifications. This method places groups of jobs into job classes or job grades. Separate classes may include office, clerical, managerial, personnel, and so on.

Class I – Executives: further classification under this category may be Office Manager, Deputy Office Manager, Office Superintendent, Departmental Supervisor, and so on.

Class II – Skilled workers: under this category may come the Purchasing Assistant, Cashier, Receipts Clerk, and so on.

Class III – Semiskilled workers: under this category may come Stenotypists, Machine Operators, Switchboard Operators, and so on.

Class IV – Unskilled workers: this category may comprise peons, messengers, housekeeping staff, file clerks, office boys, and so on.

The job grading method is less subjective than the ranking method described earlier. The system is straightforward and acceptable to almost all employees without reservation. One strong point of the method is that it takes into account all the factors that a job comprises, and it can be used effectively for a wide variety of jobs. The weaknesses of the grading method are:

• Even when the requirements of different jobs differ, they may be combined into a single class, depending on the status a job carries.

• It is difficult to write all-inclusive descriptions of a grade.

• The method oversimplifies sharp differences between different jobs and different grades.

• When individual job descriptions and grade descriptions do not match well, evaluators tend to classify the job using their subjective judgment.

The problems that foreign workers face in a host country

According to the latest figures from the CSO (Central Statistical Office), the number of foreign workers in Mauritius has been constantly increasing and now stands at approximately 39,032, comprising 27,408 men and 11,624 women. This expatriate population is mainly made up of workers coming from Bangladesh (18,429), India (9,105), China (4,656) and Madagascar (3,596).

The manufacturing sector employs the largest number of foreign workers, 29,846, while construction comes second with 6,070 workers. Last September, the Ministry of Labour took the decision to freeze the recruitment of foreign workers in the construction sector. In any case, the bar of 40,000 foreign workers in Mauritius will be reached by the end of the year, an increase of 20% compared to 2008. These workers are supposed to be treated the same way as local workers and to benefit from local welfare provisions. Although Mauritius has not signed the ICRMW (International Convention on the Protection of the Rights of All Migrant Workers), the country is required to apply its own laws; here it is the Employment Rights Act (2008) that stipulates the law concerning all work-related issues. When these protections fail, migrant workers sometimes rebel and voice their grievances through violent actions. These people migrate to another country in order to obtain better living conditions, but the promises made to them are rarely respected. From the point of view of management, employers often prefer migrants because they are seen as harder-working, more skilled and cheaper than local workers. It is not always easy for foreign workers to cope with the working conditions applied to them, and moreover their cultures and those of Mauritius are not always the same.

RESEARCH OBJECTIVES

The research objectives of this study are:

• To explore the different difficulties that expatriates face in the host country

• To explore foreign workers' opinions about the way they are treated

• To propose recommendations to organizations that employ foreign workers on how to improve their working and living conditions

LITERATURE REVIEW

Wilson & Dalton (1998) describe expatriates as, ‘those who work in a country or culture other than their own.’

Connerly et al. (2008) stated that 'many scholars have proposed that personal characteristics predict whether individuals will succeed on their expatriate assignment.' Due to globalization, there is a need for expatriates. Companies dealing with external workers should therefore find ways to solve the problems faced by these workers and make them comfortable in their daily life, so that they do not want to go back to their native country (Selmer and Leung, 2002).

PROBLEMS FACED BY EXPATRIATES

1. Culture Shock

Hofstede (2001) defined it as ‘the state of distress following the transfer of a person to an unfamiliar environment which may also be accompanied by physical symptoms’.

According to Dr Kalervo Oberg, expatriates are bound to experience four distinct phases before adapting themselves to another culture.

Those four phases are:

1. The Honeymoon Phase

2. The Negotiation Phase

3. The Adjustment Phase

4. Reverse Culture Shock

During the Honeymoon Phase, the expatriates are excited to discover their new environment. They are ready to set aside minor problems in order to learn new things. But eventually this stage ends.

At the Negotiation Phase or the Crisis Period, the expatriates start feeling homesick and things start to become a burden for them. For example, they might feel discomfort regarding the local language, the public transport systems or the legal procedures of the host country.

Then the Adjustment Phase starts. Six to twelve months after arriving in the new country, most expatriates start to feel accustomed to their new home and know what to expect. Their activities become routine, and the host country is now accepted as another place to live. The foreign worker starts to develop problem-solving skills and to change negative attitudes into more positive ones.

The final stage is called Reverse Culture Shock or Re-entry Shock. It occurs when expatriates return to their home country after a long period abroad and are surprised to find themselves encountering cultural difficulties.

There are physical and psychological symptoms of culture shock such as:

1. Physical factors

• Loss of appetite

• Digestion problems

2. Cognitive factors

• Feelings of isolation / homesickness

• Blaming the host culture for one's own distress

3. Behavioural factors

• Performance deficits

• Higher alcohol consumption

2. Communication/ Language Barrier

Communication is crucial to both management and employees. Sometimes, because of the language barrier, employees do not understand what is expected of them; they then tend to make mistakes at the workplace, and conflicts arise between the parties concerned. The language barrier is also the major obstacle when it comes to a change of environment for expatriates. These people usually feel homesick and lonely, as they are unable to communicate with the local people they meet, and the language problem becomes a barrier to creating new relationships. Special attention must be paid to specific body language signs, conversational tone, and linguistic nuances and customs. Communicative ability permits cultural development through interaction with other individuals. Language is the means that promotes the development of culture; language affects and reflects culture just as culture affects and reflects what is encoded in language. Language learners may be subconsciously influenced by the culture of the language they learn, which helps them feel more comfortable and at ease in the host country.

3. Industrial Laws

Laws are vital for the proper conduct of an activity such as welcoming expatriates to Mauritius. The Ministry of Labour, Industrial Relations and Employment produces a 'Guidelines for Work Permit Application' manual (February 2014) for organisations engaged in this activity. This manual describes the procedures that should be followed in the case of a Bangladeshi, Chinese or Indian worker. Any breach of the law automatically leads to severe sanctions.

For example, the Non-Citizens (Employment Restriction) Act 1973 provides, among other things, that 'a non-citizen shall not engage in any occupation in Mauritius for reward or profit or be employed in Mauritius unless there is in force in relation to him a valid work permit.' A request to the government should be made if an organization wishes to recruit foreign workers in bulk.

Expatriates are human beings, so they should have some fundamental rights in the host country. In Mauritius, the contract of employment for foreign workers stipulates all the necessary information concerning the expatriates' rights, conditions of work, accommodation and remuneration, amongst others. This contract is based on the existing labour law, and its contents are to a large extent the same as for local workers, with some slight differences in conditions of work. Mauritius has adopted good practices in relation to labour migration and has spared no effort in developing policies and programmes to maximize its benefits and minimize its negative consequences. However, there are still improvements to be made to the living and working conditions of foreign workers.

4. Living Conditions

The NESC published a report in February 2007 which advocated that foreigners working in the island should enjoy the same rights as local workers. In reality, many foreign workers suffer from bad working conditions. Some endure intolerable living conditions, sleeping in dormitories on benches without mattresses or in tiny bedrooms shared by many people. Those who try to voice their grievances, or those considered 'ring leaders', are deported.

In 2006, some workers from China and India who tried to form a trade union or to protest were deported. Peaceful demonstrations often turned into riots, which the police brutally suppressed.

In August 2007, some 500 Sri Lankans demanding better wages at the company Tropic Knits saw the Mauritian authorities deport 35 of them in response. At the Compagnie Mauricienne de Textile, one of the biggest companies on the island, employing more than 5,000 people, 177 foreign workers were deported after taking part in an 'illegal demonstration' about the lack of running water, the insufficient number of toilets and poor accommodation.

During 2011, a visit to two of Trend Clothing Ltd's premises by Jeppe Blumensaat Rasmussen showed several Bangladeshi workers living in inhumane conditions. Furthermore, workers were paid an hourly rate of Rs15.50, even less than the previous rate of Rs16.57, bringing the monthly salary to Rs3,500 to Rs5,000 depending on overtime. One woman even said that she had worked 43 hours and was paid for only 32. The Bangladeshi workers were also living in dormitories with several holes in the ceilings and signs of water damage next to electrical sockets. There is also the case of migrant Nepali workers who decided to leave Mauritius because of their bad working and living conditions. They were living in an old production space which had been turned into dormitories housing 34 migrant workers. There was no water connection in the kitchen and no running water to flush the toilets. They also did not receive any allowance, and their salary was reduced from Rs5,600 to Rs5,036 per month. On 9 June 2011, eight workers wrote a letter to their boss, giving their employer a month's notice.

In September 2013, more than 450 Bangladeshi workers at the textile company Real Garments in Pointe-aux-Sables went on strike demanding better working conditions. They also protested in the streets of Port-Louis and had gone to the Ministry of Labour to submit their claims the day before (L'Express, Mauritius). Fourteen Bangladeshi workers were identified as the main leaders and were deported by the authorities.

5. Foreign workers and Income

Foreign workers take the decision to leave their native country to work abroad with the aim of making more money and sending it to their families. However, they do not anticipate being paid less than what was promised to them before their departure to the host country. Some organisations pay them only half of what they were promised. Having already signed their contracts, they are forced to work hard for a low salary. Many Bangladeshis, Indians and Chinese choose to leave the host country when their contract ends, while the more determined ones stay and renew their contracts for further years. Despite their low-paid jobs, their situation is still better than in their native country, where they would be even more exploited or where life is far more difficult for them.

The Employment Rights Act (2008) stipulates that if a local worker 'works on a public holiday, he shall be remunerated at twice the national rate per hour for every hour of work performed.'

However, some expatriates who are required to work on a public holiday are usually paid the same amount as on an ordinary day. As human beings, they should be treated like any other worker, whether local or foreign, with the same rights and opportunities.

6. Unions

Unionism is about workers standing together to improve their situation, and to help others. Some unions are reactive, that is waiting for the employer to act and then choosing how to attack or respond and others are proactive, that is developing their own agenda and then advancing it wherever it’s possible. When unions and management fail to reach agreement, or where relations break down, the union has the option of pursuing industrial action through a strike, a go-slow, a work-to-rule, a slow-down, an overtime ban or an occupation.

However, expatriates are often not aware of the laws protecting their rights (e.g. many migrant workers are not informed that the law provides them with the same level of protection as Mauritian nationals), and employers have refused to recognize union representatives. It is also often difficult for unions to gain access to and organize foreign workers.

An ICFTU-AFRO (the African regional organization of the former International Confederation of Free Trade Unions) mission to Mauritius in February 2004 was told that the few men they saw were mainly supervisors who were said to be hostile to unions.

During 2006 there were a series of reports that workers from China and India who had tried to form a trade union or protest against their employers had been summarily deported. On 23 May 2006, policemen armed with shields and truncheons beat female workers from Novel Garments holding a sit-in in the courtyard of the factory in Coromandel protesting against plans to transfer them to other production units.

According to the MAURITIUS 2012 HUMAN RIGHTS REPORT (Section 7: Worker Rights, a. Freedom of Association and the Right to Collective Bargaining), the constitution and law provide for the rights of workers, including foreign workers, to form and join independent unions, conduct legal strikes, and bargain collectively. With the exception of the police, the Special Mobile Force, and persons in government services who were not executive officials, workers were free to form and join unions and to organize in all sectors, including the Export Oriented Enterprises (EOE), formerly known as the Export Processing Zone.

Alzheimer's disease (AD)

Alzheimer’s disease (AD) is the most common cause of dementia and chronic neurodegenerative disorder among the aging population. Dementia is a syndrome characterized by progressive illnesses affecting memory, thinking, behavior and everyday performance of an individual. Dementia affects older people, but 2% of people starts developing before the age of 65 years (Organization 2006). According to the Worlds Alzheimer Report 2014, 44 million of people are living with dementia all across the globe and its set to get doubled by 2030 and triples by 2050 (Prince, Albanese et al. 2014). Its estimated that 5.2 million Americans have AD in 2014 (Weuve, Hebert et al. 2014). This includes 200,000 individuals under 65 age have early onset of AD and 5 million people of age 65 and above (Weuve, Hebert et al. 2014). Women are affected more than men in AD and other dementias (Weuve, Hebert et al. 2014). Among 5 million people of above 65 years of age, 3.2 million are women and 1.8 million are men (Weuve, Hebert et al. 2014). The Multiple factors that leads to AD are age, genetics, environmental factors, head trauma, depression, diabetes mellitus, hyperlipidemia, and vascular factors. There are no treatments for AD that slows or stops the death and malfunctioning of neurons in the brain, indeed many therapies and drugs are aimed in slowing or stopping neuronal malfunction (Association 2014). Currently five drugs have been approved by the U.S food and Drug Administration to improve symptoms of AD by increasing the amount of neurotransmitters in the brain (Association 2014). It has been estimated that Medicare and Medicaid covered $150 billion of total health care for long duration care for individuals suffering for AD and other dementias (Association 2014).

Diagnostic criteria

The National Institute of Neurological and Communicative Disorders and Stroke and the Alzheimer's Disease and Related Disorders Association (NINCDS-ADRDA) in 1984 proposed criteria as follows: (1) a clinical diagnosis of AD could only be designated as 'probable' while the patient was alive, and could not be made definitively until Alzheimer's pathology had been confirmed post mortem (McKhann, Drachman et al. 1984); and (2) the clinical diagnosis of AD could be assigned only when the disease had advanced to the point of causing significant functional disability and met the threshold criterion of dementia (McKhann, Drachman et al. 1984).

In 2007, the IWG proposed criteria by which AD could be recognized in vivo and independently of dementia in the presence of two features (Dubois, Feldman et al. 2007). The first was a core clinical criterion requiring evidence of a specific episodic memory profile characterized by low free recall that is not normalized by cueing (Dubois and Albert 2004). The second was the presence of biomarker evidence of AD, including (1) structural MRI, (2) neuroimaging using PET (18F-2-fluoro-2-deoxy-D-glucose PET [FDG PET] or 11C-labelled Pittsburgh compound B PET [PiB PET]), and (3) CSF analysis of amyloid β (Aβ) or tau protein (total tau [T-tau] and phosphorylated tau [P-tau]) concentrations (Dubois, Feldman et al. 2007).

In 2011, the NIA and the Alzheimer's Association proposed guidelines to help pathologists categorize the brain changes associated with AD and other dementias (Hyman, Phelps et al. 2012). Based on the changes observed, they defined three stages: (a) preclinical Alzheimer's disease, (b) mild cognitive impairment (MCI) due to Alzheimer's disease, and (c) dementia due to Alzheimer's disease (Hyman, Phelps et al. 2012). In preclinical AD, individuals have changes in the cerebrospinal fluid but do not develop memory loss; this reflects the fact that Alzheimer's-related brain changes can begin 20 years before symptoms occur (Petersen, Smith et al. 1999, Hänninen, Hallikainen et al. 2002, Reiman, Quiroz et al. 2012). In MCI due to AD, individuals have noticeable changes in thinking that can be observed by family members and friends but do not meet the criteria for dementia (Petersen, Smith et al. 1999, Hänninen, Hallikainen et al. 2002, Reiman, Quiroz et al. 2012). Various studies show that 10 to 20% of individuals aged 65 or above have MCI (Petersen, Smith et al. 1999, Hänninen, Hallikainen et al. 2002, Reiman, Quiroz et al. 2012), and it is estimated that 15% progress from MCI to dementia and 10% to AD every year (Manly, Tang et al. 2008). In dementia due to AD, the individual has problems in memory, thinking and behaviour that affect routine life (Association 2014).

In 2014, the IWG proposed criteria maintaining the principle of high specificity; within this framework they classified AD as follows. (1) Typical AD can be diagnosed in the presence of an amnestic syndrome of the hippocampal type, which may be associated with different cognitive or behavioural changes, together with at least one of the following in vivo markers of AD pathology: decreased Aβ42 together with increased T-tau or P-tau concentration in CSF, or increased retention on amyloid tracer PET (Dubois, Feldman et al. 2014). (2) Atypical AD can be diagnosed in the presence of a clinical phenotype consistent with one of the known atypical presentations and at least one change indicating in vivo AD pathology (Dubois, Feldman et al. 2014). (3) Mixed AD can be diagnosed in patients with typical or atypical phenotypic features of AD and the presence of at least one biomarker of AD pathology (Dubois, Feldman et al. 2014). (4) Preclinical states of AD require the absence of clinical symptoms of AD (typical or atypical phenotypes) together with at least one biomarker of AD pathology, identifying an asymptomatic at-risk state, or the presence of a proven autosomal dominant AD mutation on chromosome 1, 14 or 21 for a diagnosis of presymptomatic AD (Dubois, Feldman et al. 2014). The criteria also (5) differentiate biomarkers of AD diagnosis from those of AD progression (Dubois, Feldman et al. 2014).

Neuropathology

Dr. Alois Alzheimer, a German physician, in 1906 observed pathological abnormalities in the autopsied brain of a woman who had suffered from memory problems, confusion and language trouble (Prince, Albanese et al. 2014). He found plaque deposits outside the neurons and tangles inside the brain cells (Prince, Albanese et al. 2014). The senile plaques and neurofibrillary tangles have thus become the two pathological hallmarks of AD (Prince, Albanese et al. 2014).

The histological hallmarks of AD in the brain are intracellular deposits of the microtubule-associated protein tau, called neurofibrillary tangles (NFT), and extracellular accumulation of amyloid β peptide (Aβ) in senile plaques (Bloom 2014). Aβ is derived from a larger glycoprotein called amyloid precursor protein (APP), which can be processed through two pathways, amyloidogenic and non-amyloidogenic (Gandy 2005). In the amyloidogenic pathway, β-secretase and γ-secretase proteolyse APP to produce soluble amyloid precursor protein β (sAPPβ) and a carboxyl-terminal fragment CTFβ (C99), yielding Aβ peptides (Gandy 2005). Alternatively, APP is proteolysed by the action of α- and γ-secretase, generating a soluble amino-terminal fragment (sAPPα) and a carboxyl-terminal fragment CTFα (C83), producing non-amyloidogenic peptides (Esch, Keim et al. 1990, Buxbaum, Thinakaran et al. 1998).

Figure 1. Amyloidogenic and non-amyloidogenic pathways of APP

APP is cleaved by β- and γ-secretases (amyloidogenic), releasing amyloid Aβ peptide(s), or by α- and γ-secretases (non-amyloidogenic); adapted from (Read and Suphioglu 2013).

The main Aβ species are Aβ42 and Aβ40. Under normal conditions, the concentration of Aβ40 in the central nervous system (CNS) is about 10-fold higher than that of Aβ42 (Haass, Schlossmacher et al. 1992). However, inflammation, stress and injury in the brain cause a dynamic change in Aβ40 and Aβ42 levels and lead to an upregulation of Aβ42. In AD, Aβ42 accumulates as misfolded protein in the extracellular space (Gurol, Irizarry et al. 2006).

Tau is a microtubule-associated protein (MAP), most abundant in the central and peripheral nervous systems, that helps in the assembly and stabilization of microtubules, which are crucial for cellular morphology and trafficking (Tolnay and Probst 1999, Iqbal, Liu et al. 2010, Cohen, Guo et al. 2011). NFTs are a major hallmark of AD in the brain. In AD, phosphorylation of tau leads to loss of neuronal function and death. Degeneration of synapses correlates strongly with cognitive decline in AD, and soluble oligomeric tau contributes to synapse degeneration (Morris, Maeda et al. 2011). Although the mechanisms by which the protein aggregates into NFTs are unclear, the number of NFTs shows a significant positive correlation with the progression of neurodegeneration and dementia in AD (Cohen, Guo et al. 2011; Arnaud, Robakis et al. 2006).

Figure 2. AD pathology

Deposition of Aβ and tau in neurons. The boxes show the different biomarkers used for examination; adapted from (Nordberg 2015).

Biomarkers

A biomarker is a characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes, or pharmacologic responses to a therapeutic intervention (Atkinson, Colburn et al. 2001). The neurodegenerative pathology of individuals is assessed using imaging and fluid biomarkers (Dickerson, Wolk et al. 2013).

CSF Biomarkers

CSF biomarkers play a major role in diagnosing probable AD; abnormalities in the CSF are found long before symptoms occur.

Amyloid beta (Aβ) is synthesized in the brain and diffuses into the CSF. In cognitively normal individuals, Aβ is present at moderate levels, whereas individuals suffering from AD have reduced Aβ42 in the CSF, which acts as a useful biomarker during diagnosis (Sunderland, Linker et al. 2003). Low levels of Aβ42 appear at least 20 years prior to clinical dementia in individuals with familial AD mutations (Ringman, Coppola et al. 2012). In addition, reduced levels of Aβ42 appear early in cognitively normal individuals, preceding MCI by years (Fagan, Head et al. 2009). Therefore Aβ42 cannot be used on its own as a specific biomarker to discriminate AD from other dementias; it should be combined with other biomarkers to determine the specific dementia.

Tau in the CSF relates to the progression of tau pathology in the cerebral cortex. An increased tau level in the CSF of AD patients reflects neuronal loss in the brain (de Souza, Chupin et al. 2012). As with Aβ42, elevation of tau can occur in cognitively normal individuals (Fagan, Head et al. 2009), hence it is important to consider other biomarkers for the differential diagnosis of AD. Moreover, phosphorylated (p)-tau has 85% sensitivity and 97% specificity in discriminating AD from other neurological disorders (Tan, Yu et al. 2014). P-tau is therefore superior to t-tau in differential diagnosis, helping to overcome the shortcomings of Aβ42 and t-tau (Buerger, Zinkowski et al. 2002). CSF t-tau and p-tau rise after Aβ42 initially aggregates and increase as amyloid accumulates (Buchhave, Minthon et al. 2012).

Imaging Biomarkers

Structural MRI

Structural MRI studies of subjects diagnosed with AD and MCI consistently show atrophy in the entorhinal cortex and hippocampus of the medial temporal lobe (MTL); together with cortical thinning in AD-signature regions, these are the MRI signs of emerging AD (Du, Schuff et al. 2001). MRI studies of cognitively normal subjects with a maternal history of AD show reduced volume of the MTL and precuneus (Berti, Mosconi et al. 2011). Voxel-based analysis of the whole brain indicates that structural MRI can identify brain atrophy in cortical regions up to 10 years before the clinical symptoms of AD, with the greatest effect in the MTL (Du, Schuff et al. 2001).

Positron Emission Tomography (PET)

PET is based on the principle of spontaneous positron emission by the nuclei of unstable radionuclides whose number of protons exceeds their number of neutrons (Granov, Tiutin et al. 2013). PET images the in vivo distribution of radiopharmaceutical substances with high resolution and sensitivity (Fahey 2003). The positron, a β-particle with positive charge, annihilates with a negatively charged electron, releasing two gamma photons of equal energy (511 keV) travelling 180 degrees apart so as to conserve momentum (Kukekov and Fadeev 1986, Fahey 2003).

The components of a PET scanner are a movable bed, detectors, a gantry and a computer. Each detector consists of multiple crystals attached to photomultipliers (Granov, Tiutin et al. 2013). The interaction between a gamma photon and a crystal produces scintillation, which induces an electric impulse in the photomultiplier that can be detected and processed by computer (Khmelev, Shiryaev et al. 2004). If two detectors register events in coincidence, the positron was emitted along the line connecting them, termed the line of response (LOR) (Fahey 2003). In most scanners, two detections are treated as coincident if they occur within about 10 nanoseconds of each other (Fahey 2003). The sensitivity of the PET scanner can be increased by arranging more detectors into a ring. The data acquired from the individual are stored in the computer in the form of a sinogram. Different reconstruction techniques, such as filtered back projection (FBP), iterative methods and OSEM, are used for reconstructing an image. Modern PET scanners use small LSO crystals, which permit high resolving capacity, high resolution, effective image reconstruction algorithms, and a field of view sufficient for single-stage scanning of the brain or heart (Granov, Tiutin et al. 2013).

The cyclotron, a particle accelerator, provides the radionuclides for clinical use. Heavy charged particles are accelerated to energies of 5-100 MeV in the cyclotron (Granov, Tiutin et al. 2013). The particle beam is focused on the target substance using magnetic lenses, and the target material is bombarded with the particles to generate the required radionuclide (Granov, Tiutin et al. 2013).

The requirements of a good tracer include high affinity for the target receptor, selectivity versus other receptors (Bmax/Kd of at least 10-fold, where Bmax is the density of the receptor and Kd is the equilibrium dissociation constant of the radiotracer), and good permeability (McCarthy, Halldin et al. 2009). A tracer must also be a poor substrate of P-glycoprotein if it is intended for imaging targets in the brain (Terasaki and Hosoya 1999). Low hydrogen bonding has been found to play an important role in predicting good PET tracers (McCarthy, Halldin et al. 2009). For a good tracer, the time to binding equilibrium should be long relative to the washout of non-specifically bound tracer, but short relative to isotope decay (McCarthy, Halldin et al. 2009).

Amyloid PET

PET imaging with the amyloid-binding agent Pittsburgh compound B (PiB PET) makes it possible to determine the β-amyloid (Aβ) load and its distribution over the brain, measurements that were previously restricted to postmortem studies. Longitudinal studies have provided evidence of a direct relationship between PiB PET and the likelihood of conversion from a clinical diagnosis of MCI to AD over three years (Klunk 2011). Since there is significant overlap between amyloid imaging and CSF Aβ42, researchers are attempting to identify the areas where these two biomarkers are equivalent and the areas where one measurement may hold unique advantages (Vlassenko, Mintun et al. 2011). In addition, the current hypothesis is that a higher amyloid burden, assessed with florbetapir 18F (18F-AV-45) amyloid PET, is related to lower memory performance among clinically normal older subjects (Sperling, Johnson et al. 2013).

FDG-PET

FDG-PET (2-deoxy-2-[18F]fluoro-D-glucose PET) is one of the neurodegeneration biomarkers included in the research criteria proposed for the diagnosis of AD by the International Working Group (IWG) in 2007 and 2010, and also in the new diagnostic criteria for AD from the National Institute on Aging-Alzheimer's Association (NIA-AA) (McKhann, Drachman et al. 1984, Dubois, Feldman et al. 2007, Dubois, Feldman et al. 2014). FDG-PET measures local glucose metabolism, a proxy for neuronal activity in the resting state, to assess cerebral function. AD individuals show reduced FDG uptake predominantly in the temporoparietal association areas, precuneus and posterior cingulate region (Minoshima, Giordani et al. 1997). These changes can be observed in subjects 1-2 years before the onset of dementia and are closely related to cognitive impairment (Herholz 2010). Although MRI is more sensitive in detecting and monitoring hippocampal atrophy (Fox and Kennedy 2009), FDG-PET is more sensitive in detecting neuronal dysfunction in neocortical association areas, and hence is well suited to monitoring the progression of the disease (Alexander, Chen et al. 2002).

Regional functional impairment of glucose metabolism in AD is related to the severity and progression of the different cognitive deficits (Langbaum, Chen et al. 2009).

INDIAN NATIONALISM (1757-1947)

Great Britain colonized India in the 1700s, when the East India Company gained control of the subcontinent in 1757; the Company ruled India without interference from the British government until the 1800s. With the quantity of raw materials available and the growing market for British goods, the British government began to increase its control, and in 1858, after the Sepoy Mutiny, it took complete control of India. The British subjugated and displayed racism against native Indians. Indian nationalist movements, such as those led by the Indian National Congress, had made attempts at self-rule but had never been entirely successful. The great champion of a free India, Gandhi, was influential in the Indian pro-independence movement. Known as the Mahatma, or the Great Soul, Gandhi pressed for change and an end to British colonization through a strict policy of non-violence, or passive resistance. The movement gained momentum after World War I, especially following the Jallianwala Bagh Massacre, in which a crowd that had gathered at Jallianwala Bagh in Amritsar for the annual Baisakhi fair was surrounded by the army at the orders of General Dyer, who opened fire on the crowd, killing hundreds of people. The aftermath of this massacre produced a general uproar, with crowds taking to the streets in many north Indian towns. The British used brutal repression, seeking to humiliate and intimidate people. People were flogged and villages were bombed, and this violence forced Gandhi to halt the movement.

A feeling of solidarity and nationalism was inspired by history and fiction, folktales and songs, popular prints and symbols. Abanindranath Tagore's painting of Bharat Mata and Bankim Chandra's song Vande Mataram united many individuals and communities. During the Swadeshi Movement, a tricolour (red, green and yellow) flag was designed; it had eight lotuses representing the eight provinces of British India and a crescent moon representing Hindus and Muslims. In 1921, Gandhi designed the tricolour Swaraj flag (red, green and yellow) with a spinning wheel at the centre. This flag represented the Gandhian ideal of self-help and became a symbol of resistance, instilling pride and uniting Indians.

However, despite the influence of Gandhi, India fell into turmoil. Hindus wanted an all-Hindu state, while Muslims, led by the Muslim League, wanted a separate state. Gandhi was assassinated in the midst of this conflict. In the end, Pakistan was formed as a separate Muslim state. Thus the strength and will of the common people both achieved Indian independence and tore India apart. The tale of Mahatma Gandhi and Indian nationalism is one of history's greatest ironies.

PAN AFRICAN NATIONALISM

Soon after the end of World War II, most European countries were in the process of ending their imperial control of Africa, and Pan-Africanism became dominant on the continent. Pan-Africanism is a nationalist movement that calls for the unity of all African nations. While it has had huge influence, for example through the African National Congress, it has never succeeded in uniting all of Africa. Disunity and many of the problems confronting Africa from the end of WWII to the present day can be blamed on European colonialism. Political corruption is rampant because European colonialists left without establishing stable governments. Ethnic tension exists because European borders were drawn with no thought given to the tribal system. Tribalism is one of the greatest obstacles for Africa because traditional rivals were contained within a single European-made border. A telling example of ethnic tension is the conflict between the Hutus and Tutsis, in which thousands on both sides were slaughtered and many more fled to Zaire to seek shelter; both Rwanda and Burundi had significant populations of Hutus and Tutsis, both traditional tribes. Notwithstanding these overwhelming problems, there have been some significant achievements where nationalism has brought about positive change.

The first Arab-Israeli conflict set two nationalist movements against one another. The War of Independence (1948-49) was the failure of the Arab world to prevent Israel from being formed as a sovereign Jewish state. This war resulted in Jerusalem falling under the control of the Israelis and the end of a proposed plan for a free Palestinian state. The Suez War of 1956 saw Nasser's Egypt lose control of the Sinai Peninsula, threatening the stability of the vitally important Suez Canal. The Six-Day War of 1967 saw many of the surrounding Arab nations attack Israel and then proceed to lose territory (the contested areas noted above) to Israel in under a week. The Yom Kippur War of 1973 was an Egyptian assault across the Sinai and became a Cold War event as the Americans and Soviets got involved. Nasser's successor, Anwar al-Sadat (pictured here), was the first Arab leader to recognize Israel as a nation. For this alone he was assassinated, effectively ending any attempts at lasting peace. The conflict continues today.

Ghana:

During the days of empire-building, the nation now called Ghana was called the Gold Coast, an English settlement. The nationalist leader Kwame Nkrumah appealed to the souls of the African people by renaming the obviously imperial European "Gold Coast" to something that harked back to the golden age of western Africa, the Empire of Ghana. A believer in the principles of Gandhi, he established autonomy for Ghana through civil disobedience and passive resistance. Through the determination and bravery of Nkrumah and the Ghanaian people, Great Britain left. To quote the words of Nkrumah, "No people without a government of their own can expect to be treated on the same level as people of independent sovereign states. It is far better to be free to govern or misgovern yourself than to be governed by anybody else . . ."

Kenya:

The situation in the British colony of Kenya was similar to Ghana's. The exploitation of Kenyan resources and the oppression of its people were the typical traits of British domination. The path to independence, however, was radically different. Kenya's nationalist leader, Jomo Kenyatta, initiated his movement by means of passive resistance. However, Great Britain refused to end its imperial rule of Kenya and imprisoned Kenyatta for guerrilla warfare he may or may not have called for. Regardless, the Mau Mau, Kenyan guerrilla fighters, resisted British troops until Great Britain released Kenyatta and left in 1963, with Kenyatta as the prime minister of a free Kenya.

South Africa:

The situation in South Africa was different. It had experienced colonialism, but the nation had gained self-rule at the turn of the century. White settlers called Afrikaners controlled the South African government and had imposed a social structure known as apartheid. Apartheid consisted of two social classes: a white upper class and a black lower class. The races were kept separate and unequal, with the black population suffering terrible abuses. Examples of this abuse include pass cards for blacks only, voting rights for whites only, and segregated reservations called Homelands.

The most celebrated of all African nationalist leaders, Nelson Mandela, spoke out against this discrimination and began his anti-apartheid movement. Because he took a stand against apartheid, Mandela was imprisoned for decades and not released until the early 1990s. South African president F.W. de Klerk freed Mandela and ended the racist system. In 1994, South Africa held its first free election and Mandela was elected president. Mandela and de Klerk jointly earned the Nobel Peace Prize for their efforts.

Canada's Current Immigration Policies

A policy is a plan or course of action that an organized body undertakes to guide decision making and other matters. Immigration policies are meant to guide the immigration of people into a country for whatever reason. Canada is a country in the northern part of the North American continent. It has ten provinces and three territories. Canada is a constitutional monarchy and a federal parliamentary democracy headed by Queen Elizabeth II. It is a bilingual state with a diverse cultural base owing to the large influx of immigrants to the country. The country's economy is among the world's largest, since it depends on its natural resources and developed trade networks.

Canadian society and culture have been shaped greatly by immigration. With its small population and large tracts of unoccupied land, Canada's immigration policy was fuelled by the need for expansion, with immigrants encouraged to settle in rural areas. In the early 20th century the country began to control the flow of immigrants using policies that excluded non-European applicants. In 1976, new laws removed the ethnic criteria, and Canada became a destination for immigrants from a variety of countries.

There are three categories of immigrants: the family class, which consists of those closely related to Canadian residents; independent immigrants, who are admitted on the basis of skill, capital and labour-market requirements; and refugees. When processing applications for settlement, immigration officers are instructed to give priority to family reunification and refugees before independent applicants with skills or capital but no family ties. Arrivals in the family category are often unskilled, or their skills do not match the needs of the community in which they settle, which can disrupt the local labour market. This can result in economic insecurity, which may create disappointment and hostility among the immigrants or among Canadians who feel threatened by the newcomers.

Canada's immigration policy encourages the dispersal of immigrants across the country. Current policy has attempted to encourage immigrants to settle in smaller communities in the less-populated provinces of Canada. The organizations within society involved in shaping immigration policies and regulations include churches, employers, organized labour groups, and community-based and ethnic organizations. Many of these organizations aim to promote family reunification and to secure financial adjustment schemes.

Canada's policy is non-discriminatory with respect to ethnicity; however, individuals suffering from diseases that pose a danger to the public, those with no clear means of financial support, and criminals and terrorists are excluded. An undetermined number of persons in these undesired categories have nevertheless gained entry through back doors, while others who were admitted legitimately on short-term visas choose to remain beyond the time permitted by Canadian law. The group entering the country illegally has grown in recent years and has become a major challenge to the government, especially at border crossings and airports. This group usually operates quietly and goes unnoticed until its members try to access some public service that brings them to the attention of government authorities. The government is working towards sealing the loopholes that have facilitated the admission of persons not authorized under the current regulations and legislation. Falsified claims by refugee applicants trying to avoid normal overseas screening and processing constitute one of the more serious problems confronting immigration officials.

In accommodating immigrants, Canada provides language training and access to Canada's national health care and social welfare programs. However, the immigrants of the 1980s did not match the economic success of those of the 1990s, and many have difficulty finding jobs matching their qualifications. Some immigrants are not fluent enough in either English or French to make use of their degrees, while other qualifications are not recognized in Canada. In employment, the incomes of Canadian-born workers rise in line with those of individuals of European origin, unlike non-white Canadians, who receive lower incomes.

The admission of highly skilled professionals to Canada from less developed countries has continued to provoke controversy, since the governments of the countries from which these immigrants originate complain of the poaching of people they cannot afford to lose. Amid the controversy, Canada has maintained the need for freedom of movement, against the charge that it should not encourage the outflow of trained individuals from regions that require their services.

For immigrants seeking asylum, Canada is known for having a fairly liberal policy. Any person who arrives in Canada can apply for refugee status at any border, airport, or immigration office inside the country. Canada will consider the claim of anyone who arrives and claims to be a refugee, even if they would not be considered one in other countries. The process is divided into two stages: a claim is submitted to Citizenship and Immigration Canada (CIC), which determines within three days whether the claim is eligible to be referred to the Immigration and Refugee Board, the body that makes the final determination as to whether the applicant will receive protected status. After a person has received refugee status, he or she can apply for permanent residency. This system has been criticized for encouraging backdoor applications and for posing a security threat, since applicants are free to move around while awaiting a determination.

Canadian policy is divided into two parts: temporary entry and permanent entry. Under temporary entry, one can apply while inside or outside the country. From outside the country, one applies for a visitor visa to visit Canada as a tourist or visitor; the purpose of such a visit should be to visit relatives, attend a business meeting, attend a conference or convention, take a pleasure trip, or participate in a cultural show. The second class is the student authorization, or student visa, granted to a person who wishes to come to the country to study as an international student. The third class is the employment authorization, or work permit, granted to someone who wishes to come to Canada and work for a Canadian company; it is referred to as a work permit visa in many countries. Under any of these classes, one can apply for an extension of one's visa while within the country. While in the country, one may apply for an immigrant visa as a conventional refugee (also referred to as political asylum); a work permit visa as a live-in caregiver (known as a domestic help); an immigrant visa as a spouse, granted on an application made if one marries in Canada while on a temporary visa; or an immigrant visa under humanitarian and compassionate grounds. A change of visa status may lead to a permanent immigration visa for Canada.

One can apply for permanent immigration to Canada under three categories while outside Canada. In the independent class, assessment is done on a point system. It is a very popular class, also called the professional class or skilled worker class; this category is based on an individual's qualifications, work experience, and knowledge of English or French. The next class is the entrepreneur, investor or self-employed class, also known as the business migration class. The entrepreneur and self-employed classes are for individuals who wish to start a business in Canada, while the investor class is for those who do not wish to start a business. Applying for an immigrant visa to Canada under the family class is open to those who have close relatives in Canada under family sponsorship.
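To make the mechanics of such a points-based assessment concrete, the sketch below shows how a test of this kind works in general. It is a hypothetical illustration only: the factor names, weights, caps and pass mark are invented for the example and are not the actual Canadian selection criteria.

```python
# Hypothetical sketch of a points-based assessment for the independent
# (skilled worker) class. All weights, caps and the pass mark are
# illustrative, not the real Canadian criteria.

def assess_independent_class(education_points: int, experience_years: int,
                             language_points: int, pass_mark: int = 50) -> bool:
    """Sum capped points for each factor and compare to a pass mark."""
    total = 0
    total += min(education_points, 25)       # qualification (capped)
    total += min(experience_years * 3, 15)   # work experience
    total += min(language_points, 28)        # knowledge of English/French
    return total >= pass_mark

# Example: an applicant with strong credentials and language skills
print(assess_independent_class(education_points=25, experience_years=5,
                               language_points=24))  # True under these weights
```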

Canadian citizens and permanent residents may apply to sponsor their relatives under the family class and may privately sponsor refugees. Another application, called a returning resident permit, is made by a permanent resident who wishes to stay outside Canada for more than six months and wants to return. A person can be granted Canadian citizenship provided he or she has been a permanent resident of Canada for more than three years. When applying for proof of citizenship, also called a citizenship certificate, the applicant may do so while inside or outside Canada.

Canada is currently a country of choice for many people from all over the world. That may not be the case in the future, especially for highly skilled people. The current policies have both positive and negative effects on Canadian society. Some of the positive impacts include the refocusing of the federal skilled worker program, an initiative to bring skilled trades to the country along with the jobs and investments they carry.

Another positive change is increased protection for caregivers who come into the country for nanny or housekeeping jobs. Those who go to foreign countries are sometimes abused by their employers and end up working in deplorable conditions: working long hours without rest, being deprived of days off, and having vital documents such as passports confiscated. Some also face sexual harassment, which is against the law. Such immigrants face difficult conditions yet cannot report them, or cannot get help if they do. The current policies have therefore come in handy to protect these individuals from such mistreatment. Permanent resident status is also to be granted to eligible students: students who apply for student visas and perform exceptionally well will be granted permanent residency in Canada after completing their studies. This enables students to acquire citizenship and settle in the country, ensuring the retention of skilled people to work towards the growth of the economy.

The current policies have helped address Canada's short-term labour-market needs, since the country's small population cannot meet its labour requirements. Immigrants resolve a labour shortage the country could not otherwise address.

These policies also have their negative sides. In the long term, Canada may be viewed as no longer being as welcoming as it once was. One example is the decision to wipe out immigration application backlogs legislatively: applications from immigrants seeking visas have been denied outright by immigration officers, preventing meaningful contributions to either the job market or the education sector. Another is the suspension or delay of family sponsorships, which prevents families from reuniting; this will deter those who might migrate to Canada for fear of being isolated from their families.

Reliance on temporary foreign workers to meet labour-market needs has affected the attitudes of skilled workers who arrive in the country and are unable to find jobs. Canadian citizens at times feel insecure about immigration, since they view newcomers as a threat to their jobs and opportunities. Hostility against immigrants has been reported, to the extent that some have lost their lives, and organized crime has been directed against immigrants to instill fear in them.

Tightened citizenship requirements have locked out many people who have genuine reasons to apply for citizenship, including skilled workers and potential job creators. The jobs they would create could boost the country's economy, but because they are locked out, those opportunities are shut out as well. A list of countries tagged as safe, from which refugee claims are checked rigorously to determine whether they are true, has affected those who genuinely seek to immigrate as refugees.

Mandatory detention of asylum seekers on arrival, out of fear of terrorist or criminal activity (especially after the 9/11 attack on the US), means that asylum seekers are not allowed to move freely before their pending applications are determined. This usually creates unnecessary anxiety for the asylum seekers.

These policies are made at a dizzying speed, and their breadth is likely not to be understood by the public. The way the policies interact with each other is also an issue that may impact society negatively.

Conclusion

The current policies on immigration have affected Canadian society in both negative and positive ways. Some have been very fruitful for the growth of the economy and the cultural life of the country: the country's culture has been made more diverse by the varied origins of immigrants, and economic growth has been aided by the influx of highly qualified individuals into the job market and the arrival of investors and job creators.

Canada has, however, been accused of poaching the best brains from less developed and developing countries worldwide. In its defence, it has said that there is freedom of movement for all people.

In general, the current immigration policies have helped in several ways to better society, but they have also introduced some problems for the people living in Canada.

Sex Offenders in the Community

The United States government has rules in place to register the names of sex offenders, but unfortunately seems to overlook the issue of sex offenders living near children. In that respect, there is an injustice in the fact that sex offenders live on the same streets as children without parole officers making this information explicit to parents. There are many child molesters who, even if they hold professional jobs, work near minors. The government has laws stating that a sex offender must be registered, but there are no laws saying that a sex offender cannot live around children. I do not agree with the idea that sex offenders should be allowed to live in communities near children. In order to keep our children safe, child molesters should be banned from living and working near a school.

Realistically, allowing sex offenders to continue living near school systems enables them to target individuals, the majority of whom are adolescents. Unknowingly, I worked with a sex offender when I was sixteen. Between the ages of sixteen and eighteen, a different sex offender targeted me. Any child could find herself in a situation where she is vulnerable and unaware of the danger. As a young person, one should not have to worry about whether or not he or she will be a victim of rape or sexual assault. I was fortunate enough not to be a victim, but I could have been. There was another situation where I had to stay with my grandparents for a period of time because my parents were fearful of the child molester who lived nearby. These are perfect examples of why we need laws that regulate an offender's proximity to young children. Individuals should not have to be frightened in their daily lives.

According to Understanding Child Molesters, there are a number of ways in which a sex offender may be disciplined, including probation, parole, and incarceration. When an individual decides to assault another person, there are consequences, such as having a parole officer, facing felony or misdemeanor charges, and registering as a sex offender, among many other methods of discipline. Even though a sex offender has to register every year, he is able to continue living in the community. This registration is compiled into an online database, but some individuals may have difficulty accessing this information due to a lack of technology. Sometimes sex offenders even have jobs where they work with minors, and this should be prevented to minimize the risk of a recurring crime.

The Washington Department of Corrections goes into further detail regarding sex offenders who live in our communities. Sex offenders must let their parole officers know where they live, and the parole officers must visit them regularly. Parole officers must be notified if a sex offender moves, and the parole officer must also approve of where the offender lives. From this point, sex offenders must become registered and allow the neighborhood to know that they are living within the community ('Rules'). Registration alone is not sufficient, because having their names on a list will not prevent sex offenders from committing future sexual assault.

After a person becomes known as a sex offender, he is subject to strict supervision. A parole officer will monitor the offender for a period of time determined by the court system. The offender will then register as a child molester, and continue to do so indefinitely. By order of the court, he cannot leave the state. A parole officer will determine whether or not the sex offender is allowed to live in a particular location. If the offender decides to move, he must also get the approval of the parole officer ('Rules').

The offender's parole officer will ensure that the offender does not have possession of a computer or any other forms of media. Having possession of magazines, computers, televisions, phones, or similar items could enable the offender to access pornography. The offender must also not attend adult clubs or similar events. Essentially, the offender must stay away from any type of pornography or sexual setting. If an offender decides to date or marry, the potential partner must be notified of the offender's criminal history ('Rules').

In addition to notifying a potential dating or marriage partner, a sex offender must also alert family and friends of the incident. Once a person becomes labeled as a sex offender, the neighborhood must be made aware that a sex offender is living in the community ('Rules'). The public is only notified via a website they can visit if they choose, but this information should be presented to them more explicitly: there are many individuals who do not know how to use a computer. A parole officer should visit the neighbors to discuss safety protocol and other warnings. The offender's address should be shared with all of the local residents, as well as with individuals who find the offense report on the internet. Having the offender's information online is not sufficient; in order to protect children, we must make greater efforts to notify the community. Making sure that the public is aware of sex offenders in the community is crucial, and may save the lives of many children.

Sex offenders may be required to attend counseling sessions for a duration determined by the court system. The offender must continue to update the parole officer to ensure proper attendance of the sessions. A polygraph may be used on the offender, if necessary. He is required to submit to the polygraph, as well as to any drug tests that may be administered. Accordingly, the offender must refrain from consuming alcohol or using drugs. Taking a polygraph and being drug-free are required to show that the offender is making changes in his life. Ideally, these requirements ensure that the offender will not sexually assault another child.

The offender cannot, by any means, contact the victim of the crime. Possible contact with the victim is one of the reasons why the offender cannot have a phone or a computer. Offenders cannot have any method of communication with the victim, yet they still live in communities, near children. Since the offenders cannot contact their victims, it is essential that they not be able to contact other innocent children. Seeking Justice in Child Sexual Abuse explains that 'Child abuse is one of the most difficult crimes to detect and prosecute, in large part because there often are no witnesses except the victim' (Staler 3). Unfortunately, many times when a minor is sexually assaulted, there are no witnesses. Having a sex offender near school districts puts more children at risk of harm, and ultimately there may not be any witnesses.

In Civil Disobedience, Thoreau argues that breaking laws is sometimes necessary. Thoreau justifies his argument by saying that breaking the law can often be the only thing that changes the mindsets of individuals. In a parallel to Thoreau's theory, we must break the misconception that having sex offenders living near children is perfectly acceptable. Change will not happen unless we, as a community, do something drastic to make it happen (Thoreau).

Unfortunately, children are still placed in danger when sex offenders live near school systems. In Martin Luther King Jr.'s Letter from Birmingham Jail, he comments that his children are afraid of their surroundings. In today's society, children are still afraid of their environment. Martin Luther King Jr. held that one should break a law if he or she deems it 'unjust' (King). I completely agree with King, and in this situation, I feel that it is completely unjust to have sex offenders live near children. Ultimately, we cannot simply remove sex offenders from communities, because they must live somewhere. But, as Martin Luther King Jr. was calm and rational in his approach, I believe that is the best method for the nation to make a difference.

Martin Luther King Jr. and Henry David Thoreau are very similar in the sense that both wanted to take a stand for the people and, essentially, do what is morally right. Both agreed that if a law is unjust, it needs to be broken, and both stayed determined to break the laws they deemed 'unjust.' Neither man was willing to give up on what he believed, yet both faced imprisonment for doing the right thing. If these men could be incarcerated for doing the right thing, perhaps sex offenders can face more severe punishments for committing horrendous acts against children (Thoreau, King).

Both of these men are true inspirations as to how we can handle our disagreements in a rational manner. I do not feel comfortable having sex offenders live near children. We cannot completely remove child molesters from our streets, but there are many other ways to reduce the amount of rape and sexual abuse. The first possibility is for sex offenders to stay imprisoned indefinitely. Yes, that is an unfortunate outcome, but children who are raped are emotionally scarred for the rest of their lives. So, it may be rational for sex offenders to stay in prison indefinitely.

Another alternative may be a ban under which sex offenders cannot live within a certain radius of schools. Either way, a list of sex offenders will still be posted to notify the community. But, in my proposal, there would be more ways of warning everyone. These registries would be made abundantly clear, even to those who may not have access to the existing lists: not everyone has access to the internet or knows how to operate a computer. Perhaps, in addition to being posted online as they are now, the lists could also be given to each homeowner in a more noticeable manner. Advising the community is the first step in improving this situation. Maybe we cannot eliminate sex offenders from our streets, but we can take better precautions.

I believe that, in order to protect innocent adolescents, it is necessary to take a stand. We, as a community, should make every effort to ensure that children are not put into situations where they are harmed. No child should be raped, sexually assaulted, or murdered. There are simple changes that this country can make at this very moment to ensure the safety of children: law enforcement can improve its methods of notifying the public that a sex offender is present, and sex offenders can be banned from living close to a school system or be incarcerated indefinitely. Child sexual abuse is a very serious issue that we could possibly eliminate, or at least reduce the number of its victims.

Facebook as a learning platform

Abstract

The past decade has seen a growing popularity of social networking sites, and out of all those available, Facebook is the one that stands out for being unique and offering a range of user-friendly features. The site has frequently topped the rankings, with record numbers of memberships and daily users. Facebook is often considered a personal and informal space for sharing pictures, information and webpages, forming 'Groups', participating in discussions and debates, and commenting on wall posts. The aim of this paper is to explore the use of Facebook as a learning and teaching tool. It highlights some of the theoretical debates and existing research in order to understand the effectiveness of the site as an informal and learner-driven space, and the ways in which it empowers students and stimulates their intellectual growth. The conclusion highlights the ongoing contested nature of technological advances and their influence on traditional ideas of teaching and learning.

Keywords: Facebook; Situated Learning Theory; Community of Practice; Connectivist Approach; Personal Learning Environment; Informal Learning; Critical Thinking; Creativity; Communicative Confidence; Collaborative Learning.

Introduction

Over two decades ago, the theorists Jean Lave and Etienne Wenger (1991) introduced a theory of learning called 'situated learning' and the concept of the community of practice (CoP from here on) to describe learning through practice and participation. A CoP can be characterized as a group of individuals who share a common interest and a desire to learn from and contribute to the community. Wenger (2010) elaborated the idea by stating that:

Communities of practice are formed by people who engage in a process of collective learning in a shared domain of human endeavor: a tribe learning to survive, a band of artists seeking new forms of expression, a group of engineers working on similar problems, a clique of pupils defining their identity in the school, a network of surgeons exploring novel techniques, a gathering of first-time managers helping each other cope. In a nutshell: Communities of practice are groups of people who share a concern or a passion for something they do and learn how to do it better as they interact regularly.

According to Wenger, a CoP needs to meet three essential characteristics: domain, community and practice. The CoP has an identity defined by a shared domain of interest. Membership therefore implies a commitment to that particular domain, and a shared competence that distinguishes members from other individuals (namely non-members). The community then becomes a way through which members can pursue interest in their domain, engage in collaborative activities and discussions, provide assistance to each other, and share or disseminate information. They build a co-operative relationship that enables them to learn from each other. Wenger terms the members of a CoP practitioners: as they interact, they develop a shared repertoire of resources, experiences, stories, tools, and ways of addressing recurring problems. This, in short, can be called a shared practice, which takes time and sustained interaction to develop. It is the combination of these three components that constitutes a CoP, and it is by developing them in parallel that one cultivates such a community (ibid).

Social networking sites are often seen as promoting CoPs. In simple terms, these sites can be defined as: 'web-based services that allow individuals to (1) construct a public or semi-public profile within a bounded system, (2) articulate a list of other users with whom they share a connection, and (3) view and traverse their list of connections and those made by others within the system' (Boyd and Ellison, 2008: 211). What makes social networking sites unique is not whether they allow individuals to meet new people, but rather that they enable users to articulate and make their social networks visible (ibid). Therefore, social networking can be seen as 'the practice of expanding knowledge by making connections with individuals of similar interests' (Gunawardena et al. 2009: 4). Researchers have frequently concluded that social networking sites are at the core of what is described as an online CoP (Watkins and Groundwater-Smith, 2009).

According to Wong et al. (2011), growth in technology and social networking sites has contributed to an increase in opportunities to operate in an improved learning environment through enhanced communication and the incorporation of collaborative teaching and learning approaches. Amongst all the social networking sites, Facebook (FB from here on) is the one that stands out the most. There are a number of reasons why FB can be used for building an online CoP, and ways in which its features are considered unique and suitable for Higher Education purposes:

1) Ability to create a 'Group': FB is equipped with dynamic features, such as messaging and the ability to post videos, weblinks and pictures. The Group, however, is one of the most powerful features on the site, and it can encourage and enhance collaborative learning. Learners can create a Group or join an existing Group related to their interest, and they can use the site's features to share information and perform a variety of tasks. FB Group features can build an online CoP, as they meet the three fundamental components of communities (i.e. domain, community and practice) (ibid: 319).

2) Share information: FB features such as Groups, Chats and Docs enable the sharing of information. Learners can form groups for a specific purpose, post messages, hold discussions and debates, and share resources on a specific domain within the group. The members of a CoP are practitioners, and they can develop a shared repertoire of resources (ibid: 319).

3) Encourage collaborative tasks: the 'Docs' feature on the FB site can help with collaborative tasks, allowing Group members to work collectively (if required). Any or all group members can view, edit, add or remove sections of the 'Doc' (ibid: 319).

While the above shows the ways in which FB can be useful in building an online CoP, a more careful analysis is required in order to establish its usefulness as a learning and teaching tool in Higher Education. Therefore, the rest of this paper will draw upon theoretical debates and evidence from within the literature to explain the ways in which FB could be a powerful tool: one that could enhance learning and criticality amongst learners, and also boost their communicative confidence.

Why Facebook?

Created in 2004, by the end of 2013 FB was reported to have more than 1.23 billion monthly active users worldwide, and 24 million Britons logged on to the site each day (The Guardian, 2014). Due to its ease of use and its availability in the form of mobile applications, FB has now become an integral part of its users' social lifestyles: conventional estimates suggest that a typical user spends around 20 minutes a day on the site, and two-thirds of users log in at least once a day (Ellison et al. 2007). Since its creation, FB has been subjected to immense academic and scholarly scrutiny, especially regarding its uses within educational settings. The initial literature largely focused on the negative aspects associated with its use, such as identity presentation and lack of privacy (see Gross & Acquisti, 2005). It was argued that the amount of information FB users provide about themselves, the (somewhat) open nature of that information, and the lack of privacy controls could put users at risk online and offline, e.g. through bullying, stalking and identity theft (Gross and Acquisti, 2005). However, constant changes made to the privacy settings have since addressed many of these concerns: users can control the release of information by changing their privacy settings. Issues surrounding student perceptions of lecturer presence and self-disclosure (Mazer, Murphy, & Simonds, 2007) and inconsistent patterns of use were also highlighted as potential causes for concern (Golder, Wilkinson, & Huberman, 2007). However, the positive effects of social networking tools in teaching and learning soon took precedence, as these computer-mediated communication modes are often seen as lowering barriers to interaction and encouraging communicative confidence amongst students. For instance, during a qualitative study at Yale University, members of staff praised FB for breaking down the barriers between themselves and students, and it also encouraged students to feel part of the same academic community (mentioned in Bosch, 2009). Similarly, a study conducted by Ellison et al. (2007) explored maintained social capital, which assesses one's ability to stay connected with members of a community. They concluded that FB usage amongst students is linked to psychological well-being, and that it could especially benefit students with lower self-esteem and low life satisfaction. It could also trigger a process whereby goal attainment amongst students is significantly increased.

These uses of FB in Higher Education, and as a tool for maintaining social capital, can be contrasted with its value as a learning environment. Selwyn (2009) has strongly cautioned against the use of FB for teaching and learning, as students might be reluctant to use it for learning purposes, shifting its focus away from being an academic tool towards being purely a site for socialisation and the sharing of mundane information. Selwyn presented an in-depth qualitative analysis of the FB 'wall' activity of nearly 1000 students in a British educational establishment, and his study offered a very pessimistic conclusion. He noted that students did not use the site for educational purposes: their interactions were limited to negative comments on learning, lecture or seminar experiences, casual comments about events, the exchange of factual information about teaching and assessment requirements, appeals for moral support around assessment or learning, and even presenting oneself as academically incompetent and/or disengaged (2009: 157). The evidence from this study suggests that FB in Higher Education must be approached with great caution, and that lecturers need to use it in a considered, strategic, logical and objective manner (ibid).

It is likely that FB could clash with traditional pedagogical models. Nevertheless, it can provide channels for informal and unstructured learning. For instance, Bugeja (2006: 1) suggested that social networking offers the opportunity to 're-engage' individuals with learning and to promote 'critical thinking', which is one of the traditional objectives of education (explained further in subsequent paragraphs). Siemens's (2005) connectivist approach also recognises these impacts of technology on learning and ways of knowing. According to him, learning in the digital age is no longer dependent on individuals obtaining, storing and retrieving knowledge, but instead relies on the connected learning that occurs through interaction with various sources of knowledge and participation in communities of common interest, including social networks and group tasks (Brindley et al. 2009). The shift of focus to the group and the network as the epicentre of learning relies on a concept of learning based on 'exploration, connection, creation and evaluation within networks that connect people, digital artefacts and content' (Manca and Ranieri, 2013: 488). This type of learning through socialisation can foster student interest in the subject material. Duffy (2011) proposed that FB could be used for teaching and learning, as it enables students to share knowledge and information with 'Group' members and builds associations between them. Duffy (2011) further argued that FB provides a range of educational benefits by 'allowing students to demonstrate critical thinking, take creative risks, and make sophisticated use of language and digital literacy skills, and in doing so, the students acquire creative, critical, communicative, and collaborative skills that are useful in both educational and professional contexts' (p. 288). This in turn will also help to achieve the Abertay Graduate Attributes, encouraging the development of students' intellectual and social capacity, giving them tools to find creative solutions to real-world problems, and enabling them to work within complex and interdisciplinary contexts. It could trigger intellectual, communicative and collaborative confidence amongst students, train them to take creative risks and help them broaden their knowledge base.

What is particularly fascinating about FB is the fact that it encourages the creation of a Personal Learning Environment (PLE), an emerging pedagogical approach for integrating formal and informal learning, supporting self-regulated learning, and empowering students intellectually (these values are also outlined in the Abertay Strategic Plan). According to Attwell (2010):

PLEs are made up of a collection of loosely coupled tools, including Web 2.0 technologies, used for working, learning, reflection and collaboration with others. PLEs can be seen as the spaces in which people interact and communicate and whose ultimate result is learning and the development of collective know-how. A PLE can use social software for informal learning which is learner driven, problem-based and motivated by interest – not as a process triggered by a single learning provider, but as a continuing activity.

PLEs are spaces for the modern learner to create, explore and communicate, and they are characterised as an approach to learning rather than a set of computer-assisted applications (Dalsgaard 2006: 2). The use of PLEs can help to reinforce classroom learning by extending communication beyond classroom hours (while not simply recreating the classroom outside the classroom), and by encouraging thinking about topics beyond the weekly seminar sessions, both individually and in collaboration with classmates, through posting materials (such as files, website links and notes) and leaving comments. This type of engagement can result in the development of (informal) communities of learning. Collaborative learning, in turn, can lead to deeper-level learning, critical thinking, and shared understanding (Kreijns, Kirschner and Jochems, 2003). A study conducted by Churchill (2009) highlighted that online blogs can foster a learning community and make learners feel like an important part of the classroom. The best is achieved from such blogs when they are designed to facilitate student access to course material, the posting of reflections on learning tasks, and commenting on peer contributions. Taking into account that FB is one of the most popular networks and methods of community building through which students today communicate, it can prove a useful tool in collaborative student-led learning (and may prove equally or more beneficial than blogs). Downes (2007) argues that FB is distinctive when compared to other forms of computer-mediated communication because it has stronger roots in the academic community. One report by the UK government body for technology in learning lists several potential uses of FB in education: developing communities of practice, communication skills, e-portfolios, and literacy, all of which are essential aspects of the Abertay Graduate Attributes.

FB can be used not only to gain knowledge and information, but also to share information as and when needed. McLoughlin and Lee (2007; 2010) have pointed out that 'learning on demand' is becoming a way of life in modern society, with learners constantly seeking information to solve a problem or to satisfy their curiosity. Learners should therefore not be considered passive information consumers, but active co-producers of content. This also makes learning highly independent, self-driven, informal and an integral part of university life (ibid). Formal learning is described as highly structured (the kind that happens in classrooms), whereas informal learning happens through observation, listening to stories, communicating with others, asking questions, reflecting and seeking assistance. Informal learning rests primarily in the hands of the learner, and the use of FB could allow learners to create and maintain a learning space that facilitates self-learning activities and connections with classmates and other academic and educational networks (ibid). However, informal learning outside the classroom should be considered a continuum rather than an either/or dichotomy (Attwell, 2007). Informal learning can be used to supplement formal learning (not substitute for it), and the PLE as a pedagogical tool should be viewed as an intentional merger of formal and informal learning spaces.

PLEs are increasingly effective in addressing issues of learner control and personalization that are often absent from university Learning Management Systems, such as the Virtual Learning Environment (VLE) or Blackboard (Dabbagh and Kitsantas, 2011). VLEs do not accommodate social connectivity tools or personal profile spaces, and they tend to replicate traditional models of learning and teaching in online environments. They create a classroom outside the classroom, which may explain why educators 'can't ... stop lecturing online' (Sheely, 2006). Also, VLEs are largely considered tutor dissemination tools (for lecture notes, readings and assessment-related information) rather than student learning tools. University faculty and administrators control VLEs, and learners cannot maintain a learning space that facilitates their own learning activities and connections with fellow classmates (Dabbagh and Kitsantas, 2011: 2). When FB is employed as a learning tool, it moves away from this hierarchical form of learning and empowers students through designs that focus on collaboration, connections and social interactions. It is much more dynamic and evolved in this sense.

It has long been argued that VLEs have had only a relatively slight impact on pedagogy in higher education, despite their commercial success (Brown, 2010). FB, however, has the potential not only to fundamentally change the nature of learning and teaching but, through the creation of learner-controlled devices, to challenge the role of traditional institutions in a way that previous technologies could not. Brown poses a crucial question regarding VLEs (such as Blackboard): whether it is 'reasonable to wonder how much longer the return on investment will stand up to scrutiny' (Brown 2010: 8).

Conclusion

FB is increasingly becoming a popular learning platform with true potential in HE. A FB 'Group' can facilitate learning through increased interaction between students and staff. Research so far (though preliminary in nature) indicates that FB can be used to enhance literacy, critical thinking, and collaborative and communicative skills amongst students. Some researchers have argued that social networking sites such as FB could offer 'the capacity to radically change the educational system ... to better motivate students as engaged learners rather than learners who are primarily passive observers of the educational process' (Ziegler 2007: 69). However, this highly optimistic view is strongly contested by others, who have raised grave concerns about heightened disengagement, alienation and disconnection of students from education, and about the detrimental effect that FB may have on 'traditional' skills and literacies (Brabazon, 2007). Academics have feared that FB could lead to the intellectual and scholarly 'de-powering' of students, leaving them incapable of independent critical thought. According to Ziegler (2007: 69), sites such as FB could lead to 'the mis-education of Generation M' (cited in Selwyn, 2009), and despite its popularity as an innovative educational tool, studies have indicated that it may distract learners from their studies and become purely a tool for socialisation (ibid). The use of FB remains controversial, and further research is needed in this area to establish its effectiveness in HE teaching and learning.

Causes of drug failure

One of the most common causes of drug failure is drug-induced liver injury (DILI). The majority of these failures are idiosyncratic reactions, which occur in small patient populations (between 1 in 1,000 and 1 in 10,000) in an unpredictable manner.1 The underlying mechanism of this type of DILI is very complex and still not completely understood.2 However, recent data have suggested that the crosstalk between cytokine-mediated pro-apoptotic signalling and intracellular stress responses mediated by reactive drug metabolites is essential to the comprehension of DILI.3

Various xenobiotics (e.g. diclofenac) can induce liver damage via the tumor necrosis factor alpha (TNF-α) pathway. Secretion of this major cytokine is initiated by liver macrophages (Kupffer cells) after exposure to bacterial endotoxins (e.g. lipopolysaccharide).4 After binding of TNF-α to its receptor (TNFR1), the transcription factor nuclear factor kappa-B (NF-κB) is activated.5 In resting cells, NF-κB is retained in the cytoplasm by binding to an inhibitor of κB (IκB) complex. Activated TNFR1 signalling leads to activation of IκB kinase (IKK), which in turn leads to the phosphorylation and subsequent ubiquitination of the IκB complex.6 This complex is then targeted for proteasomal degradation. Thereafter, NF-κB translocates to the nucleus in an oscillatory manner and activates the transcription of several genes which primarily encode survival proteins, such as cellular FLICE-like inhibitory protein (c-FLIP), inhibitors of apoptosis proteins (IAPs) and negative regulator proteins (e.g. A20, IκBα).7 After protein synthesis, A20 and IκBα inhibit the function of NF-κB in a negative feedback manner (Figure 1). Modification of TNF-α-induced NF-κB translocation by various compounds is believed to shift the balance between cell survival and cell death.
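The oscillatory translocation described above is a consequence of delayed negative feedback: nuclear NF-κB drives the synthesis of its own inhibitors (A20, IκBα), which then force it back into the cytoplasm. The toy two-variable model below, with arbitrary parameters and no claim to match the cited literature, is a minimal sketch of how such feedback can yield damped oscillations of nuclear NF-κB.

```python
# Toy negative-feedback model of NF-kB nuclear shuttling (illustrative only;
# parameters are arbitrary). n: nuclear NF-kB activity; i: IkB-alpha level.
import numpy as np
from scipy.integrate import solve_ivp

def feedback(t, y, k_in=1.0, k_out=2.0, k_syn=1.5, k_deg=0.3):
    n, i = y
    dn = k_in / (1.0 + i**2) - k_out * i * n  # IkB represses import, drives export
    di = k_syn * n - k_deg * i                # NF-kB-induced IkB synthesis, decay
    return [dn, di]

sol = solve_ivp(feedback, (0.0, 100.0), [0.1, 0.1], dense_output=True)
t = np.linspace(0.0, 100.0, 500)
n, i = sol.sol(t)
print("first peak of nuclear NF-kB near t =", t[np.argmax(n)])
```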

Furthermore, reactive compound metabolites are capable of modifying cellular molecules, which can lead to intracellular disturbances and eventually to the induction of various stress response or toxicity pathways.8 These pathways, combined with a decreased capacity for cell damage recovery and protection, enhance the susceptibility of various cells to cell death. Up to now, insufficient studies have been performed to investigate the contribution of the various pathways to DILI, and it remains uncertain which drug-induced toxicity pathways modulate the pro-apoptotic activity of TNF-α signalling in idiosyncratic DILI reactions. However, several stress responses are likely involved in the development of DILI. The Kelch-like ECH-associated protein 1 (Keap1)/nuclear factor-erythroid 2 (NF-E2)-related factor 2 (Nrf2) antioxidant response pathway and the endoplasmic reticulum (ER) stress-mediated unfolded protein response (UPR) have both been studied in drug-induced toxicity of hepatocytes [2]. The Keap1/Nrf2 pathway is essential in sensing ROS and/or cellular oxidative stress [6]. Under normal circumstances, Keap1 retains Nrf2 in the cytoplasm and guides it toward proteasomal degradation. Nrf2 signalling is important in the cytoprotective response against ROS, but its role in the TNF-α/drug interaction in idiosyncratic DILI remains unclear.

Furthermore, the ER stress-mediated UPR is a stress response to enhanced translation and/or disturbed protein folding. Should this adaptation fail, a pro-apoptotic program is initiated to eliminate the injured cell. The exact mechanism and role of the ER stress signalling response in DILI, in relation to TNF-α-induced apoptosis, remain unclear.

In this research, we hypothesize that stress response mechanisms (e.g. ER stress responses, oxidative stress responses) are involved in the delay of TNF-α-induced NF-κB nuclear translocation upon exposure to various compounds.

In this project, a human HepG2 cell line will be used to study the interaction between five different compounds (amiodarone, carbamazepine, diclofenac, nefazodone, ximelagatran) and the cytokine TNF-α. To determine the overall percentage of cell death, a lactate dehydrogenase (LDH) assay will be performed. Furthermore, in order to quantify the number of apoptotic cells, an Annexin V affinity assay will be executed. It is expected that the concentration-dependent toxicity of the compounds is enhanced in the presence of TNF-α. Live cell imaging with HepG2 GFP-p65 cells will be used to follow NF-κB translocation after exposure to the five compounds. Subsequently, automated image quantification of the nucleus/cytoplasm ratio of the p65 signal intensity will be performed to determine the exact onset of the second nuclear entry of NF-κB. It is expected that the NF-κB translocation data will show a compound-induced delay in this onset.
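A minimal sketch of how the nucleus/cytoplasm quantification step might look, assuming nuclear and cytoplasmic masks have already been segmented for a tracked cell; the array layout, function names and the threshold used to call the second nuclear entry are all assumptions, not the actual analysis pipeline.

```python
# Illustrative quantification of the GFP-p65 nucleus/cytoplasm intensity
# ratio in a time-lapse stack; masks are assumed to be pre-segmented.
import numpy as np

def nuc_cyt_ratio(frames: np.ndarray, nuc_mask: np.ndarray,
                  cyt_mask: np.ndarray) -> np.ndarray:
    """Mean nuclear over mean cytoplasmic intensity per frame.

    frames: (T, H, W) image stack; masks: boolean (H, W) arrays.
    """
    nuc = frames[:, nuc_mask].mean(axis=1)
    cyt = frames[:, cyt_mask].mean(axis=1)
    return nuc / cyt

def second_entry_onset(ratio: np.ndarray, threshold: float = 1.0) -> int:
    """Frame index of the second upward crossing of the threshold."""
    above = ratio > threshold
    crossings = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    return int(crossings[1]) if len(crossings) > 1 else -1
```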

The activation of the NF-κB target genes cIAP and c-FLIP will be measured using western blot analysis. Moreover, the negative regulators of NF-κB, A20 and IκBα, will be studied to investigate the negative feedback loop of NF-κB. We anticipate that the western blot data will show a decrease in the production of the investigated target genes, because of reduced TNF-α-induced NF-κB transcriptional activity.

Finally, the results will be analysed statistically using a t-test, or a two-way analysis of variance (ANOVA) in the case of multiple comparisons.
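As a sketch of that analysis step, assuming the LDH readings are collected in a tidy table (the file name and the column names 'ldh', 'compound' and 'tnf' are illustrative, not the actual data layout):

```python
# Illustrative statistical analysis of LDH cytotoxicity data.
import pandas as pd
from scipy import stats
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("ldh_results.csv")  # hypothetical results file

# Single comparison: one compound with vs. without TNF-alpha co-treatment
a = df.query("compound == 'diclofenac' and tnf == 0")["ldh"]
b = df.query("compound == 'diclofenac' and tnf == 1")["ldh"]
print(stats.ttest_ind(a, b))

# Multiple comparisons: two-way ANOVA over compound and TNF-alpha status
model = ols("ldh ~ C(compound) * C(tnf)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```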

Karma by Khushwant Singh

The short story 'Karma' was written in 1950 by Khushwant Singh, an Indian novelist who wrote in English. The story is 65 years old today, but many of the issues it raises are still relevant.

The story deals with problems within Indian culture. It shows the impact the British Empire had on India and the influence British norms have had on Indian society. It also shows the deep divide between men and women in India, reflected in the way men look at women, and the equally wide gulf between rich and poor; in the story, men and women do not even sit in the same section of the train.

The story takes place on a train. The main character is Sir Mohan Lal, an Indian man who considers himself handsome and refined like the English; he actually thinks of himself as an Englishman. He believes he is better than other Indians and desperately tries to fit in with the Englishmen.

Sir Mohan is very well educated; he works as a vizier and barrister and has studied in England, which may be why he thinks of himself as an Englishman. He considers himself a good-looking man; at one point in the story he looks in the mirror: 'Distinguished, efficient – even handsome. That neatly trimmed moustache – the suit from Savile Row, the carnation in the buttonhole.' This shows that he is proud of himself and knows which image he wants to project to other people, but also that he only speaks to himself.

Sir Mohan Lal is obsessed with how other people think of him. He will do anything to get to know an Englishman. On the train he meets many Englishmen, and he always carries an old copy of The Times, which shows how desperately he wants to get in touch with an Englishman, that he considers himself well educated, and that he wants to appear a man of manners and English culture. He feels that he is an Englishman and not an Indian; he thinks that Indian people are poor and not like him. He will not be seen with them, nor with his wife.

In the short story we also meet his wife, an Indian woman. He does not love her and thinks she is ugly; the only reason he married her is that he wants children. This illustrates the problem we have read about in class, where many marriages are arranged and the couples do not love each other. Sir Mohan Lal makes her travel in the zenana (a section of the train reserved for women).

On the train Sir Mohan Lal meets two English soldiers, whom he wants to travel and talk with, so he tells the guard that they can sit in his coupé. He should never have done that: the men were not looking for an Indian man to talk to, and they see themselves as better than Sir Mohan Lal, just as he had looked down on the Indian people before. Now he can feel what it is like not to be an accepted person.

Karma is when something you have done comes back to you, and in this story it certainly does.

Human Resource Management and Employee Commitment

The concept of employee commitment lies at the heart of any analysis of Human Resource Management. Indeed, the rationale for introducing Human Resource Management policies is to increase levels of commitment so that positive outcomes can result. Such is the importance of this construct. Yet, despite many studies on commitment, very little is understood about what managers mean by the term 'commitment' when they evaluate someone's performance and motivation. Two major theoretical approaches to organizational commitment emerge from previous research. First, commitment is viewed as an attitude of attachment to the organization, which leads to particular job-related behaviours. The committed employee, for example, is less often absent and is less likely to leave the organization voluntarily than less committed employees.

Second, one line of research in organizations focuses on the implications of certain types of behaviour for subsequent attitudes. A typical finding is that employees who freely choose to behave in a certain way, and who find their decision difficult to change, become committed to the chosen behaviour and develop attitudes consistent with their choice. One approach emphasizes the influence of commitment attitudes on behaviours, whereas the other emphasizes the influence of committing behaviours on attitudes. Although the 'commitment attitude to behaviour' and 'committing behaviour to attitude' approaches emerged from different theoretical orientations and have generated separate research traditions, understanding the commitment process is facilitated by viewing these two approaches as inherently inter-related. Furthermore, by virtue of commitment, the human resource management department can fully utilize the talent, skill and efficiency of employees in a productive way to fulfil both personal and organizational goals. Moreover, commitment helps fulfil the purpose of training imparted to employees, because without commitment an increased level of skill cannot be maintained. Finally, adequate commitment among employees creates a positive work culture in which all employees can be motivated and encouraged towards the excellent performance of their duties.

3.5 Social Support: its Concept, Purpose, Types, and Relations with Social Network and Social Integration

3.5.1 Concept of Social support

Social support is defined as the belief that one is cared for and loved, esteemed and valued. It is a strategic concept, useful not only for understanding the maintenance of health and the development of (mental and somatic) health problems, but also for their prevention. Types and sources of social support can vary; four main categories are (i) emotional, (ii) appraisal, (iii) informational and (iv) instrumental support. Social support is closely related to the concept of a social network, the ties to family, friends, neighbors, colleagues, and others of significance to a person. Within this context, social support is the potential of the network to provide help.

It is important for organizations to collect information on social support among employees, to enable both risk assessment and the planning of preventive interventions at different levels, such as:

a) Lack of social support increases health risks:

Lack of social support has been shown to increase the risk of both mental and somatic disorders, and seems especially important in stressful life situations. Poor social support is also associated with increased mortality. Social support may affect health through different pathways, i.e. behavioral, psychological and physiological pathways.

b) Social support is determined by individual and environmental factors:

Social support is determined by factors at both the individual and the social level. Social support in adulthood may to some extent be genetically determined. Personality factors that might be associated with perceived social support are interpersonal trust and social fear. A person's position within the social structure, which is determined by factors such as marital status, family size and age, influences the probability of receiving social support. The occurrence of social support also depends on the opportunities an organization creates for engagement with its employees.

c) Preventive interventions stimulate social support at different levels:

There are three types of preventive interventions aimed at stimulating social support: universal, selective and indicated interventions. The ultimate goal of universal interventions is to promote health; they aim at providing social support at the group or community level. Selective interventions aim to strengthen social skills and coping abilities through, for example, social skills training; social support groups and self-help groups are other examples of selective prevention programs. Indicated prevention programs aim to reduce the risk that people who already have symptoms of psychological stress will develop a mental disorder.

Social support is defined as help in difficult life situations; it is a concept that is generally understood in a spontaneous sense, as the help from other people in a difficult life situation. It has been described as 'the individual belief that one is cared for and loved, esteemed and valued, and belongs to a network of communication and mutual obligations'. In spite of these widely accepted definitions of social support, there is little consensus in the literature about the definition, and consequently about the operational implementation, of the concept. There is a need for further research, especially into what kind of support is most important for organizational commitment. In this study the applied social support score is the sum of the raw scores for each of the items. In the Guwahati Metro region, the sum-score of the Social Support Scale is classified as poor support, intermediate support or strong support.
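As a minimal illustration of this scoring scheme, the sketch below sums raw item scores and maps the total to one of the three categories. The item values and cut-off points are hypothetical assumptions, since the actual ranges of the scale are not given here.

```python
# Hypothetical sketch of the sum-score classification described above.
# The item scores and cut-offs are illustrative assumptions, not the
# actual values of the Social Support Scale used in the study.

def classify_support(item_scores, poor_max=20, intermediate_max=30):
    """Sum the raw item scores and map the total to a support category."""
    total = sum(item_scores)
    if total <= poor_max:
        return total, "poor support"
    if total <= intermediate_max:
        return total, "intermediate support"
    return total, "strong support"

# Example: one respondent's raw scores on the scale items (hypothetical).
print(classify_support([3, 4, 2, 5, 4, 3, 4, 5]))  # -> (30, 'intermediate support')
```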

3.5.2 Purpose of Social Support

The researcher holds that, in defining social support, the quality of support perceived (satisfaction) and the support provided to managerial employees are both significant here. Most studies are built on measurement of subjectively perceived support, whereas others aim at measuring social support in a more objective sense. One could also distinguish between support received and support expected when in need, and between event-specific support and general support. Defining social support as a subjective feeling raises the question whether it reflects a personality trait rather than the actual social environment (Pierce et al., 1997). Most researchers agree that the person as well as the situation affects perceived social support, and that the concept deals with the interaction between individual and social variables. In the present study the researcher observed the percentage of male and female managerial employees with poor, intermediate, and strong support in public and private organizations of the Guwahati Metro region.

3.5.3 Types of Social Support

Types and sources of social support may vary. Four major categories of social support, namely emotional, appraisal, informational and instrumental support, are used in this research work; the researcher observed these in her study.

a) Emotional support generally comes from family and close friends and is the most commonly recognized form of social support. It includes empathy, concern, caring, love and trust.

b) Appraisal support involves transmission of information in the form of affirmation, feedback and social comparison. This information is often evaluative and can come from family, friends, coworkers, or community sources.

c) Informational support includes advice, suggestions, or directives that assist the person in responding to personal or situational demands.

d) Instrumental support is the most concrete direct form of social support, encompassing help in the form of money, time, in-kind assistance, and other explicit interventions on the person’s behalf.

3.5.4 Social Support & Concept of a Social Network

Social support is closely related to the concept of a social network, or the ties to family, friends, neighbors, colleagues, and others of significance to the person. However, whereas the social network is described in structural terms, such as size, range, density, proximity and homogeneity, social support normally refers to the qualitative aspects of the social network. Within this context, social support is the potential of the network to provide help in situations of need. The social network may, however, also be a cause of psychological problems.

Hall and Wellman present the interplay between social support, the social network, and psychological health in a model: the social network as a mediating construct. The model shows that social support can be seen as resulting from certain characteristics of the social network, which are in turn caused by environmental and personal factors. It suggests that it is important to distinguish between the structural and quantitative aspects of the social network on the one side and social support on the other. In this study the researcher has correlated stress and social support with organizational commitment among managerial employees of the public and private sector in the Guwahati Metro region.

3.5.5 Social integration and Social Support

Whereas the concept of social support mainly refers to the individual and group level, the concept of social integration can refer to the community level. A well-integrated community has well-developed supportive relationships between people, with everybody feeling accepted and included. A related concept is social capital, which is often used for the sum of supportive relationships in the community. Social capital may, however, also be used in a somewhat different meaning, such as 'solidarity'. It is important for the development of organizational commitment.

In the fields of Organizational Behavior and Industrial/Organizational Psychology, organizational commitment is, in a general sense, the employee's psychological attachment to the organization. It can be contrasted with other work-related attitudes, such as job satisfaction, defined as an employee's feelings about their job, and organizational identification, defined as the degree to which an employee experiences a 'sense of oneness' with their organization. Nobel laureate Amartya Sen has said that the sense of oneness in every individual should be 'dynamic' and not confined within the narrowness of a single identity: people have to judge contextually what oneness means in the several aspects of their lives, and a person cannot have just one identity of oneness based on nationality or religion.

Organizational studies encompass the systematic study and careful application of knowledge about how people act within organizations. Organizational studies are sometimes considered a sister field of, or an overarching designation that includes, disciplines such as industrial and organizational psychology, organizational behavior, human resources, and management.

However, there is no universally accepted classification system for such subfields. Beyond this general sense, organizational scientists have developed many views of organizational commitment; the present study combines the stress and social support of higher-level employees and their effects on organizational commitment. The researcher selected the Guwahati Metro region for the study, and the study design is based on type of organization, i.e. public versus private organizations.

Climate Effect on Building Facade

Abstract: The building facade is one of the important elements of architecture. It has a significant effect on energy conservation and on the comfort of the building's users. The facade is affected by environmental conditions, and its design should take into consideration the climate of its region. This research explains facade treatment in different regions and the basic methods for designing high-performance building facades, and presents two case studies that illustrate facade design methods for two different climate conditions.

Content

1. Introduction

2. Literature Review

3. Research Discussion and Data Analysis

3.1. Design Criteria for Mixed Climate

3.2. Design Criteria for Hot Climates

4. Conclusion

5. References

1. Introduction

Climate always affects our daily life: whether it is sunny, cloudy or rainy, it influences our sense of comfort when we go outside a building. When we are inside a building, the building separates us from the outer environment and has its own conditions, which depend on the technology inside the building, such as HVAC systems, which allow us to change the temperature or humidity, etc. Buildings protect us from weather in which it is not favorable to stay outside. The conditions of the interior spaces also depend on the exterior facade treatment; for example, the heat or light that comes through glazed windows will affect the temperature of the interior.

This research explains the influence of the climate on the building facade and the main factors that affect architectural facades; the techniques of facade treatment that provide a suitable interior environment for users in a given climate condition; how the facade can be designed in a simple way to fit changes in the climate; and how facade materials can be selected to help adapt the building to climatic conditions.

2. Literature Review

Throughout history, humans have used shelter to protect themselves from dangers such as wild animals and climatic conditions. Later, with human evolution, dwellings developed: what was once a cave in a mountain became buildings of various forms and functions. Buildings provide the foundation for our daily activities, for example educational, commercial and health care activities.

Climate is generally the weather conditions of a region (temperature, air pressure, humidity, precipitation, sunshine, cloudiness, and winds) throughout the year, averaged over a series of years (The American Heritage New Dictionary of Cultural Literacy, n.d.). Every region has its own climatic characteristics that can affect the architectural facade differently. For example, in warm areas like the Middle East, people avoid the glare and heat of the sun, as demonstrated by the decreasing size of windows. In northern Europe, on the other hand, glass is used extensively to allow sunlight to enter the building and heat the interior space, because of the cold weather of the region (2010).

3. Research Discussion And Data Analysis

A facade is generally one exterior side of a building, usually but not always the front side (n.d., 2011). The building facade acts as a skin that wraps around the building and affects the internal environment as it interacts with the external one. Building facades are not only about the aesthetics of the building; they also perform as the barrier that separates a building's interior from the external environment. Facades are among the most important contributors to the energy consumption and the comfort norms of any building, and facade design and performance are among the main factors in sustainable, energy-efficient, high-performance buildings. A facade should satisfy the design as well as the functional requirements. The climate of the area plays a major role in designing the facade, and different design strategies are required for different climatic zones. One traditional way of dealing with the climate in the Middle East is the use of small openings and mashrabiya (or roshan) to cover the windows. These techniques, which characterize the facades of this region, were used to prevent heat from entering the building, to trap the cool air inside, and also to filter the dust from the incoming air (Mady, 2010).

3.1. Design Criteria For Mixed Climate

The Center for Urban Waters is a public laboratory building in Tacoma, Washington. Tacoma lies in a region with a mixed marine climate. The building was designed by Perkins+Will and received the LEED Platinum award.

Figure 1 shows average daily temperatures and the solar radiation for each month.

The temperatures of this climate zone allow cooling by natural ventilation, and the winters are quite soft with low solar radiation. Under these climate conditions, using a reasonable amount of glazing on the south and west orientations will not negatively affect a building's energy performance.

This view of the building shows the west and south facades, with the different treatments of the different building sides.

- The west facade consists of an aluminum-clad rain screen system with integrated windows, some operable and some non-operable, and exterior blinds.

- The south facade consists of a curtain wall of fritted glass and external horizontal shading devices.

The building is located on an industrial waterfront, on a long, narrow site. The program elements are located according to their needs for air and natural ventilation. The waterside of the building provides fresh, cool air, which is ideal for ventilation, so the designers placed the offices along the waterway to provide good ventilation. On the road and industrial side the opportunity for fresh air is reduced, so the designers placed the laboratories on this side, given their need for mechanical ventilation.

The shading strategies used are based on the facade orientation. The western orientation of the building receives the greatest solar heat gain, so it was designed with a low window-to-wall ratio, with vertical shading devices used to moderate the solar heat gain and the glare from the low afternoon sun. The south facade consists of a curtain wall that provides clear views of the waterside, while horizontal shading devices block the solar heat gain. The north facade consists mainly of solid elements and a minimal amount of glass; this design approach improves thermal resistance, limiting heat transmission from the exterior to the interior environment. The rain screen on the east facade, facing the industrial side, is made of horizontal corrugated metal panels; it covers the upper half of the 2nd and 3rd levels, with small windows opening onto the corrugated metal screens. These aluminum screens help manage the early morning sun and reduce its potential glare, while maintaining exterior views and maximizing natural daylighting of the interior spaces. The building uses natural ventilation to decrease its energy loads, and the amount of natural ventilation is controlled through the operable windows.

In summary, the Center for Urban Waters incorporates many sustainable elements, not only in the facade but also in the roof, sewage and mechanical systems; see the building section in Figure 3. These sustainable systems raise the building's performance and support monitoring of real-time energy use (Aksamija, 2014).

3.2. Design Criteria For Hot Climates

The Student Services Building of the University of Texas at Dallas is located in Texas, USA, in a hot climate region. It was designed by Perkins+Will and received the LEED Platinum award.

Figure 4 shows annual average daily temperatures in relation to the thermal comfort zone and the available solar radiation.

In designing the facade of this building, the main concern was the hot climate, because in this region the weather is usually hot and sunny in the summer season, while the other seasons are relatively mild.

The longer sides of the rectangular building face the north and south orientations. All sides of the building are covered by a curtain wall. The shading devices supported by the curtain wall wrap the east, west and south facades and a small part of the north facade. The shading system consists of horizontal terra-cotta louvers and vertical stainless-steel rods (Figure 5). The shading devices are distributed around the building, creating an asymmetrical pattern over the building facades; the terra-cotta shading elements are important for reducing solar heat gain in the hot summer climate.

In the interior of the building, three internal atriums provide daylight to the interior spaces (Figure 6).

The lobby is located on the east side of the building in one of the atriums, where it receives natural daylight while limiting heat gain.

This design strategy is suitable for hot climate regions, especially for reducing solar heat gain while providing natural daylight to the interior spaces. The arrangement of shading devices along the facade, together with the internal atriums, is ideal for providing natural daylight. Almost all of the spaces in the building have views to the outside. The building also contains other sustainable design strategies that improve energy efficiency and the comfort of the interior spaces (Aksamija, 2014).

4. Conclusion

Facade design is important because the facade is the connection between a building's exterior and interior. The architect has to take the building's location and climate into consideration to create high-performance facades, to provide sustainable and comfortable spaces for the building's occupants, and to significantly reduce the building's energy consumption. Strategies differ depending on the geographic and climatic region, so criteria that work best in hot climates differ from those for hot and humid or cold regions. The architect should know the characteristics of each climatic condition and location, as well as the program and functional requirements, to create a sustainable facade fitted to its environment.

Online Behavioral Advertising (OBA)

To understand where consumers' online privacy concerns originate, it first needs to be noted what OBA is and what the main mechanism behind it is. It is of great importance to note that this main mechanism is cookies, and that cookies in turn cause privacy concerns among consumers.

1.1 Online behavioral advertising

Online advertising is the provision of content and services for free by website publishers to website visitors; in this case advertisements are aimed at everyone visiting the website (networkadvertising.org, 2012). However, there is a type of online advertising specifically aimed at providing tailored advertisement content to a specific customer, known as Online Behavioral Advertising. Online behavioral advertising is the practice of gathering information about someone's activities online; this data is used to determine which form and content to display to the website visitor (McDonald & Cranor, 2009). The practice provides advertisements on the websites an individual visits and, through the collected data, makes them relevant to the individual's specific interests (Leon et al., 2012). When individuals subsequently visit a website that correlates with their specific interests, suitable advertisements are shown.

Consumers can control OBA through tools, including those associated with self-regulatory programs. Applied appropriately, these tools give the consumer more control over self-disclosure. Tools to control OBA include opt-out tools, built-in browser settings, and blocking tools (Leon et al., 2011). Do Not Track headers signal to websites that the visitor does not want to be tracked. Opt-out tools, on the other hand, give the user the ability to set opt-out cookies for multiple advertising networks. The issue with the latter is that if a consumer chooses to opt out, the network will stop showing customized advertising but will keep tracking and profiling the website visitor (Leon et al., 2011). This continued tracking and profiling has caused considerable privacy concern among consumers. The situation correlates strongly with the NPO case: NPO did not make consumers aware of an opt-out option at all, which is expected to create even more privacy concern (B. Combée, 2013).
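As a small, hedged illustration of how a Do Not Track request might be honored on the server side, the sketch below inspects the header in a WSGI-style request environment. The function name and the surrounding setup are assumptions for illustration, not part of any cited tool.

```python
# Minimal sketch: honoring the "DNT: 1" request header server-side.
# WSGI servers expose the header as environ["HTTP_DNT"]; everything
# else here (names, the print choice) is an illustrative assumption.

def should_track(environ: dict) -> bool:
    """Return False when the visitor's browser sent Do Not Track."""
    return environ.get("HTTP_DNT") != "1"

# Example request environment as a WSGI server might provide it.
request = {"PATH_INFO": "/article", "HTTP_DNT": "1"}
if should_track(request):
    print("attach tracking cookie")
else:
    print("serve the page without tracking")  # printed for this request
```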

1.2 Cookies

The most important feature of OBA is the use of cookies. Third-party HTTP cookies are the main mechanism used for online tracking. First-party cookies are placed by the domain the website user is visiting; third-party cookies are placed by a different domain, such as an advertising network. Other cookies, such as Flash cookies and HTML5 local storage, remain on the user's PC even after the website visitor deletes cookies or changes browsers (B. Krishnamurthy and C. Wills, 2009; M. Ayenson et al., 2011; M. Dahlen and S. Rosengren, 2005).

Cookies are directly linked to OBA because, as explained earlier, OBA uses third-party cookies to provide customized advertisements. A cookie is a small string of characters consisting of numbers and letters, for example 'lghinbgiyt7695nb'; this gives the computer a unique code for the cookie. The string is downloaded to an individual's web browser when they access most websites (Zuiderveen Borgesius, 2011). Cookies enable websites to recognize visitors whenever they return to the website. Only the server that sent a cookie can read, and therefore use, that cookie. These cookies are vital for offering a more customized experience (youronlinechoices.com, 2015).
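The exchange just described can be sketched with Python's standard library: the server issues a Set-Cookie header carrying an opaque code, and the browser returns that code on later visits. The cookie name and its attributes below are illustrative assumptions.

```python
# Sketch of the cookie mechanism using only Python's standard library.
from http.cookies import SimpleCookie

# Server side: issue a cookie carrying an opaque visitor code
# (the value reuses the example string from the text).
cookie = SimpleCookie()
cookie["visitor_id"] = "lghinbgiyt7695nb"
cookie["visitor_id"]["path"] = "/"
cookie["visitor_id"]["max-age"] = 60 * 60 * 24 * 365  # persist for a year
print(cookie.output())
# -> Set-Cookie: visitor_id=lghinbgiyt7695nb; Max-Age=31536000; Path=/
#    (attribute order may vary)

# A later visit: the browser sends the value back in the Cookie header,
# which is how the issuing server recognizes a returning visitor.
returned = SimpleCookie()
returned.load("visitor_id=lghinbgiyt7695nb")
print(returned["visitor_id"].value)  # -> lghinbgiyt7695nb
```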

1.2.1 Types of Cookies

There are different types of cookies; the most important cookies relevant to this research are discussed here, and the selection is derived from the cookies used by NPO. There are two categories of cookies. First-party cookies make sure the website functions optimally; the behavior of the website visitor is tracked within one website, the website the consumer visits. Third-party cookies, on the other hand, are placed by third parties, for instance so that the website can be analyzed by Google Analytics. This type of cookie makes sure the website visitor receives customized advertisements (Zuiderveen Borgesius, 2011).

First party cookies (npo.nl, 2015):

- Functional cookies: cookies that make the website function as it should. These cookies keep track of the website visitor's preferences and record that the individual previously visited the website.

Third party cookies (npo.nl, 2015):

- Analytics: cookies to measure use of the website.

- Social media: cookies to share the content of the NPO website through social media. The videos and articles opened on the website can be shared through buttons. To make these buttons function, social media cookies are placed by the various social media parties, so that they can recognize the website visitor whenever he or she wants to share an article or video.

- Advertisement cookies: cookies to show Ster adverts. These advertisements are placed on the website by the website owner or by third parties.

- Recommendations: cookies to make more suitable recommendations. NPO wants to suggest other programs for website visitors to watch online.

The main information these cookies store is:

- Keeping track of visitors on the webpages

- Keeping track of the time a visitor spends on the site

- Which areas of the website need attention in order to improve

- Keeping track of the order in which different webpages within the website are visited
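A minimal sketch of the kind of profile these items enable is given below: visits are logged against the cookie's opaque visitor code, preserving page order and timestamps. All names here are hypothetical; no real tracker exposes this exact interface.

```python
# Hypothetical sketch: the profile a tracker could assemble from the
# items listed above, keyed on the cookie's visitor code.
from collections import defaultdict
from time import time

visits = defaultdict(list)  # visitor code -> ordered (page, timestamp) pairs

def record_visit(visitor_id: str, page: str) -> None:
    """Append one page view; list order preserves the visit sequence."""
    visits[visitor_id].append((page, time()))

record_visit("lghinbgiyt7695nb", "/news")
record_visit("lghinbgiyt7695nb", "/sports")

# Differences between consecutive timestamps approximate time spent per
# page, and the list itself records the order in which pages were visited.
pages = [page for page, _ in visits["lghinbgiyt7695nb"]]
print(pages)  # -> ['/news', '/sports']
```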

Once this information is gathered, the data can be added to the existing profile information. In time, third parties are able to create a personal profile of the consumer, even though no name is attached to it. Today third-party tracking is the subject of privacy debates (Zuiderveen Borgesius, 2011). Consumers can feel their privacy is invaded if they suspect digital marketers of creating a personal profile from information gathered while they visit websites. Third-party tracking and consumer privacy receive a significant amount of attention from governments and consumer-protection bodies (Zuiderveen Borgesius, 2011).

1.2.2 Cookie use by marketers

Since the law on privacy regulations is updated continuously and there is no uniform law concerning consumer privacy, marketers are advised to weigh the benefits of practices that do not fully conform to privacy regulations against the accompanying financial and reputational risks (Chaffey & Ellis-Chadwick, 2012; Zuiderveen Borgesius, 2011). The organization must properly inform website visitors of the reasons for and the procedure of data collection; the marketer's website needs to tell its visitors how their data will be used. In addition, the consumer has to give consent for the use of consumer data. The figure below indicates the issues that deserve attention when a data subject is informed about how his or her data will be used; these issues are described below the figure.

Figure 1. Information flows that need to be understood for compliance with data protection legislation.

Source: D. Chaffey and F. Ellis-Chadwick, Digital Marketing, 2012, p. 163

- Whether the consumer will receive future communications.

- Whether the data will be passed on to third parties, with consent explicitly required. Referring to section 2.1 on privacy and to the recommendation section on privacy issues regarding NPO, it can be seen that NPO did not obtain explicit 'consent' from the website visitor, which caused its bad publicity.

- The length of data storage. Referring to the models in section 2.3, confidence, knowledge and control are major indicators of consumer behavior regarding OBA.

According to marketingsherpa.com (2011), a business using OBA has to know whether it properly understands its application. It is important to conduct a 'cookie audit': understanding the types of third-party tracking systems that are available and which ones are placed on consumers' browsers when they visit the company's website. This is important because third-party tracking can slow down a company's website, and information obtained from customers can leak out to unknown companies.

Furthermore, it is important to clearly give website visitors the option to opt out and to provide them with information on any form of tracking. First, the website visitor needs to be aware of what the website is about; second, the consumer needs to be provided with information about the substance of the ads; and last, the website visitor should be able to learn how to opt out.

An opt-out means a company will stop collecting and using information from different web domains for the purpose of providing personalized advertising based on data gathered with third-party cookies in OBA. However, it should be made clear to the website visitor that opting out does not mean they will stop receiving online advertising; they will continue to receive advertisements, just not tailored to their specific preferences (networkadvertising.org, 2012; youronlinechoices.com, 2009). Some companies use Flash cookies, which bring regular cookies back to life after the website visitor has deleted them: the new cookie receives the same code as the one the visitor removed (Soltani, 2009).
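To make these opt-out semantics concrete, the hypothetical sketch below chooses an ad based on an opt-out cookie: tailoring is suppressed, but, as the text notes, the profile itself may still exist. The cookie name, function and data shapes are illustrative assumptions.

```python
# Hypothetical sketch of opt-out semantics as described above: the opt-out
# cookie suppresses *customized* ads, while tracking may still continue.
def choose_ad(cookies: dict, profile: dict) -> str:
    if cookies.get("optout") == "1":
        return "generic ad"          # no tailoring for opted-out visitors
    interests = profile.get("interests", [])
    return f"ad about {interests[0]}" if interests else "generic ad"

print(choose_ad({"optout": "1"}, {"interests": ["cycling"]}))  # -> generic ad
print(choose_ad({}, {"interests": ["cycling"]}))               # -> ad about cycling
```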

In addition, it is of great importance to give website visitors control of their data: 67% of website visitors trust transparent brands more, and this confidence makes a purchase 36% more likely than with a non-transparent brand. Companies that do not obey privacy regulations have also shown decreases in turnover (Brown, 2009). It is likewise important to provide measures for website visitors to manage cookie tracking and privacy: the visitor should see very easily what the purpose of the data obtained from them is and, as explained earlier, should have a quick option to opt out (marketingsherpa.com, 2011).

1.2.3 Drawbacks cookie use

Netscape Navigator, the first successfully implemented web browser, introduced cookies in version 1.0, released in 1994 (Turnbull, 2013). Even though cookies were introduced almost 20 years ago, until recently two-thirds of the samples used in research could not even explain what a cookie actually is. Even now, customers believe more data is collected from them than is the case; consumers do not understand who is involved in OBA or how these companies are involved, nor do they understand the technologies present (Ur et al., 2012).

Moreover, the majority of web users do not know about opt-out cookies; even now the perception exists that opting out can be done by turning to browser settings or deleting cookies (Ur et al., 2012). However, if website visitors are aware that they have the ability to opt out, and gain more knowledge of privacy matters, they feel more positive about the application of OBA by businesses (McDonald & Cranor, 2008). If consumers do not understand their privacy rights, they are pre-biased on the matter; this issue is discussed further in chapter 2. If organizations inform website visitors easily and properly about their privacy rights, they can possibly break through this pre-assumption (McDonald & Cranor, 2008, 2009).

In addition, it is debatable whether the opt-out icon demonstrated in section 2.1 achieves its aim. According to critics, the meaning of this icon is not known to consumers, and opt-out possibilities are therefore perceived as difficult ('Volg-me-niet register is wassen neus', 2011).

Furthermore, according to marketingsherpa.com (2011), consumers should be better informed about opt-out opportunities in order to take away uncertainty about privacy matters. The privacy issues involved, as partly discussed above, are analyzed further in chapter 2, where the effects of privacy matters on consumer behavior are analyzed with the assistance of models.

Besides, consumers state that they find privacy important but ease of use equally important. They are annoyed by continuously being asked to accept the use of cookies (B. Combée, 2013). They also complain about websites that place a cookie wall, which makes it possible to enter the website only if the use of cookies is agreed to.

2. How do consumers react to current privacy concerns in OBA?

2.1 Privacy

Privacy is defined as a moral right to prevent intrusion into one's personal information. Nowadays privacy is of high importance to consumers: increasing technology brings increasing possibilities for more sophisticated identity theft, such as hacking, or simply for invasion of consumers' online privacy. By gathering consumers' personal information with the cookies explained earlier, the degree of customization can increase greatly (Chaffey & Ellis-Chadwick, 2012).

2.1.1 Root of privacy concerns online

In Europe, the legal framework for online behavioral tracking is the European Data Protection Directive, whose regulations cover the gathering, processing, filing and transmission of personal information. In addition, the European e-Privacy Directive regulates data privacy and the use of cookies. This regulation obliged third parties placing cookies to give website visitors the ability to opt out, giving visitors the chance to reject cookies; consequently, websites provided information on how to opt out or reject cookies.

Zuiderveen Borgesius (2011) researched the extent to which practice complies with the data protection directive's notion of 'permission': a willing, specific volition based on information. Research has shown that the processing of personal data cannot be based on article 7(b) of the data protection directive: there should be a positive agreement, and there is no form of agreement if consumers are not aware of exchanging personal information in return for a service. Nor can the collection of personal information be justified by article 7(f), which weighs the interests of third parties, unless the privacy of the person concerned is not invaded; the right to privacy weighs heavily here. In following the online behavior of website visitors, Dutch companies therefore cannot rely on these two articles. In 2011 article 2(h) came to attention, which entails that for unambiguous permission the website is not allowed to assume too quickly that the website user gives permission for the use of personal information (European Commission, 2003; 2006). The latter was specifically the case with NPO, as described in the introduction: they explicitly did not ask for permission before collecting data.

Even though cookie policies change continuously, it is important to describe how consumers are kept up to date on their privacy rights, and consequently what effect the extent of privacy has on consumer behavior, discussed with models in section 2.3.

Components of consumer updates on privacy (iab.net, 2015):

- Advertising Option Icon: this icon indicates that the form of advertising is supported by a self-regulatory program. Consumers who click on the icon are provided with a disclosure statement about data gathering and what the information is used for, and with a simple opt-out system.

- Consumer choice mechanism: at AboutAds.info consumers are provided with information on how to opt out.

- Accountability and enforcement: since 2011, the DMA (Direct Marketing Association) and the CBBB have employed technologies to provide website visitors with information on a company's transparency and control provisions.

- Educational programs: businesses and consumers are educated about opt-out options and thus about self-regulatory systems.

For now, self-regulatory systems are opt-out based, with opt-in a future possibility. The components above all provide consumers with more information on opt-out possibilities; with respect to privacy concerns, this self-regulation shows that consumers should be educated about opt-out options. Privacy of personal information gathered via cookies needs considerable attention: previous research has shown that if consumers perceive their privacy as invaded, they consider the practice invasive and obstructive, so it is important for companies to be transparent (Goldfarb & Tucker, 2011). Even though advertising becomes more personalized, website visitors feel uncomfortable with companies tracking their online affairs (Beales, 2010; Goldfarb & Tucker, 2011).

2.2 Statistics

With the assistance of statistics, it is analyzed in which areas the privacy problems of consumers occur. Once these are identified, the problem areas can be theoretically analyzed with the multiple online behavior models in section 2.3, in order to arrive at a sound recommendation on how consumers actually behave and how marketers can respond to this.

Areas of consumer concern regarding online privacy in OBA (TRUSTe, 2008):

Advertising relevance:

- Of 87% of respondents, 25% of the ads seen were actually personalized.

- 64% would choose to see only ads from online stores they are familiar with and trust.

- 72% find OBA intrusive if it is not tailored to their specific needs.

Awareness of OBA:

- 40% are familiar with OBA, and a higher percentage knows of tracking: 71% know their browsing data is gathered by third parties.

Attitudes toward OBA:

- 57% say they are not comfortable with browsing history being collected for customized advertising.

- 54% state they delete their cookies two to three times a month.

- 55% are willing to receive customized online ads in exchange for filling in an anonymous form; 19% are not. 37% would still fill out a form about products, services and brands to buy even if their answers were not anonymous.

- 40% of participants in the online study agree or strongly agree that they would watch what they do online more carefully if advertisers were collecting data (McDonald & Cranor, 2010).

Intent to take measures:

- 96% want to take measures to protect their privacy settings. However, respondents do not state that they want no part of OBA at all: 56% would not even click to reduce unwanted ads, and 58% would not register in a do-not-follow-me registry.

From these statistics it can be concluded that the majority of respondents in this study have negative attitudes towards privacy matters in OBA. However, referring to the first heading, advertising relevance, and the last heading, intent to take measures, the majority of consumers do prefer some form of OBA, which implies that cookies are needed. The problem area, as discussed earlier, is therefore rather that consumers do not know enough about opting out and are not confident in privacy statements. Knowledge and trust are therefore the major factors to analyze in order to see how companies can overcome this issue.

These factors, which are analyzed using models, are of great importance, because TRUSTe states that knowledge and trust strongly influence online behavior now that there is increased awareness among website visitors of being tracked in order to be provided with customized advertisements. Even though visitors are aware that they remain anonymous because their name is not collected (google.com, 2015; Zuiderveen Borgesius, 2011), they do not feel comfortable being followed and targeted. Website visitors therefore strongly prefer to limit, and have more control over, OBA practices (TRUSTe, 2008).

2.3 Models concerned with consumer behavior

2.3.1 Knowledge: Consumer Privacy States Framework

To assess to what extent consumers consider their privacy important, and which factors influence this, the Consumer Privacy States Framework is applied. This framework is derived from the Journal of Public Policy & Marketing and was established by G. Milne and A. Rohm. According to Milne and Rohm, the framework focuses on two dimensions, which are a reaction to consumers' privacy concerns and their willingness to provide marketers with their personal information (Sheehan & Hoy, 2000; Milne & Rohm, 2000). These dimensions are awareness of data collection and knowledge of name removal mechanisms.

According to this model, privacy exists only in cell 1. In this state consumers are aware that their personal information is being gathered, and they know how to opt out. Such consumers are more satisfied and react more positively towards direct marketing relationships (Milne & Rohm, 2000). Research has shown that consumers are willing to exchange private information for benefits: they give more information to digital marketers if they perceive long-term benefits, and they are more willing to give up personal information if they are able to control their privacy (Ariely, 2000).

Table 1: Consumer Privacy States Framework (G. Milne and A. Rohm, 2000)

                               Knowledgeable about             Not knowledgeable about
                               name removal mechanisms         name removal mechanisms
Aware of data collection       Cell 1: Privacy exists          Cell 2: Privacy does not exist
Not aware of data collection   Cell 3: Privacy does not exist  Cell 4: Privacy does not exist

(Note: opt-out options in the 2008 study are used as a concept similar to name removal mechanisms in the 2000 study.)
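Read as a lookup from the two dimensions to a cell, the framework can be sketched as below; the function and its names are illustrative assumptions, not anything published by Milne and Rohm.

```python
# Hypothetical encoding of Table 1 as a lookup on the two dimensions.
def privacy_state(aware_of_collection: bool, knows_removal_mechanism: bool) -> str:
    cells = {
        (True, True): "Cell 1: privacy exists",
        (True, False): "Cell 2: privacy does not exist",
        (False, True): "Cell 3: privacy does not exist",
        (False, False): "Cell 4: privacy does not exist",
    }
    return cells[(aware_of_collection, knows_removal_mechanism)]

# Privacy exists only when both conditions hold (cell 1).
print(privacy_state(True, True))   # -> Cell 1: privacy exists
print(privacy_state(False, True))  # -> Cell 3: privacy does not exist
```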

Research showed that 34% of the population was positioned in cell 1: 74% were aware of data collection and 45% knew how to handle name removal mechanisms. The research showed that organizations need to educate consumers more intensively about name removal mechanisms (Culnan, 1995; Milne, 1997). This issue persists: according to TRUSTe (marketwire.com, 2008), 70% of consumers are aware of data collection and 40% know about opt-out options.

On the other hand, Wood & Quinn (2003) evaluated the effects of forewarnings on attitudes. If consumers are informed in advance about the function of cookies, biased thinking can be encouraged, which generates negative attitudes towards that function; if people are not provided with information on opt-out or opt-in possibilities, however, they are more likely to share their personal information. The cookie icon can be seen as a forewarning: consumers experience it as being warned against something, which turns their behavior towards resistance. This resistance occurs because individuals feel their privacy is invaded and do not feel comfortable with others knowing their preferences; according to Jacks and Devine (2000), it takes the form of protecting personal freedom. If resistance occurs, resistance strategies may be applied.

According to Jacks and Cameron (2003), consumers can respond with the following resistance strategies. The individual can show resistance by not responding to the customized advertising message or by leaving the situation as it is, which is called selective exposure. The individual can also immediately start making counter-arguments, in which case counter-arguing takes place. With attitude bolstering, the individual strengthens his or her own original view without directly making counter-arguments. Source derogation means insulting the source or rejecting the validity of the source. In the case of social validation, individuals resist the customized message by bringing to mind others who share the same viewpoint. In the case of negative affect, individuals get angry because their personal information is used without the source indicating what it is used for.

Eventually, resistance does not have to appear when a forewarning in the form of an icon is received. Instead of resistance strategies, individuals can choose to adjust their cookie settings, or register not to be followed anymore by signing into an authorized do-not-track register. As explained under the statistics heading, 40% would indeed take measures if their personal information were collected (TRUSTe, 2008), so resistance strategies play a significant role.

2.3.2 Rank order table: Trust

In addition to this framework, Earp & Baumer (2003) introduced a rank order of the most influential factors affecting consumer behavior regarding privacy. The table below shows that consumers who have high confidence in the privacy practices of a website are more willing to provide personal information.

Table 2: Rank ordering of stated influential factors in confidence in the privacy practices of a website. Source: J. Earp and D. Baumer, 2003

Rank  Factor
1     Company name
2     Option to opt out
3     Presence of a privacy policy
4     Presence of a web seal
5     Design of the site

In this study, 76% of respondents cited the ability to opt out as an important factor for confidence in the privacy practices of a website. According to the research, 87.5% of consumers expect detailed information about privacy policies when visiting websites, while only 54% of them actually read these policies; 66% reported a rise in confidence when a website provides a comprehensive privacy policy (Earp & Baumer, 2003). Consumers also believe that a website with a comprehensive privacy policy will always live up to that policy (Antón et al., 2002). This again implies that most internet users prefer the assurance of a privacy policy but are less concerned about what the policy actually says (Earp & Baumer, 2003). Trust and confidence therefore play a more important role in the decision to provide private information than the actual content of the policy.

2.3.3 The consumer profile

The consumer profile is relevant to this situation in the sense that the effect of consumers' perceptions of OBA can be measured. Risk and privacy invasion are major areas of concern among consumers, so it can be analyzed to what extent these perceptions affect their online behavior. Through such an analysis, companies can focus on the areas to improve in order to avoid privacy issues in the future.

The first factor of the consumer profile to analyze is security and privacy information. As described earlier, consumers want assurance that accurate privacy information is provided to them, yet in reality this does not make them read it. Referring back to the rank order table, 87.5% of website visitors expect proper privacy disclosure but only 54% actually read it (Earp & Baumer, 2003). It can therefore be stated that customers are not focused on security explicitly but on the idea of security; the issue that evolves around privacy concerns the assurance of privacy information rather than its specific content. Websites that simply demonstrate proper privacy regulation therefore have a greater chance of customers with a positive perception of their online privacy practices. In addition, according to C. Hoofnagle (2010), internet users rarely read privacy statements. On the other side, if consumers are better informed about opt-out options, there is a possibility this knowledge will create resistance, as described earlier (Wood & Quinn, 2003).

Secondly, risk plays an important role in consumers' online behavior. The effectiveness of online sales can rise substantially if the perception of risk is reduced. Even if customers read the stipulations, it is questionable whether they realize the consequences of the gathering and analysis of their personal information via cookies (Barocas & Nissenbaum, 2009). Even where anonymized information can be linked to an individual, that individual may think there is only a small chance of this happening (Zuiderveen Borgesius, 2011). Privacy regulations are thus again supposed simply to provide a feeling of security: it is often not risk itself that is considered but the perception of risk, and when website visitors think there is only a small chance of third parties getting access to information they consider personal, their evaluation of risk is poor.

Third, trust is highly correlated with risk: increased trust follows from a decrease in perceived risk and creates positive beliefs in the business's online reputation. Fourth is perceived usefulness, which incorporates the time and effort required for an individual to learn how to opt out (Perea et al., 2004). Website visitors have only limited knowledge of information and communication technology; consumers need to understand what is written in privacy statements and what they actually agree to (Perea et al., 2004). As described earlier, educating website visitors through forewarnings can create resistance, which negatively affects their purchasing behavior (Wood & Quinn, 2003).

Finally, ease of use also has a significant impact on consumers' online behavior: using a new technology needs to be free of effort. When internet users visit a website, they experience fully analyzing the privacy statement as very time-consuming; this makes visitors skip it, or even state that they do not care about their privacy (in the statistics this is about 3%). On the other side, under the law it cannot be assumed that website visitors who do not read the privacy statements willingly accept the browser settings for cookies; article 2(h) of the Data Protection Directive, which requires permission to be a free, specific and informed volition, therefore causes considerable problems (Article 29 Working Party on privacy protection, 2008).

3 What strategies should marketers apply to respond to current privacy concerns regarding cookies in OBA?

3.1 Coercive vs. non- coercive strategies

Organizations dealing with consumers' online privacy concerns should be aware of whether they are adopting a coercive or a non-coercive influence strategy. The coercive influence strategy involves websites offering incentives to make consumers increase self-disclosure (provide more personal information) (Acquisti & Varian, 2005). Incentives to provide personal information can be economic, such as promotions, discounts and coupons, or non-economic, such as customization, personalization and access to exclusive content. Threats indicate a penalty or the exclusion of benefits for non-compliance: if the request is not honored, the website visitor cannot use the content of the website. NPO, for example, like many websites, demands that customers provide their personal information in order to register on the website and access specific information on it. This method of data gathering punishes people who refuse to provide their personal information by denying them the website content they requested (Sheehan, 2005).

Under non-coercive influence strategies, NPO would still take the same actions but without the use of rewards or penalties. For example, a website could explicitly ask visitors, through web forms, to provide their personal information without the use of non-economic incentives, in this case customized advertisement. Instead of providing incentives, NPO could offer recommendations, making the consumer believe that providing personal information can improve their experience of the website (customization), so that the website still reaches its original aim. Websites can also use information provision, giving website visitors privacy policies that state how and why information will be collected (Milne, Rohm and Bahl, 2004), and can provide seals of trust to guarantee privacy protection (Gabbish, 2011).

The main focus for websites such as NPO is identifying strategies for gathering information from website visitors that make it possible to reduce privacy concerns and increase consumers' trust. According to Payan & McFarland (2005), the application of non-coercive influence strategies has shown positive relational effects, whereas coercive strategies have shown the opposite effect. According to Hausman & Johnston (2009), non-coercive strategies have a positive influence on trust while coercive strategies have the opposite effect. The privacy literature also shows that privacy policies and seals decrease privacy concerns and increase trust, while rewards and threats decrease trust and increase privacy concerns (Gabbish, 2011).

3.2 Application of the structural model of privacy policy

To reduce the chance of consumers adopting resistance strategies, companies can use the structural model of privacy policy, privacy concern, trust and willingness to provide personal information. This model showed that, applied properly, it allows companies to increase consumer confidence and willingness to provide personal information (Wu et al., 2012). The model consists of the parameters notice, choice, access, security and enforcement.

Source: Wu et al., 2012

Notice is the most important parameter, stating that consumers should be informed about the collection of personal data before such data is gathered from them (Wu et al., 2012). In the NPO case, personal data was collected from consumers without their being aware of it (Pijnenburg, 2014). Choice gives consumers the ability to control the personal data obtained from them. Access gives website users insight into their data and lets them check whether the data collected from them is correct and complete. Security is concerned with checking whether information is secure and correct.

For data integrity to occur, website owners and third parties should take measures that give consumers the ability to inspect their data, erase information, and change it to anonymous characters. Enforcement is one of the most important parameters of privacy protection, since privacy can only be assured if there are measures that enforce privacy protection (Wu et al., 2012).

The study by Wu et al. (2012) concluded that security ranks highest among consumers' concerns. A website owner that aims to increase trust among website visitors, so that they provide more personal information, should increase its focus on the provision of security and secure data, along with creating privacy statements.

The study by Wu et al. (2012) researched the relationship of the content of privacy policies to trust and online privacy concern. Moderating variables can affect these relationships; they tend to describe consumer behavior and therefore should not be left out of the original model. The moderating variables researched are cross-cultural effects, age and gender. According to the study, culture has an important moderating effect on website visitors' reaction to the content of a privacy policy: some cultures show a rise in trust in websites when consumers are given access to their data and when their personal data is secure, and cultural differences significantly shape the behavior of website users and their choices in online activities. Gender also influences privacy concerns and willingness to provide personal information: women show more openness and therefore more self-disclosure, but have higher needs for privacy (Wu et al., 2012). Age can also have a significant impact on the relationship between the content of a privacy policy and privacy concern/trust: research showed that the older people get, the more worried they are about their online privacy.

3.3 Web bugs

According to Goldfarb & Tucker (2010), web bugs are 1x1-pixel pieces of code that give online advertisers the ability to follow consumers online. Web bugs are not the same as cookies, since they are invisible to the website user and are not saved on the visitor's computer; a consumer is therefore not aware of being tracked unless they analyze the HTML code of the webpage. Web bugs track the consumer from website to website, and they can even track how far a visitor scrolls down a page, which improves the collection of the website visitor's preferences (Goldfarb & Tucker, 2010). According to Murray & Cowart (2001), web bugs are used by approximately 95% of top brands. Since consumers are not aware of the data collection, privacy concerns do not arise as much as with cookies; however, if the law were to require websites to inform consumers about web bugs, privacy concerns could rise again (Goldfarb & Tucker, 2010). Web bugs can therefore be seen as an alternative to cookies, but if the Privacy Directive adjusts the law, web bugs would become similar to cookies, with the same privacy concerns as a consequence.
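The mechanism can be illustrated with a short sketch: a web bug is simply an invisible image whose URL, served by a third party, encodes what the visitor is doing, so the third party's server log records the visit. The tracker domain, function name and query parameters below are hypothetical.

```python
# Hypothetical sketch of a web bug: an invisible 1x1 image whose request,
# served by a third-party domain, logs the page and visitor code.
from urllib.parse import urlencode

def web_bug_tag(tracker: str, page: str, visitor_id: str) -> str:
    """Build the HTML for an invisible tracking pixel (illustrative only)."""
    query = urlencode({"page": page, "visitor": visitor_id})
    return (f'<img src="https://{tracker}/bug.gif?{query}" '
            f'width="1" height="1" alt="">')

print(web_bug_tag("tracker.example.com", "/article", "lghinbgiyt7695nb"))
# -> <img src="https://tracker.example.com/bug.gif?page=%2Farticle&visitor=lghinbgiyt7695nb" width="1" height="1" alt="">
```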

4 Conclusion/ Recommendation

The reason this paper focuses on the NPO is that in July 2014 it received a penalty from the Dutch authority for consumers and markets, the ACM (acm.nl, 2014). The NPO placed cookies that tracked website visitors without giving them accurate information. The ACM claimed the NPO was complying neither with article 11.7a of the Dutch Telecommunications Act nor with the Dutch Data Protection Act. The NPO is only allowed to track consumers if the visitor's consent is given willingly and unambiguously, based on the information that is disclosed (Fouad, 2014). Referring back to chapter 2, the NPO did not comply with the rules of article 2.h: since 2011, article 2.h has made clear that 'unambiguous' permission means a website may not be too quick to assume that the user has given permission to use personal information (European Commission, 2003; 2006).

From the models of factors influencing consumer behavior in section 2.3, the Consumer Privacy States Framework holds that, in the eyes of consumers, privacy exists when the consumer is aware of data collection and knowledgeable about opt-out practices. The NPO therefore went wrong in not giving consumers the sense that privacy exists.

The rank-order statistics in section 2.3.2 showed that consumers do need assurance that a website has a comprehensive privacy policy. However, a website having a privacy policy does not make consumers actually read it (Earp & Baumer, 2003; Antón et al., 2002). Consumers who do not feel knowledgeable about their rights therefore show resistance. This is emphasized by figures showing that the NPO's cookie wall is perceived as pressure: it effectively states, "if you don't accept my cookies you can't visit my website," with the consequence that visitors are lost. Other businesses use a softer approach, at the risk of losing personal information. The cookie wall resulted in a short-term loss in turnover of 0-5%; in the long term, the NPO expects a rising trend in visitors to its website (Douma & Verspreek, 2014).

Referring back to the customer profile model in section 2.3.3, the factors influencing consumer behavior online show that if consumers feel more secure about how to control their privacy online, they will have a more positive perception of OBA. On the other hand, more control would mean more resistance (Wood & Quinn, 2003). In addition, what matters is not the actual risk, which is rarely experienced directly, but the perception of risk.

The NPO should therefore focus in the future on making its privacy statements accurate and clear and on creating confidence among website visitors. In the end, consumers are not specifically worried about their privacy or the detailed information in privacy statements, but rather about their degree of control, as all three models confirm.

To keep consumers from turning to resistance strategies, influence strategies can be applied. Some of these could be considered manipulative; others, however, can increase consumers' perception of security (Kirmani & Campbell, 2004). The effect of influence strategies is not the same for all individual website visitors: differences may appear in privacy concerns, consumer trust, and willingness to provide personal information (Milne et al., 2009). Research has shown that non-coercive strategies, such as placing privacy policies on a website, decrease concerns about the disclosure of personal information, whereas coercive strategies that offer a reward increase privacy concern and decrease willingness to self-disclose (Andrade et al., 2002). It is therefore recommended that the NPO adopt a non-coercive strategy to increase trust and willingness to provide personal information.

Referring back to the structural model of Wu et al. (2012), that study concluded that security ranks highest among consumer concerns: website owners who aim to increase trust among visitors, so that they provide more personal information, should focus on providing security and secure data handling when creating privacy statements or building the website. This again shows that the NPO should pay more attention to trust in order to increase willingness to provide personal information, a strategy that correlates highly with the non-coercive strategy. Under a coercive strategy, the NPO would put too much focus on letting customers know about the customization provided, which would increase resistance and reduce trust. The non-coercive strategy and the structural model (with its emphasis on trust) both focus on providing security to increase trust and, in turn, a higher willingness to provide personal information.

An alternative to cookies could be the application of web bugs. However, web bugs are only a short-term solution: once privacy regulations change, web bugs will become similar to cookies. It is therefore recommended that the NPO, as an example organization, not turn to this strategy.

MPPT CONTROLLER UNDER PARTIAL SHADING CONDITIONS

ABSTRACT: Maximum Power Point Tracking (MPPT) is the most important part of an energy conversion system using photovoltaic arrays. MPPT techniques are used in photovoltaic (PV) systems to maximize the PV array output power by continuously tracking the maximum power point (MPP), which depends on panel temperature and on irradiance conditions. The power-voltage characteristic of PV arrays operating under partial shading conditions exhibits multiple local maximum power points (LMPPs). In this paper, a review of the various characteristic curves of an MPPT controller under partial shading conditions is presented to analyze the performance of the MPPT controller under such conditions.

Keywords: Maximum Power Point Tracking (MPPT), Global Maximum Power Point (GMPP), Local Maximum Power Point (LMPP), Multiple Maxima, Partial Shading, Photovoltaic (PV).

I. INTRODUCTION

A photovoltaic (PV) cell is an electrical device that converts the energy of light directly into electricity through the PV effect. PV cells have a complex relationship between solar irradiation, temperature, and total resistance, and exhibit a nonlinear output characteristic known as the P-V curve. Maximum power point tracking (MPPT) techniques must therefore be employed in PV systems in order to maximize their output power. Many MPPT methods have been reported in the literature, such as hill climbing, perturb and observe (P&O), incremental conductance (INC), and ripple correlation.

However, when there are multiple local power maxima, whether from partial shading or from installation on a curved surface, conventional MPPT techniques do not perform well. Multiple maxima may occur due to bypass diodes, which are used to keep hot spots from forming when some cells in a module, or some modules in a string, receive less irradiance than others. Without the remediation of power electronics, the energy lost to partial shading can be significant. It is therefore imperative to use MPPT techniques that reliably track the unique global power maximum present in shaded arrays.

Some researchers have proposed global maximum power point tracking (GMPPT) algorithms to address the partial shading condition. It is observed that the peaks follow a specific trend in which the power at a peak point continues to increase until it reaches the GMPP, after which it continuously decreases. The proposed algorithm incorporates an online current measurement and periodic interruptions to address certain challenges associated with rapidly changing insolation and partial shading, and it can be an effective way to mitigate the effect of partial shading. Simulation results obtained by measuring environmental parameters will, however, differ drastically from the actual case, because the actual characteristic of the solar panels depends on many factors (e.g., light intensity, temperature, ageing, dust, and partial shading). In addition, the method increases the PV system cost in practical commercial applications.

Fig. 1 PV array under different partial shading conditions.

II. PARTIAL SHADING CONDITIONS

Fig. 1 shows a PV array which has four PV modules connected in series under uniform insolation conditions. Fig. 2(a) illustrates typical I-V and P-V curves for the PV array under a uniform solar irradiance of 1000 W/m2 on all the PV modules. A traditional MPPT algorithm can reach this single peak and continue oscillating around the MPP. The P&O method, for example, perturbs the solar array voltage in one direction in each sampling period and tests the power change afterward. Assume that initially the PV array is operating at point A, as shown in Fig. 2(a). The operating voltage of the PV array is perturbed in a given direction (from A to B), and an increase in output power is observed (PB > PA). This means that point B is closer to the MPP than point A, and the operating voltage must be perturbed further in the same direction (from B to C). On the other hand, if the output power of the PV array decreases (from D to E), the operating point has moved away from the MPP, and the direction of the voltage perturbation must therefore be reversed (from D to C). Through constant perturbation, the operating voltage eventually reaches, and then continues oscillating around, the MPP.
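The logic just described reduces to a short loop. The sketch below is a minimal Python illustration of it, assuming a hypothetical pv_power() measurement function and an arbitrary single-peak P-V curve; the step size and starting voltage are illustrative, not values from the paper.

```python
# Minimal perturb-and-observe (P&O) loop: keep perturbing the operating
# voltage in the same direction while power rises, reverse on a drop.

def perturb_and_observe(pv_power, v_start, v_step=0.5, n_steps=200):
    """Walk the operating voltage toward the MPP (A -> B -> C in the
    text); reverse the perturbation when power falls (D back to C)."""
    v = v_start
    p = pv_power(v)
    direction = +1                      # initial perturbation direction
    for _ in range(n_steps):
        v_new = v + direction * v_step  # perturb the operating voltage
        p_new = pv_power(v_new)         # test the power change afterward
        if p_new < p:                   # moved away from the MPP,
            direction = -direction      # so reverse the perturbation
        v, p = v_new, p_new
    return v, p                         # ends up oscillating around the MPP

# Hypothetical single-peak P-V curve with its maximum at 17 V, 150 W:
curve = lambda v: max(0.0, -0.5 * (v - 17.0) ** 2 + 150.0)
v_mpp, p_mpp = perturb_and_observe(curve, v_start=5.0)
print(f"settled near V = {v_mpp:.1f} V, P = {p_mpp:.1f} W")
```

Note that nothing in this loop distinguishes a local peak from the global one, which is exactly the weakness that the partial shading discussion below exposes.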

However, in some practical conditions, the series strings of PV modules are not under the same solar irradiance. The partial shading condition is a common situation caused by the shadows of buildings, trees, clouds, dirt, etc. Fig. 1 shows several different partial shading situations. Under partial shading, if one module in a PV string is less illuminated, the shaded module will dissipate some of the power generated by the rest of the modules, because the current available in a series-connected PV array is limited by the current of the shaded module. This can be avoided by using bypass diodes placed in parallel with the PV modules.

Bypass diodes allow the array current to flow in the correct direction even if one of the strings is completely shadowed, and they are widely implemented in commercial solar panels. Because of bypass diodes, however, multiple maxima appear under the partial shading condition. The P-V curve of the PV array in Fig. 1 possesses multiple maxima under partial shading, as shown in Fig. 2(b), where the unshaded modules are exposed to 1000 W/m2 of solar insolation and the shaded module to 400 W/m2. Two peaks are observed in the P-V curve because of the natural behavior of the bypass diodes and the PV array connection inside the module. Point A is the GMPP, while point B is the local maximum power point (LMPP). When the area covered by the shadow changes, the P-V curve and the location of the GMPP also change, as shown in Fig. 2(c) and (d). Under these conditions, traditional algorithms can only track one of the two MPPs and cannot distinguish between the GMPP and an LMPP.

Fig. 2 P-V and I-V characteristic curves of a PV array under different partial shading conditions.

Continuing with the P&O method as an example, both points satisfy the conditions to be the 'MPP'. If the operating point obtained by the algorithm is an LMPP, the output power is significantly lower. Some researchers have proposed a global scan method to obtain the PV output curves, after which a complex algorithm is required to calculate the GMPP of the curves. This method is able to obtain the GMPP, but it cannot determine whether the PV array is actually operating under shading conditions, and it blindly and constantly scans for the MPP, wasting output energy. For these reasons, a new improved MPPT method for PV systems under the partial shading condition is proposed in this paper.
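For contrast with the P&O sketch above, the following is a minimal Python illustration of the global scan idea, assuming a hypothetical two-peak P-V curve; the voltage range, step size, and peak locations are invented for illustration. It finds the GMPP, but only by sampling the entire curve, which is the energy cost criticized above.

```python
# Global scan: sweep the whole voltage range, sample the power at each
# point, and keep the highest peak as the GMPP.

def global_scan(pv_power, v_min, v_max, v_step):
    """Sample the full P-V curve and return (voltage, power) of the
    global maximum power point."""
    best_v, best_p = v_min, pv_power(v_min)
    v = v_min
    while v <= v_max:
        p = pv_power(v)
        if p > best_p:
            best_v, best_p = v, p
        v += v_step
    return best_v, best_p

# Hypothetical shaded P-V curve: an LMPP near 8 V and the GMPP near 24 V.
def shaded_curve(v):
    lmpp = max(0.0, -0.8 * (v - 8.0) ** 2 + 60.0)    # local peak
    gmpp = max(0.0, -0.5 * (v - 24.0) ** 2 + 110.0)  # global peak
    return max(lmpp, gmpp)

print(global_scan(shaded_curve, 0.0, 40.0, 0.25))   # -> close to (24.0, 110.0)
```

A P&O tracker started on the left-hand slope of this curve would settle on the 60 W local peak; the scan finds the 110 W global peak, at the price of sweeping all 161 sample points on every pass.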

III. ANALYSIS OF CHARACTERISTIC CURVES UNDER PARTIAL SHADING CONDITIONS

In order to avoid a blind global scan, methods to determine the presence of partial shading are essential. Note that when a series PV array is under an identical solar irradiance condition [Fig. 1], every PV module works as a source, and all modules are identical in their voltage, current, and output power at any time. This state changes when there is shadow. Taking Fig. 1 as the example in the following analysis, the modules in the series array are exposed to two different solar irradiances, 1000 and 400 W/m2, respectively, and the voltages of the modules exposed to the different irradiation levels are completely different.

The two peaks divide the P-V curve into two separate parts, as shown in Fig. 2(c). Part A is the curve containing the left peak (curve A-C), and part B is the curve containing the right peak (curve C-B-E). In part A, the PV array current IPV is greater than the maximum current that the shaded modules (M3 and M4) can produce; the current therefore flows through the bypass diode of each shaded module. At this stage, only modules M1 and M2 are supplying power, while M3 and M4 have been bypassed by their diodes. The characteristic curves of module voltage against array output power are shown in Fig. 3(a) and (b); the voltages of M3 and M4 are approximately -0.7 V (the diode's forward voltage drop) in part A, as shown in Fig. 3(b).

Fig. 3 Module output voltage against array output power. (a) Unshaded module. (b) Shaded module.

A module voltage equal to the negative of the diode's forward voltage drop can therefore be used as one effective indicator of the partial shading condition. In part B, all PV modules are supplying power, but the unshaded and shaded modules are in different working conditions: because they receive different amounts of solar radiation, their voltages differ. In part B (curve C-B-E), the voltage of the unshaded modules is greater than that of the shaded modules, as shown in Fig. 4. This is evidently another indicator that can efficiently identify partial shading.

Fig. 4 Array output power against unshaded-module and shaded-module output voltage.

Following the above analysis, the observations are listed as follows; a detection sketch based on them appears after the list.

1) I-V curves under partial shading conditions have multiple steps, while the P-V curves are characterized by multiple peaks.

2) The number of peaks is equal to the number of different insolation levels irradiated on the PV array, and any peak point may be the GMPP.

3) The voltages of PV modules that receive different solar radiation are different.

4) The voltage of a PV module that is bypassed by a diode is equal to the negative of the diode's forward voltage drop.
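As a concrete reading of indicators 3) and 4), the sketch below flags partial shading from per-module voltage measurements. It is a minimal illustration under assumptions not in the paper: per-module sensing is available, the bypass-diode drop is 0.7 V, and a 0.1 V tolerance separates "matched" from "mismatched" modules.

```python
# Detect partial shading from module voltages using two indicators:
# a bypassed module sits near -0.7 V, and shaded/unshaded modules
# show clearly different voltages.

DIODE_DROP = 0.7   # assumed bypass-diode forward voltage drop (V)
TOLERANCE = 0.1    # assumed measurement tolerance (V)

def partial_shading_detected(module_voltages):
    """True if any module is bypassed (indicator 4) or if the module
    voltages diverge from one another (indicator 3)."""
    bypassed = any(abs(v + DIODE_DROP) < TOLERANCE for v in module_voltages)
    spread = max(module_voltages) - min(module_voltages)
    return bypassed or spread > TOLERANCE

# Part A of the curve: M1, M2 supply power; M3, M4 are bypassed.
print(partial_shading_detected([9.1, 9.0, -0.7, -0.7]))  # True
# Uniform insolation: all module voltages essentially equal.
print(partial_shading_detected([9.0, 9.0, 9.0, 9.0]))    # False
```

A controller could run such a check continuously and trigger a full curve scan only when shading is detected, avoiding the blind periodic scans criticized in Section II.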

CONCLUSION

In this paper, a review of concepts and developments in the field of MPPT has been presented, and various partial shading conditions have been briefly reviewed. The comparison between these various partial shading conditions has been summarized with the help of characteristic curves. It is concluded that conventional MPPT techniques have disadvantages such as energy loss and the inability to detect partial shading conditions. The majority of these problems can be eliminated by the improved MPPT controller method. The application of the improved MPPT controller method is therefore no longer limited to the generation level; research suggests that it can replace conventional MPPT methods in the near future.

REFERENCES

[1] W. Xiao and W. G. Dunford, "A modified adaptive hill climbing MPPT method for photovoltaic power systems," in Proc. Power Electron. Spec. Conf. (PESC'04), vol. 3, Jun. 2004, pp. 1957-1963.

[2] N. Femia, G. Petrone, G. Spagnuolo, and M. Vitelli, "Optimization of perturb and observe maximum power point tracking method," IEEE Trans. Power Electron., vol. 20, no. 4, pp. 963-973, Jul. 2005.

[3] F. Liu, S. Duan, and F. Liu, "A variable step size INC MPPT method for PV systems," IEEE Trans. Ind. Electron., vol. 55, no. 7, pp. 2622-2628, Jul. 2008.

[4] J. W. Kimball and P. T. Krein, "Discrete-time ripple correlation control for maximum power point tracking," IEEE Trans. Power Electron., vol. 23, no. 5, pp. 2353-2362, Sep. 2008.

[5] H. Patel and V. Agarwal, "MATLAB-based modeling to study the effects of partial shading on PV array characteristics," IEEE Trans. Energy Convers., vol. 23, no. 1, pp. 302-310, Mar. 2008.

[6] N. Thakkar, D. Cormode, V. P. A. Lonij, S. Pulver, and A. D. Cronin, "A simple non-linear model for the effect of partial shade on PV systems," in Proc. IEEE Photovoltaic Spec. Conf. (PVSC), 2010, pp. 2321-2326.

[7] Y. Chen and K. M. Smedley, "A cost-effective single-stage inverter with maximum power point tracking," IEEE Trans. Power Electron., vol. 19, no. 5, pp. 1289-1294, Sep. 2004.

[8] E. Román, R. Alonso, P. Ibáñez, S. Elorduizapatarietxe, and D. Goitia, "Intelligent PV module for grid-connected PV systems," IEEE Trans. Ind. Electron., vol. 53, no. 4, pp. 1066-1073, Aug. 2006.

[9] H. Patel and V. Agarwal, "Maximum power point tracking scheme for PV systems operating under partially shaded conditions," IEEE Trans. Ind. Electron., vol. 55, no. 4, pp. 1689-1698, Apr. 2008.

[10] H. Patel and V. Agarwal, "MATLAB-based modeling to study the effects of partial shading on PV array characteristics," IEEE Trans. Energy Convers., vol. 23, no. 1, pp. 302-310, Mar. 2008.

[11] J. W. Kimball and P. T. Krein, "Discrete-time ripple correlation control for maximum power point tracking," IEEE Trans. Power Electron., vol. 23, no. 5, pp. 2353-2362, Sep. 2008.

[12] J.-M. Kwon, B.-H. Kwon, and K.-H. Nam, "Grid-connected photovoltaic multistring PCS with PV current variation reduction control," IEEE Trans. Ind. Electron., vol. 56, no. 11, pp. 4381-4388, Nov. 2009.

Learning theories – behavioural, social & cultural, constructivism, cognitive

Learning is defined as a relatively permanent change in an individual's mind or behavior, voluntary or involuntary, that comes about through experience. Behaviorists define learning as a permanent change in behavior, one that takes place intentionally or unintentionally. Cognitive psychologists define learning as a change in knowledge, an internal mental activity that cannot be observed directly. Learning involves acquiring and modifying knowledge, skills, strategies, beliefs, attitudes, and behaviors in order to understand old or new information. Individuals learn skills from experiences that tend to take the form of social interactions, linguistic activity, or motor skills. Educational professionals define learning as an 'enduring change in behavior, or in the capacity to behave in a given fashion, which results from practice or other forms of experience'.

One may ask: how does learning happen? Learning happens every day, to every individual; it does not occur only in classrooms, colleges, or universities but can happen anywhere. Learning can occur through interacting with others, observing, or simply listening to a conversation. It happens through experiences good and bad, through experiences that provoke an emotional response, or through those that offer a moment of revelation. Behaviorist and cognitive theorists both believe that learning is affected by the environment in which an individual resides, but behaviorists focus more on the role of the environment, on how stimuli are presented and arranged and how responses are reinforced. Cognitive theorists agree with behaviorists on this point but focus more on the learner's abilities, beliefs, values, and attitudes. They believe that learning occurs by consolidation, the forming and strengthening of neural connections, which involves organization, rehearsal, elaboration, and emotion. Learning occurs in many ways; psychologists believe that learning, whether intentional or unintentional, is a key concept of living, which is why they developed the learning theories.

Learning theories are theoretical frameworks describing how information is absorbed, processed, and retained during learning. Learning is an important activity in the lives of individuals; it is the core of our educational process, even though learning begins outside the classroom. For many years psychologists have sought to understand what learning is, what its nature is, how it transpires, and how individuals influence learning in others through teaching and similar endeavors. Learning theories tend to be based on scientific evidence and are more valid than personal opinions or experiences. There are four basic types of theories used in educational psychology: behavioral, cognitive, social and cultural, and constructivism.

Behavioral Theory

The behavioral approach generally assumes that the outcome of learning is a change in behavior, and it emphasizes the effects of external events on the individual. Behaviorists believed that individuals have no free will and that the environment in which an individual is placed determines their behavior. They believed that individuals are born with a clean slate and that behaviors are learned from the environment. The learning theories of the behaviorists Pavlov, Guthrie, and Thorndike have historical importance for learning. Although they differ, each theory offers its own account of how associations form between stimuli and responses. Thorndike believed that responses to stimuli are strengthened when followed by a satisfying consequence. Guthrie reasoned that the relation between stimulus and response is established through pairing. Pavlov, who developed classical conditioning, demonstrated how stimuli can be conditioned to obtain certain responses by being paired with another stimulus. Behavioral theory is expressed in conditioning theories that explain learning in terms of environmental events, but classical conditioning is not the only conditioning theory.

B. F. Skinner developed operant conditioning, a form of conditioning based on the assumption that features of the environment serve as cues for responding. He believed that we learn to behave in certain ways as we operate on the environment. In operant conditioning, reinforcement strengthens responses and increases the likelihood of their occurring when the stimuli are present. Operant conditioning is a three-term contingency involving the antecedent (stimulus), the behavior (response), and the consequence. Consequences determine how individuals respond to environmental cues and can be good or bad for the individual: reinforcement increases a behavior, while punishment decreases it. Other operant concepts include generalization, discrimination, primary and secondary reinforcement, reinforcement schedules, and the Premack principle.

Shaping is another operant technique, a process used to alter behavior in individuals. Shaping works through successive approximations, reinforcing progress toward the target behavior; complex behaviors are formed by linking simple behaviors in three-term contingencies. Operant conditioning also involves self-regulation, the process by which an individual obtains stimulus and reinforcement control over themselves.

Cognitive Theory

The cognitive theory focuses on the inner activities of the mind. It states that knowledge is learned and that changes in knowledge make changes in behavior possible. Both behavioral and cognitive theory hold that reinforcement is important in learning, but for different reasons: behaviorists suggest that reinforcement strengthens responses, whereas cognitivists suggest that reinforcement is a source of feedback about what is likely to happen if behaviors are repeated or changed. The cognitive approach suggests that an important element in the learning process is the knowledge an individual brings to a situation. Cognitive theorists believe that the information we already know determines what we will perceive, learn, remember, and forget.

Three main theorists are associated with the Gestalt approach to cognition: Wertheimer, Köhler, and Koffka. The Gestalt learning theory proposes that learning consists of grasping a structural whole and is not just a mechanistic response to a stimulus; its main concept is that when we process sensory stimuli we are aware of the configuration, the overall pattern, the whole. Köhler held that learning can occur by 'sudden comprehension' (insight) rather than by gradual understanding; such learning can happen without any reinforcement and without any need for review, training, or investigation. Koffka supported the view that animals can be participants in learning because they are similar to humans in many ways. He believed that there was no such thing as meaningless learning, and that the interdependence of facts was more important than knowing many individual facts.

Social & Cultural theory

The social and cultural theory concerns how individual functioning is related to cultural, institutional, and historical context. Vygotsky, a psychologist in Russia, identified the social and cultural theory, also known as sociocultural theory. Sociocultural theory is known as a combining theory in psychology because it discusses the important contributions society makes to individual development alongside the cognitive views of Piaget. The theory suggests that learning occurs through the interactions between people. Lev Vygotsky believed that parents, caregivers, peers, and culture play an important role in the development of higher-order functions. According to Vygotsky, 'Every function in the child's cultural development appears twice: first, on the social level, and later, on the individual level.' The sociocultural theory thus focuses not only on how adults or peers influence learning but on how an individual's culture can affect how learning takes place.

According to Vygotsky, children are born with basic constraints on their minds. He believed that each culture provides 'tools of intellectual adaptation' to each individual; these adaptations allow children to use their basic mental abilities in ways suited to their culture, for example, a culture may use particular tools to emphasize memorization strategies. Vygotsky worked in parallel with Piaget in developing cognitive theory, but their theories differ in certain ways. First, Piaget's theory was largely based on how children's interactions and explorations influence development, whereas Vygotsky placed greater emphasis on the social factors that influence development. Another difference is that Vygotsky suggested cognitive development can differ between cultures, while Piaget's theory suggested that development is universal. One important concept in sociocultural theory is the zone of proximal development: the gap between the level of independent problem solving and the level of potential development reached through problem solving under the guidance of an adult or in collaboration with peers. It includes the skills that a person cannot yet understand or perform on their own but is capable of learning with guidance.

Constructivism Theory

The constructivist learning theory describes how learners construct knowledge from previous experiences. Constructivism is often associated with a pedagogic approach that promotes active learning, or learning by doing. Construction is central to learning because constructivism focuses on the individual's own thinking about learning. The constructivist theory argues that individuals generate knowledge from interactions between their experiences and their ideas. Constructivism examines the interactions between individuals from infancy to adulthood to try to comprehend how learning arises from experiences and behavior patterns. The constructivist theory is attributed to Jean Piaget, who articulated the mechanisms by which knowledge is internalized by learners. Piaget stated that through the processes of adaptation, accommodation and assimilation, individuals construct new knowledge from past experiences.

According to Piaget's theory of constructivism, accommodation is the process of reframing one's mental view of the world to fit in new experiences. Accommodation can be understood as the case where failure leads to learning: if we have an idea that the world works only one way and that way fails us, we learn from our failure, or from the failures of others. The constructivist theory describes how learning happens whether individuals learn by using their experiences to understand information or by following instructions to construct something; in both cases, learners construct knowledge from experience. The constructivist theory tends to be associated with active learning because individuals learn from experiences, from things they have already done. Several cognitive psychologists have argued, however, that constructivist theories are misleading or contradict known findings.

As an educator I can facilitate learning by encouraging my students and helping them develop to their fullest potential. I am compelled to view and assess learning styles so that I can meet every student's needs within the classroom, to allow students to learn gradually, and to help them thrive academically and socially in and out of the classroom. The four learning theories discussed in this paper all contribute to my understanding of learning; despite their differences, each gave me new insight into how learning occurs in and out of a class, college, or university. From the behaviorist perspective, learning is a change in behavior brought about by external events, as in Pavlov's classical conditioning experiment, where he taught dogs to salivate when they heard the tone of a tuning fork. Using both conditioning theories in the classroom, we can train students to behave and respond in the way we want them to.

The theory that can be applied in music is the behaviorist theory, because music incorporates both knowledge and feeling. Music sets the atmosphere of an environment: if a relaxing song is played at home, it puts the individual in a relaxed mood. In the behaviorist theory the environment influences the individual's response, so a relaxing song will evoke a relaxed response, just as Pavlov's classical conditioning experiment provoked salivation at the tone of a tuning fork. In music, classical conditioning can also be used to condition students to like or enjoy a piece of music: if a classical piece the students do not know or like is played repeatedly, they gain an understanding of it and eventually enjoy it because of the repetition. Their response to the song might take the form of moving their bodies, tapping their feet, or nodding their heads.

Policies And Enactments In Malaysia For Sustainability

As part of the government's effort to promote a safe and sustainable environment for future generations, the Government of Malaysia has taken legal and institutional measures to ensure that environmental factors are included in the project-planning requirements for any future development. These legislative arrangements require businesses and industries established in the country to prioritize environmental requirements and to undergo strict assessments after registration and during operation.

Before any business can legally operate in any state in Malaysia, businesses and corporations are required to comply strictly with a list of licensing requirements, whether industry- or sector-specific licences or activity-specific licences, depending on the business in operation. These licences vary from registrations to approvals and permits. The legislation that has been enforced forms the foundation supporting this variety of licences, maintaining a balance between social development and environmental conservation.

Since environmental protection was made a national policy in 2002, a set of principles has guided harmonious economic development: stewardship of the environment, conservation of nature's diversity, continuous action to improve the quality of the environment, sustainable use of existing natural resource reserves, environmentally informed decision-making wherever possible, responsibility of the private sector, continuous commitment and accountability, and active participation in the international community.

The Environmental Quality Act (EQA) was enacted in 1974 specifically to restrict the illegal disposal of waste into the environment. To this day, this legislation comprises a total of 38 sets of regulations and orders, as per Appendix A, all relating to the prevention and control of pollution and the enhancement of environmental quality in Malaysia. Several requirements involve the Director General of Environmental Quality in project implementation. First is the Environmental Impact Assessment report, mentioned under section 34A of the EQA. Then there are the site suitability evaluations; the written permission that allows the construction of businesses requiring waste treatment and proper waste disposal (crude oil mills, dyeing factories) under section 19 of the EQA; the written approval for the installation of waste disposal units such as incinerators, burners, or chimneys under the Environmental Quality Regulations (1978) and the EQA (1974); and finally the licence to operate a business under section 18 of the EQA.

The Environmental Quality Act (EQA) 1974 discharge quality standards specify standards for the discharge of waste upstream and downstream into natural watercourses such as rivers. The effluent quality (treated industrial wastewater) must abide by the minimum requirements set by the Environmental Quality Act 1974 and the limits set down by the Environmental Quality Regulations (Sewage and Industrial Effluent Regulations, 1979). In general, there are a few types of effluents, namely:

1. General manufacturing effluents

2. Specific effluents

3. General Service effluents

4. Intermittent effluents

The design of industrial effluent systems must therefore consider many factors, such as production capacity, daily volume of effluent, and the composition of make-up water used by the plant process. Legislation was implemented to control effluent quality through Industrial Effluent Treatment Systems (IETS). Under section 4 of the Industrial Effluent Regulations 2009 (IE2009), a business premises needs to notify the Department of Environment (DOE) of any new source of industrial effluent, of any increase in effluent quantity resulting from an increase in production, and of any upgrading of an existing IETS resulting from worsening effluent quality.

The Environmental Impact Assessment (EIA) Order 1987 requires a study and report, subject to the approval of the Director General of Environment, before approval is given by any federal or state government authority. The approving authority then decides whether the project is worth proceeding with. The members of an EIA study should be competent individuals with an appropriate background who are legally registered with the Department of Environment (DOE); suitable candidates range from subject consultants to assistant consultants. If a report is filed by an individual not registered with the DOE, the EIA report is invalid and will automatically be rejected. In Malaysia, two procedures have been adopted: the Preliminary EIA and the Detailed EIA.

A Preliminary EIA report is reviewed by government agencies such as the Department of Environment state office and other related government agencies. The proper format and procedures for a Preliminary EIA are shown in Appendix F1 of the Environmental Quality (Environmental Impact Assessment) Order 1987. The Detailed EIA requires the project leader to submit Terms of Reference (TOR) to the DOE for review and approval; the proper format and procedures are listed in Appendix C of the same Order. For industrial projects, the EIA assists in scouting site locations as well as in taking the necessary environmental control and mitigation approaches. The EIA predicts environmental impacts and recommends project plans, ultimately protecting the environment at a cost that benefits both the project and the surrounding communities.

In 1993, an enactment for sewerage services, the Sewerage Services Act 1993, was passed by the Duli Yang Maha Mulia Seri Paduka Baginda Yang di-Pertuan Agong with the advice and consent of the Dewan Negara and Dewan Rakyat assembled in Parliament, and it applies throughout Malaysia. The enactment addresses the authority of the state and federal governments in areas where both parties agree to manage sewerage services throughout the state together. Part 3 of the Act sets out the general role of the Director General and their deputies and the powers entrusted to them, whether to prescribe standards for sewerage systems, to ensure that the functions and obligations of every service contractor who has entered an agreement under section 7 are properly carried out throughout Malaysia, or to fulfil other responsibilities that may concern them.

Part 8 of the Sewerage Services Act 1993 covers the approval that any business premises must obtain from the Director General if a sewerage system or septic tank is to be constructed. A plan is to be submitted, along with the specifications of the premises required under the written law of the local authority or other relevant regulatory bodies, before approval for the construction or erection of the building is granted by the approving authority, who then submits the documents to the Director General. The Director General may reject any plan and specifications that do not comply with the Act. The person who submitted the plans may re-submit them within a period specified by the Director General; if the amended plans are not submitted within that period, the plan is deemed withdrawn, without prejudice to the individual's right to submit a new or revised plan.

Beyond the policies and enactments mentioned above, there is also the National Water Services Commission (SPAN), introduced to regulate water services and sewerage treatment effectively and transparently through the implementation of the Water Services Industry Act 2006 (Act 655), working towards sustainable and efficient water provision for all industries. There is also a privately owned company, Indah Water, which is responsible for developing an efficient sewerage system for Peninsular Malaysia.

Analysing article on “hotel guests’ towel reuse behavior”

1. Because towel reuse saves energy and reduces detergent use, it makes sense to study ways in which the effectiveness of messages promoting towel reuse may be improved, as they may contribute to a cleaner environment. Secondly, very little research has addressed either descriptive-norm effects on towel reuse behavior or the more specific effects of provincial versus general norms. Earlier studies are inconsistent with regard to the relative effects of provincial versus general descriptive norms, and they did not show an overall benefit of the descriptive-norm conditions over a standard environmental message. Because of the wide impact of the Goldstein paper, and because of the inconsistencies across studies using similar designs, the studies conducted by Bohner and Schlüter (2014) are highly relevant.

2. The independent variable is the message condition. A distinction is made between a standard environmental message: "Help to save the environment. Every day we clean a great number of towels, many of them unused. Please help us to protect the environment. You can join us in this program to help us to protect the environment by reusing your towel during your stay.", and descriptive norm messages: "Join your fellow guests in helping to save the environment. In a study currently conducted [conducted in the fall of 2009], 75% of the guests [guests who stayed in this room (xxx)] participated in our new resource savings program by using their towel more than once. You can join your fellow guests in this program to help save the environment by reusing your towel during your stay." (Bohner & Schlüter, 2014). The bracketed text represents the two levels of the general versus provincial norm manipulation; together they provide four different versions of the descriptive norm message. In the provincial norm conditions, 'xxx' was replaced with the actual room number. Finally, each message version ended with the same exact instructions on how to participate or not to participate. The dependent variable is towel reuse, measured at the nominal level.
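Since the dependent variable is nominal (reused vs. did not reuse), counts per condition would typically be compared with a chi-square test. The sketch below, in Python with SciPy, shows the shape of such an analysis; the counts are invented for illustration and are not the study's data.

```python
# Compare towel-reuse frequencies across message conditions with a
# chi-square test of independence on a contingency table.

from scipy.stats import chi2_contingency

# Rows: message conditions; columns: [reused, did not reuse].
observed = [
    [82, 18],  # standard environmental message (hypothetical counts)
    [75, 25],  # general descriptive norm (hypothetical counts)
    [79, 21],  # provincial descriptive norm (hypothetical counts)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
```

A significant result would indicate that reuse rates depend on which message guests received; the study's actual analyses are reported in Bohner and Schlüter (2014).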

3. After Study 1, the researchers did not know how guests would have behaved if there had been no message at all urging them to reuse towels. Therefore, the design of Study 2 was very similar to Study 1, with the addition of a one-week, no-message baseline observation period that preceded the experimental conditions. In order to test whether the standard and normative messages would increase towel reuse rates compared to a no-intervention baseline, the study was repeated in a hotel that had no towel-reuse program running.

4. The results of both studies show that reuse rates are high overall, and that both standard and descriptive norm messages increase reuse rates in comparison with a no-message baseline. However, both studies show that the standard environmental message was highly effective, even more effective than the descriptive norm messages. The results of Study 2 show a nonsignificant trend toward greater effectiveness of the provincial norm than the general norm; in this respect Study 2 replicated an important finding of Goldstein et al., although it differs from the results of the authors' own Study 1. The effects of proximity are also inconsistent across other studies, and there are cultural and conceptual issues in comparing the present findings with previous findings.

5. Because cultural background has not been used to influence towel reuse in Germany before, estimates from a separate group of pilot participants would be collected to determine whether presenting a descriptive norm of 75% would appear credible and effective. A sample of adults would be recruited in the area where the hotels of Studies 1 and 2 were located. The pilot participants would have to indicate their ethnic origin. This is purposive sampling, because we assume beforehand that the participants have some kind of cultural background. They can choose from German, Dutch, Belgian, French, Polish, Swiss, Australian, Danish, or other. With this kind of pilot testing, towel reuse behavior can be captured while taking into account the cultural background of the people studied.

6. Because behavioral traces were to be recorded anonymously and unobtrusively, the ethics committee waived the need for written informed consent from the participants, which strengthens the extent to which ethical guidelines are followed. Second, the staff members who kept track of towel reuse were aware of the different messages being used but were unaware of any hypothesis, and the authors have declared that no competing interests exist; this also strengthens compliance, since, as described in 'The Netherlands Code of Conduct for Scientific Practice', "Scientific activities are performed scrupulously, unaffected by mounting pressure to achieve" (Association of Universities in the Netherlands, 2012). Third, the presented information is verifiable: the authors made clear what the data and conclusions are based on, where they were derived from, and how they can be verified, which again strengthens compliance with the ethical guidelines. Finally, the article was capable of being verified and was peer reviewed.

7. The housekeeping staff was thoroughly instructed in how to record reuse rates. To keep procedures as simple as possible, staff members kept track of towels placed on the towel rack on their usual worksheets. Observer reliability is high because the observers were trained and there were clear definitions and classifications of behavior. Second, different earlier studies underlie this study, so there is established-measures reliability: the results obtained from the measure being developed can be compared with the results obtained from a known, tested measure designed for the same purpose. This also enhances reliability.

8. The study gains validity from the fact that Study 1 was followed up by Study 2. On the other hand, validity is reduced because in Study 2 the hotel was a 3-star hotel instead of a 4-star hotel, and Study 2 had fewer participants than Study 1; this concerns criterion validity. We can also speak of high concurrent validity, because Studies 1 and 2 both showed that reuse rates are high overall. Finally, there is predictive validity: the studies predict real-world outcomes, because Study 2 also tested how guests would behave if there was no message at all urging them to reuse towels.

9. I think the article is of moderate quality. As the authors themselves note, they did not take ethnicity into account, so there is a chance of bias, which leads to population stratification. Despite highly similar procedures, the two studies yielded partly different findings compared with the results obtained in a U.S. hotel. The reuse rates in the current studies were much higher than in the U.S. studies; indeed, even the no-message baseline in Study 2 was higher than the reuse rates in the different message conditions of Goldstein et al. (2008). This may reflect a general difference in environmental attitudes and behaviors between countries. Finally, provincial versus general norm effects were inconsistent across studies, and the alternative manipulation of temporal proximity showed no clear results.

Reference list

Bohner, G., & Schlüter, L. E. (2014). A room with a viewpoint revisited: Descriptive norms and hotel guests' towel reuse behavior. PLOS ONE, 9: e104086. doi:10.1371/journal.pone.0104086
Goldstein, N. J., Cialdini, R. B., & Griskevicius, V. (2008). A room with a viewpoint: Using social norms to motivate environmental conservation in hotels. Journal of Consumer Research, 35: 472-482. doi:10.1086/586910
Association of Universities in the Netherlands (VSNU). (2012). The Netherlands Code of Conduct for Scientific Practice: Principles of good scientific teaching and research. Available at: http://www.vsnu.nl/files/documenten/Domeinen/Onderzoek/The_Netherlands_Code_of_Conduct_for_Scientific_Practice_2012.pdf

Effects of chemical fertilizers

In order for plants to grow and thrive, they need certain chemical elements. The most important are carbon, hydrogen, and oxygen, which are available from air and water and are therefore plentiful. Plants also need three main macronutrients in large quantities which they cannot produce themselves: nitrogen, phosphorus, and potassium (also known as potash). These elements are the necessary building blocks; without them a plant simply cannot grow, because it cannot make the pieces it needs on its own. If any of the macronutrients is missing or difficult to obtain from the soil, the plant's ability to grow is limited.

Due to this, many people choose to supply the elements that plants need through fertilizers. Both organic and synthetic fertilizers can be added to soil or land to increase fertility. Organic fertilizers are made from natural materials such as peat moss, bone, seaweed, composted plant material, and animal manure, ingredients naturally high in nitrogen, phosphorus, potassium, or all of these elements. Synthetic fertilizers are manufactured chemically or produced from rocks and minerals.

Most farmers and gardeners use a combination of chemical and natural fertilizers. Cow and horse manure are great fertilizers as long as they are mixed with compost and allowed to age. As the manure breaks down, it adds the nutrients plants need and enriches the soil, helping it retain moisture and reducing runoff. Organic fertilizers, however, do not break down quickly; they can take months to release nutrients, but they tend to improve soil structure and provide benefits that last for multiple growing seasons. Synthetic fertilizers, on the other hand, dissolve in water and can be used by plants immediately; although they help plants get off to a quick start, they usually do little to improve overall soil health in the long run.

However, despite the fact that many people use fertilizers, few consider their drawbacks. Most people are not aware that fertilizer can harm plants and other living things in the environment, and the ecosystem in general. Fertilizers can hurt the plants themselves if they provide the elements in the wrong ratios, and while they may be beneficial to plants, they are not always as healthy for the rest of the environment. In order to be as environmentally conscious as possible, we need to ask ourselves: what are the effects of synthetic inorganic fertilizers on terrestrial and aquatic ecosystems?

Phosphorus, for example, can run off and degrade waterways. Organic fertilizers such as cow manure or homemade compost, by contrast, are excellent all-purpose alternatives.



The science behind the issue

Most nitrogen fertilizers are obtained from synthetic ammonia (NH3), used either in water solution or as a gas; ammonia is sometimes also converted into salts such as ammonium nitrate, ammonium sulfate, and ammonium phosphate. Phosphorus fertilizers are derived from calcium phosphate obtained from rock or bone, or by treating calcium phosphate with sulfuric or phosphoric acid. Potassium fertilizers are mined from potash deposits.

Many fertilizers cause a loss of oxygen in aquatic systems because of runoff: high amounts of nitrogen end up in bodies of water, leading to an excess of algae, which can harm the wildlife in the water. Furthermore, some fertilizers include toxic heavy metals that can be hazardous if the runoff reaches the water, contaminating the environment or harming the animals found in it.

The biggest issue is groundwater contamination: because nitrogen fertilizers break down into nitrates that travel easily through the soil, they can remain in the water for many years, and the excess nitrogen that accumulates over the years has negative effects. Some fertilizers also emit ammonia and contribute to acid rain, groundwater contamination, and ozone depletion by releasing nitrous oxide through the denitrification process.

Nitrogen groundwater contamination also leads to what are known as marine "dead zones". Because the nitrates cause an increase in plant life, oxygen decreases, which in turn starves the fish and crustaceans in the environment. This affects not only the aquatic ecosystem and food chain, but also the local communities that depend on food from these areas.

Furthermore, the use of synthetic nitrogen has a negative effect because it leads to a decline in the soil's ability to store organic nitrogen. The organic matter then leaches away, contaminating water in the form of nitrates and entering the atmosphere as nitrous oxide (N2O), an extremely powerful greenhouse gas. The loss of organic matter injures the soil, making it more prone to compaction, more vulnerable to runoff and erosion, and less able to support the growth of stabilizing plant roots.

Fertilizer also damages soil biodiversity because it diminishes the role of nitrogen-fixing bacteria and amplifies everything that feeds on nitrogen. These feeders speed up the decomposition of organic matter, changing the soil's physical structure: there is less pore space, and the soil becomes less efficient at storing water and air. More irrigation is then needed, and water drains away nutrients; with less oxygen, the growth of soil microorganisms also slows.

A study published in the Journal of Environmental Quality discussed the long-term use of synthetic fertilizers and their negative impact on soil structure. Researchers from Kansas State University observed inorganic fertilizer's effects on soil properties over the past 50 years. They found that although adding nitrogen and phosphorus increases soil organic carbon, the benefits are outweighed by the costs, such as a decrease in soil macroaggregates, which leaves the soil less resistant to erosion and less able to let water move through it. They attributed the decrease in soil aggregates to the ammonium ions in synthetic fertilizer, which cause soil particles to separate.

How the issue should be portrayed to the public

There is a lot of information available on the topic, and I quickly found many good sources for my research. However, I believe that although the research, the studies, and the proof are out there, people are uneducated about the topic. Many people do not stop to wonder about the negative effects fertilizer could have, because of the common misconception that it only does good for the environment: they think fertilizer will simply help their plants and crops grow, that it is necessary, and so they do not question it. Another problem is that even people who do know about the detriments continue to use it because they want their lawns and gardens to look beautiful, or they want to continue successfully growing their crops.

I also believe that the effects of chemical fertilizers are not widely discussed because they are largely untested. Although there is plenty of information about the risk of groundwater contamination and the environmental issues that fertilizers entail, research is still missing on how they affect human health. So instead of trying organic fertilizers, people continue using synthetic ones.

We should be aware that fertilizers affect not only bodies of water, the land, and the animals that live there, but humans as well, and the future of our entire world, because the damage becomes a ripple effect.

However, knowing and understanding the effects of chemical fertilizers doesn’t help much if we do not act. We should support organic and sustainable agriculture and become better informed about what we can do or change to help our environment.

Energy production from MFCs

Introduction

Two of the most pressing environmental and economic issues in today’s society are the disposal of waste and the creation of energy in a way that is sustainable for our future. Among the forms of waste we must dispose of, one of the most precarious comes from wastewater treatment. Wastewater sludge contains varied amounts of organic chemicals, toxic metals, chemical irritants, and pathogens which, if disposed of improperly, can become a major threat to the environment.

Wastewater sludge is commonly used as fertilizer for agriculture and silviculture, but especially with a growing population, fertilizer use alone is not a sustainable way to dispose of the necessary amount of sludge. Additionally, fossil fuels are our planet’s main source of energy, and they are nonrenewable resources that are especially damaging to the environment. With environmental impacts such as climate change resulting from these fossil fuels, as well as the depletion of available reserves, there is currently a major societal push toward finding a source of energy that is both environmentally friendly and economically sound. Electricity generation using microbial fuel cells (MFCs) may be the solution to both issues.

MFCs work by using the bonds in organic compounds to create electrical energy through catalytic reactions carried out by microorganisms under anaerobic conditions. Along with creating energy, MFCs can be used in wastewater treatment systems to break down organic matter. However, the effectiveness of these cells is limited by their low power density and high maintenance cost (Du et al., 2007). This report will break down the factors affecting energy production from MFCs, analyze their environmental and economic costs and benefits, and ultimately assess the feasibility of using MFCs both today and in the future as the technology progresses.

Background

Electrical generation in an MFC begins with introducing substrate into an anaerobic chamber that acts as the anode of the fuel cell. Microbes present in this chamber oxidize the substrate, releasing protons, electrons, and carbon dioxide (Du et al., 2007). The cathode of the fuel cell is separated from the anode by a proton exchange membrane, which allows the protons to cross freely into the cathode region (Du et al., 2007). This separation of protons and electrons creates a charge imbalance in the fuel cell: the excess electrons in the anode region give it a negative charge, while the protons accumulating in the cathode region give it a positive charge. An external circuit connects the anode and the cathode, giving the electrons a path to balance the charge. When the electrons pass through the resistance of the circuit, a current is generated, creating power that can then be stored or used (Du et al., 2007).
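
To make the last step concrete, here is a minimal sketch of the power arithmetic: a current driven through the external load dissipates power P = I^2 * R. All numerical values below are hypothetical, chosen only to illustrate the calculation; the report does not give operating figures.

    # Power delivered by an MFC through its external circuit.
    # All values are hypothetical illustrations, not measurements from the text.

    cell_voltage = 0.5         # volts; MFC operating voltages are typically below 1 V
    external_resistance = 100  # ohms; the load connecting anode and cathode

    # Ohm's law: the voltage across the load drives a current through it.
    current = cell_voltage / external_resistance      # amperes
    # The power dissipated in the load is P = I^2 * R.
    power = current ** 2 * external_resistance        # watts

    anode_area = 0.01  # m^2; hypothetical electrode area, for a power density figure
    print(f"Current: {current * 1000:.1f} mA")                # 5.0 mA
    print(f"Power: {power * 1000:.2f} mW")                    # 2.50 mW
    print(f"Power density: {power / anode_area:.2f} W/m^2")   # 0.25 W/m^2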

This process is only possible in the absence of oxygen, which requires that the anode chamber be strictly anaerobic (Du et al., 2007). It can be difficult to remove the oxygen from the influent, but the return on this work far exceeds the cost. Converting the biomass aerobically yields carbon dioxide and water, which are difficult to extract energy from (Pham et al., 2006). When the process is carried out anaerobically, energy can be generated through either combustion or fuel cell conversion, which capture 35% and 90% of the potential energy, respectively.
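
The 35% versus 90% capture figures from the text imply a large per-batch difference in recovered energy. Here is a quick back-of-the-envelope comparison, with a hypothetical substrate energy content:

    # Comparison of the two anaerobic recovery routes named above.
    # The 35% and 90% capture fractions come from the text; the substrate
    # energy content is a hypothetical input for illustration only.

    substrate_energy_kj = 1000.0   # kJ of chemical energy in a batch of substrate

    combustion_yield = substrate_energy_kj * 0.35   # energy captured by combustion
    fuel_cell_yield = substrate_energy_kj * 0.90    # energy captured by fuel cell conversion

    print(f"Combustion recovers: {combustion_yield:.0f} kJ")                  # 350 kJ
    print(f"Fuel cell recovers:  {fuel_cell_yield:.0f} kJ")                   # 900 kJ
    print(f"Fuel cell advantage: {fuel_cell_yield / combustion_yield:.2f}x")  # 2.57x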

When calculating greenhouse gas emissions, the carbon dioxide emitted during substrate oxidation in the anaerobic chamber of an MFC is ignored (Du et al., 2007). This carbon dioxide is not counted because it is part of the natural carbon cycle: it was absorbed from the atmosphere by plants, which were either consumed directly by humans or by animals that were eventually consumed by humans. Through the digestion process it was converted into waste, which then entered the wastewater system. The carbon dioxide released by the MFC returns to the atmosphere, where it can be absorbed by plants again, ready to begin the cycle anew. The net carbon dioxide lost or gained throughout this cycle is considered negligible compared to the amount of carbon dioxide being produced by fossil fuels (Graven, 2016). That carbon has been stored in solid form beneath the Earth’s surface for millions of years, making its introduction into the atmosphere a destabilizing process.

Nutrient pollution

In order to grow and survive, living things need nutrients.  Rising nutrient concentrations, however, impair the uses for which a body of water is designated.  Nutrient pollution is a form of water pollution that refers to contamination by excessive nutrient concentrations, which act as fertilizers and cause excessive growth of algae.  Nutrients come from many different places: they can appear naturally as a result of rock weathering, or from the ocean through the mixing of water currents.

Some water bodies are naturally high in nutrients, for example where the bedrock has a surplus of phosphorus. In such water bodies a dangerous cyanobacterium, blue-green algae, produces toxins that can be deadly to animals and people. Toxins produced by cyanobacteria can harm the “nervous system, cause stomach and intestinal illness and kidney disease, trigger allergic responses and damage the liver.”

Nitrogen is an essential nutrient used by all living things. Over the past century, population growth has increased the demand for food and energy, and meeting these demands has increased the amount of nitrogen in the environment. Nitrate, the most common form of nitrogen, is directly toxic to humans: infants who drink water with high nitrate levels can develop a life-threatening blood disorder called blue baby syndrome, and high nitrate levels in water can also affect thyroid function in adults and increase the risk of thyroid cancer. Excess nitrogen is a common drinking-water contaminant in agricultural areas. Nitrogen pollution in the air from burning fossil fuels contributes to respiratory problems for children, the elderly, and people with lung ailments, and also harms marine life.

Nitrogen pollution has a number of consequences in coastal marine ecosystems. Among these consequences is a change in the types and species of plants that make up the organic matter. Food generates nitrogen in the environment as a product of both food production and food consumption. Food production leaves a legacy of nitrogen in the regions where it is produced: it is estimated that 10 times more nitrogen is used during the food production process than is ultimately consumed by humans as protein. Much of this additional nitrogen is applied as fertilizer that can run off into groundwater, rivers, and coastal waters. The production of animal protein adds substantial quantities of nitrogen to the environment in the form of nitrogen-rich manure that decreases water quality in agricultural areas.

Once food is consumed, it can contribute to pollution through the production and discharge of sewage; “humans do not utilize all of the nitrogen contained in food.” The remaining nitrogen is lost as waste to septic systems or wastewater treatment plants. Since most septic systems and treatment plants do not effectively remove nitrogen from the waste, nitrogen flows into rivers and coastal waters, where it contributes to water quality problems. Once reactive nitrogen enters a watershed in food or fertilizer, some of it is retained within the landscape, some returns to the atmosphere, and some flows downstream to coastal estuaries. The contribution of nitrogen to coastal waters from “atmospheric deposition includes nitrogen that is deposited directly to the estuary as well as nitrogen deposited on the watershed that ultimately is transported downstream to the estuary.”

Coastal ecosystems are naturally very rich in plant and animal life. However, since the richness of saltwater ecosystems is naturally limited by the availability of nitrogen, excess nitrogen can lead to a condition of over-enrichment. The over-enrichment of estuaries promotes the excessive growth of algae, which can create dead zones where marine life dies out.
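
To put the 10-to-1 figure above in concrete terms, here is a rough mass-balance sketch. The ratio comes from the text; the per-person protein-nitrogen figure is a hypothetical input used only for illustration:

    # Rough nitrogen budget for the food system, as described above.
    # The 10:1 production-to-consumption ratio is from the text; the
    # per-capita protein nitrogen figure is hypothetical.

    protein_n_consumed_kg = 4.0           # kg N per person per year in dietary protein (assumed)
    production_to_consumption_ratio = 10  # ~10x more N used in production than consumed

    n_mobilized = protein_n_consumed_kg * production_to_consumption_ratio
    n_left_in_environment = n_mobilized - protein_n_consumed_kg

    print(f"N mobilized to produce the food: {n_mobilized:.0f} kg/person/year")          # 40
    print(f"N left in the environment:       {n_left_in_environment:.0f} kg/person/year") # 36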

Ways to reduce excess nitrogen include targeting nitrogen in wastewater. One solution is to add biological nitrogen removal (BNR) at wastewater plants. BNR is a “process used for nitrogen and phosphorus removal from wastewater before it is discharged into surface or groundwater.”  The BNR process would filter out excess nitrogen and phosphorus that could potentially harm humans and wildlife.
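
As a simple illustration of what a BNR stage accomplishes, consider a plant discharging nitrogen-laden effluent. Both numbers below are hypothetical; actual influent concentrations and removal efficiencies vary by plant:

    # Effect of adding biological nitrogen removal (BNR), as described above.
    # The influent concentration and removal efficiency are hypothetical.

    influent_total_n = 40.0        # mg/L total nitrogen entering the plant
    bnr_removal_efficiency = 0.80  # fraction of nitrogen removed by the BNR stage

    effluent_n = influent_total_n * (1 - bnr_removal_efficiency)
    removed_n = influent_total_n - effluent_n

    print(f"Nitrogen removed:  {removed_n:.1f} mg/L")             # 32.0 mg/L
    print(f"Effluent nitrogen: {effluent_n:.1f} mg/L discharged") # 8.0 mg/L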

Nitrogen pollution is increasing and contributes to many of our environmental issues. As a single nitrogen molecule moves through the environment, it adds to declining air quality, the acidification of soils and surface waters, and the over-enrichment of coastal waters. Solving the nitrogen problem will require a “multi-pronged approach.” Adding nitrogen control technology to treatment plants would significantly reduce nitrogen pollution in river runoff and the ocean. Policy efforts should include reducing airborne nitrogen emissions from vehicles and electric utilities, and increased investment in improved wastewater treatment to address nitrogen pollution.

Officer-involved shootings

Pardon the pun, but this topic really is a loaded gun waiting to go off. It is such a heated debate that there is no way to truly settle it. First, let’s establish some definitions so we are on the same page. A firearm is just another name for a gun or rifle. A handgun is a gun that can be used with one hand. Now that we have that, we can move on. I am going to talk a little about the history of firearms, then about handguns, and then about what a police officer involved in a shooting has to go through, or, as they call it, an Officer-Involved Incident, as well as the guidelines that officers follow.

In 1232 the Chinese used gunpowder-filled tubes as a weapon, though it was more of a rocket than a gun. Fast forward to 1364 and we see what we could call a gun, and from there it evolved into what we know today. As I said before, I am going to talk about handguns such as the pistol. There have been a lot of mass shootings in the United States over the last 10 years; I am not going to say there have not been, for that would be a lie. At the same time, they account for only a small part of all crime in the US: in a 21-year span there were only about 70 mass shootings, and there is more other crime in a single year than in all the years I looked at combined. That is not to say mass shootings are not a problem, but the problem is not the guns; it is the people holding the guns. While I am all for gun control, I feel there should be tighter requirements for getting a firearm. I am someone who does not care much about guns, but there are a lot of people who do. A gun is nothing more than a tool: in the right hands it is a great tool, and in the wrong hands it is a deadly one. But that can be said about any tool, really.

Moving on to open carry versus concealed carry: just as the names imply, if anyone can see the gun you are openly carrying it; if it is hidden, you are concealing it. I will also talk about some of the restrictions I feel should be put in place for firearms. I know this is not realistic in the slightest, as there are other, illegal ways to get firearms, but it would help control and maybe hinder them. Please keep in mind this is for a perfect world in which there is no black market. First, limit the number of guns a person can own or keep at an address. I know that will not stop mass shootings, but shooters would have fewer guns to fire. Ammunition is something else that can be limited; again, this will not stop shootings either, but it would lower the number of people hurt. Those two things would help no matter what. One of the biggest things the government could do is make it harder for people to get guns. How, you may ask, can they do that? The simple answer is that they really can’t, but again, in a perfect world, anyone who wants a gun would take a state-of-mind test to show they are fit to have a deadly tool in their hands. NICS does the background check on anyone who wants to buy a gun.

Now that that is out of the way, we can move on to the open versus concealed carry debate. Both have good points and bad points; that is the way of anything. All states have concealed carry laws under which you must have a permit. In some states you can open carry without a permit; in some states you need a permit to carry a gun, open or not; and in others you can only carry concealed. Kansas is a state where you don’t need the permit, while Texas is a concealed-carry-only state. The funny part is that Oklahoma, which sits between the two, requires a permit to carry no matter what. I am not going to say that Texas is right in what it does, or Kansas, or even Oklahoma; that is up to you. My own view is that Oklahoma has the right idea: it does not take anything away from gun owners, but at the same time it keeps the average Joe on the street safe, keeps any nutcase from carrying a gun, and makes sure everyone is being safe with their guns. That is really all I am going to say on that. When you break down which states fall into which group, more allow carry without a permit than require one, and even fewer will not allow open carry at all: thirty-one states let you carry without a permit, twelve require a permit, and seven, D.C. among them, do not let you open carry at all. Let me clear up something before I go on: I am not for the outlawing of guns, just for tighter control on getting one; on the flip side, I know that if a person really wants a gun, they will get it no matter what, end of story.

Any time an officer is involved in a shooting or anything like it, departments turn to the IACP, the International Association of Chiefs of Police, which publishes a set of guidelines that they follow. I found a former chief of police of Herington and asked him how he felt; keep in mind this was not a shooting he had to deal with. He said, “I never had to deal with a shooting while the chief of Herington, but I had to go through it with an officer.” Gordon, the man I talked to, said that for this officer-involved incident, while he could have used another agency, he decided to use the KBI because they had more manpower and more know-how to deal with it; he felt it was the better choice. One reason to bring in another agency is so the public can’t say the department is trying to protect the officer; the other reason is that an outside agency is unbiased. A larger city may have an agency that does nothing but handle officer involvement, but as a small town we don’t, and I don’t know for sure about the larger cities.

You can find the full procedure for what happens when an officer is involved in a shooting online; I will give a short summary here. Please keep in mind that the IACP recommendations are nothing more than guidelines and should be applied case by case. One of the first things that needs to happen is sealing off the area so no one can tamper with evidence. This may or may not include the officer’s gun, depending on other factors. All witness statements are taken independently, and the witnesses are kept apart to make sure they do not coordinate their accounts. Gordon also said any videos, such as dash-cam footage, are taken and kept safe so they can’t be tampered with. None of this is done by officers who work with the officer involved in the shooting; it is done by the agency conducting the investigation. The IACP also recommends that the officer see a psychologist, to help prevent psychological and emotional problems from developing down the road. Let me pause here and explain why I keep saying “agency” rather than something else: Gordon said he “feels that any officer incident should be handled by an outside source” so the investigation is fair. Note that the officer is not required to see anyone, and the officer can talk about the incident at any time they like within the first 24 hours. If you can’t tell, a lot of this is designed so as not to put pressure on the officer and not to make what could be a bad situation worse, and that is fine. We don’t need someone with the training an officer receives becoming a threat to the public. While that can still happen, the hope is that the academy filters out those who shouldn’t join the police force; some do make it through, but on the whole the filter works.

Bad news sells better than good news. Why, I am not sure, but that is the case. Is it right? No, it is not, but the media has found out what sells and what does not. Does the coverage feed itself? Maybe a little. But truthfully, have we become people who would do anything to get fifteen minutes of fame? The answer I have to give is yes, we have, and it saddens me to no end.

Hospital management system – prototype generation and validation

1. INTRODUCTION:

Hospitals are a vital part of our lives, providing the best medical facilities to people suffering from various ailments, which may be caused by changes in climatic conditions, increased workload, emotional trauma, stress, and so on. It is important for hospitals to keep track of their day-to-day activities and the records of their patients, doctors, nurses, ward boys, and other staff personnel who keep the hospital running smoothly and efficiently. However, keeping track of all these activities and records on paper is very cumbersome and error prone. It is also very inefficient and time-consuming, given the constant increase in population and in the number of people visiting the hospital. Recording and maintaining all of these records on paper is unreliable, inefficient, and error prone, and it is not economically or technically feasible. Therefore, taking the manual working system as the basis of our project, we have developed an automated version of it, named the “Hospital Management System”. The primary aim of our project is to make the hospital up to 90% paperless. It also aims to provide low-cost, reliable automation of the existing systems. The system also provides excellent security of data at every level of user-system interaction, as well as robust and reliable storage and backup facilities.

1.1. PURPOSE OF THIS DOCUMENT:

The objective of this project was to take a hospital management system from requirements gathering, through prototype generation and validation. This document describes the process taken and presents the resulting data. It also states the various constraints the system must withstand, and it leads to a clear vision of the software requirements, specifications, and capabilities, which are to be presented to the development team, the testing team, and the end users of the software.

1.2. OBJECTIVES AND IDENTIFICATION:

The project “Hospital Management System” is intended to maintain the day-to-day status of admission/discharge of patients, the list of doctors, report generation, and so on. It is designed to achieve the following objectives:

1. To automate all statistics regarding patient details and hospital details.

2. To schedule patients’ appointments with doctors in a way that is convenient for both.

3. To schedule the services of specialist doctors and emergency care appropriately, so that the facilities provided by the hospital are fully utilized in an effective and efficient manner.

4. If the medical store issues drugs to patients, the system should decrease the stock of the medical store accordingly, and vice versa.

5. To handle the test reports of patients conducted in the pathology lab of the hospital.

6. The stock should be updated automatically whenever a transaction is made (a minimal sketch of this follows the list). Patients’ information should be kept up to date, and their records should be kept in the system for historical purposes.
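
Here is a minimal sketch of the automatic stock update described in objectives 4 and 6. The class and method names are hypothetical; the document does not specify an implementation or a database layer:

    # Minimal sketch of automatic stock updates on each transaction.
    # Class and method names are hypothetical illustrations only.

    class MedicalStore:
        def __init__(self):
            self.stock = {}  # drug name -> units on hand

        def receive(self, drug: str, units: int) -> None:
            """Receiving drugs increases the stock (the 'vice versa' case)."""
            self.stock[drug] = self.stock.get(drug, 0) + units

        def issue(self, drug: str, units: int) -> None:
            """Issuing drugs to a patient automatically decreases the stock."""
            on_hand = self.stock.get(drug, 0)
            if units > on_hand:
                raise ValueError(f"Only {on_hand} units of {drug} in stock")
            self.stock[drug] = on_hand - units

    store = MedicalStore()
    store.receive("paracetamol", 100)
    store.issue("paracetamol", 20)   # stock updates as part of the transaction
    print(store.stock)               # {'paracetamol': 80}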

1.3 ROLES AND RESPONSIBILITIES:

The project team is formed by the following students: ———— and ————. Since all team members have the same experience and expertise designing systems, the work was divided equally and assigned arbitrarily.

1.4 SCOPE:

The proposed software product is the Hospital Management System (HMS). The system will be used in any hospital, clinic, health centre, dispensary, or pathology laboratory to acquire data from patients and then store that data for future use. The current system in use is paper-based. It is too slow and cannot provide updated lists of patients within a reasonable time frame. The system’s objectives are to reduce overtime pay and increase the number of patients who can be treated accurately. The requirements statements in this report are both functional and non-functional.

1.5 METHODOLOGY, TOOLS AND TECHNIQUES:

Raw data (also known as primary data) is a term for data collected from a source. Raw data has not been subjected to processing or any other manipulation. Primary data is a kind of information obtained directly from first-hand sources by means of surveys, observation, or experimentation. It is data that has not been previously published and is derived from a new or original research study, collected at the source, for example in marketing. Primary data are observed and recorded directly from respondents, and the information collected is directly related to the specific research problem identified. All the questions one asks the respondents must be completely unbiased and designed so that all the different respondents understand them.

1.6. SYSTEM DESIGN:

In this software we have developed several modules. A brief description of each is as follows:

Reception:

The reception module handles various enquiries about a patient’s admission and discharge details, bed availability, and the patient’s movements within the hospital. The system can also handle fixed-cost package deals for patients, as well as doctor consultation and scheduling, doctor consultancy charges, and time provision (a minimal scheduling sketch follows the list below).

• Doctor appointment planning and scheduling

• Patient enquiry

• History of enquired patients
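
As a minimal sketch of the appointment scheduling listed above, the reception module must at least refuse double bookings. The class, method, and doctor names here are hypothetical; the document does not describe a concrete design:

    # Minimal sketch of double-booking prevention for doctor appointments.
    # All names and the slot model are hypothetical illustrations only.

    from datetime import datetime

    class AppointmentBook:
        def __init__(self):
            self.appointments = {}  # (doctor, slot datetime) -> patient_id

        def book(self, doctor: str, slot: datetime, patient_id: str) -> bool:
            """Book a slot if free; return False if the doctor is already taken."""
            key = (doctor, slot)
            if key in self.appointments:
                return False
            self.appointments[key] = patient_id
            return True

    book = AppointmentBook()
    slot = datetime(2016, 3, 14, 10, 30)
    print(book.book("Dr. Rao", slot, "PAT-0001"))  # True: slot was free
    print(book.book("Dr. Rao", slot, "PAT-0002"))  # False: double booking refused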

Administration:

This module handles all the master entry details required by the hospital, such as consultation details, doctor specialization, consultancy fees, and service charges.

Employee:

• Employee Detail Recording.

• Doctor Type.

• Doctor Master

• Referral Doctor

Pharmacy:

This module manages all medical items. It helps in maintaining the item master, recording the receipt of drugs and consumables, issuing items, handling material returns, generating retail bills, and maintaining stock. It also serves the requirements of both the IPD and OPD pharmacy.

Laboratory:

This module enables the maintenance of investigation requests by the patient and the generation of test results for the various available services, such as clinical pathology, X-ray, and ultrasound tests. Requests can be made from various points, including wards, billing, sample collection, and the laboratory receiving point. The laboratory module is integrated with the in-patient/outpatient registration, wards, and billing modules.

Registration:

This module helps in registering information about patients and handling queries for both IPD and OPD patients. A unique ID is generated for each patient after registration. This helps in implementing customer relationship management and also maintains each patient’s medical history.
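
Here is a minimal sketch of registration with a generated unique ID, as described above. The ID format and record fields are hypothetical; the document only states that a unique ID is generated for each patient:

    # Minimal sketch of patient registration with a unique generated ID.
    # The ID format and record fields are hypothetical illustrations only.

    import itertools

    class Registry:
        def __init__(self):
            self._counter = itertools.count(1)
            self.patients = {}  # patient_id -> record

        def register(self, name: str, patient_type: str) -> str:
            """Register a patient (IPD or OPD) and return the generated unique ID."""
            patient_id = f"PAT-{next(self._counter):05d}"
            self.patients[patient_id] = {
                "name": name,
                "type": patient_type,  # "IPD" or "OPD"
                "history": [],         # medical history accumulates here
            }
            return patient_id

    registry = Registry()
    pid = registry.register("A. Patient", "OPD")
    print(pid)  # PAT-00001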

1.7 SOFTWARE REQUIREMENT SPECIFICATION:

A software requirements specification (SRS), a requirements specification for a software system, is a complete description of the behaviour of the system to be developed, and may include a set of use cases that describe the interactions users will have with the software. It also contains non-functional requirements, which impose constraints on the design or implementation (such as performance requirements, quality standards, or design constraints).

2. BACKGROUND AND ANALYSIS:

2.1. EXISTING SYSTEM:

Hospitals currently use a manual system for the management and maintenance of critical information. The current system requires numerous paper forms, with data stores spread throughout the hospital management infrastructure. Often, information on forms is incomplete or does not follow management standards. Forms are frequently lost in transit between departments, requiring a comprehensive auditing process to ensure that no critical information is lost. Multiple copies of the same information exist in the hospital and may lead to inconsistencies across the various data stores.

2.2. PROPOSED SYSTEM:

The Hospital Management System (HMS) is intended for any hospital, to replace its existing manual, paper-based system. The new system is to control the following information: patient information, room availability, staff and operating room schedules, and patient invoices. These services are to be provided in an efficient, cost-effective manner, with the goal of reducing the time and resources currently required for such tasks.

Karl Marx and social inequality


Marx was born in Prussia in May 1818, one of nine children born to his mother and father, Henriette and Heinrich. Marx was eventually baptized as a Lutheran, as he believed Protestantism came with greater intellectual freedom (Biography.com Editors, 2015). The work and ideas of Karl Marx have greatly influenced my understanding of the widening gap between rich and poor in our society today. His writing in the Communist Manifesto helped to explain, for me, the economic and social divides we see throughout our societies: “The history of all hitherto existing society is the history of class struggles” (Marx & Engels, 1992).

Bourgeois and Proletarians

Throughout the Communist Manifesto, Marx refers to the divide in society between the wealthy businessman and the worker, the oppressor and the oppressed (Cohen & Kennedy, 2013, pp. 125-127). In Marx’s eyes, one’s class was determined by one’s relation to the means of production and distribution. Those who owned land, property, or capital were considered high in social status, as they had the means to produce and distribute goods. This stands in stark contrast with the ‘proletarians’, who had only their time and labour to sell. These observations and questions raised by Marx greatly helped my understanding of the continuously widening gap between rich and poor in my society. Even today it is evident that business owners and wealthier people hold most of the power in our societies, while the working class, though the majority of the population, hold a minority of the power. An example of this exploitative relationship today is the zero-hours contract in industry: it is not an equal power agreement, as the employer decides how many hours the employee (proletarian) must work, whether full-time or zero hours (Gorrey, 2015). Marx also wrote that to enact social change, the working class must recognise each other’s oppression and work together to resolve their problems. In my opinion, this social change can be seen in today’s society in the likes of trade unions in the public sector. This significantly influenced my understanding of the other side of the widening gap between rich and poor in my society, which is, in my belief, the social change Marx referred to in this writing. We can also ask why the worker must obey the boss; the answer is that the worker has nothing to sell but their own labour and time, while the boss owns a significant amount of capital and therefore holds significant power (Marx & Engels, 1970). This is once again seen in contemporary society: in the workplace, the boss owns the majority of the capital, and the worker must give their time to earn wages.

Contradictions of Capitalism

The contradictions of capitalism that Marx identified also helped my understanding of the widening gap between rich and poor in contemporary society. The inevitability of monopolies in our economy ensures that only a select few hold large amounts of capital, which widens the divide between rich and poor even today. Also, due to the lack of centralised planning, over-production of goods can take place, which can cause high inflation; this would further the gap between rich and poor, as the poorer majority suffer a bigger loss from inflation since they have less capital to start with. The constant introduction of lower wages for workers ensures their pauperisation and decidedly widens the gap between rich and poor. Finally, the biggest factor in widening the gap, even in contemporary society, is the fact that the state is controlled by the bourgeoisie. This affects the poorer classes greatly, as in some circumstances laws can be passed in favour of the wealthier classes, helping them to stay wealthy (Molyneux, 2007). This in turn causes the proletariat to band together in objection; examples of this can be seen in contemporary society in the likes of industrial action and protests.

In conclusion, Marx’s writings and theories on the bourgeoisie and proletariat and the contradictions of capitalism have greatly influenced my understanding of the widening gap between rich and poor in contemporary society. My understanding now is that as long as one group in society (the bourgeoisie) holds all the power and capital, it is impossible to have a fair and equal society without any gap between rich and poor.

Bibliography

Biography.com Editors (2015) Karl Marx Biography. Available at: http://www.biography.com/people/karl-marx-9401219 (Accessed: 9 October 2015).

Marx, K., Engels, F., Moore, S. and McLellan, D. (1992) The Communist Manifesto. Oxford: Oxford University Press.

Cohen, R. and Kennedy, P. (2013) Global Sociology. New York: Washington Square.

Gorrey, T. (2015) Working Time and Minimum Rest Periods in Irish Employment – What You Need to Know. Available at: http://employmentrightsireland.com/tag/zero-hours-contracts/ (Accessed: 9 October 2015).

Marx, K. and Engels, F. (1970) The German Ideology. Reprint edn. Edited by C. Arthur.

Molyneux, J. (2007) The Contradictions of Capitalism. Available at: http://johnmolyneux.blogspot.ie/2007/03/contradictions-of-capitalism_14.html (Accessed: 11 October 2015).

Conjoined twins or twinning

Imagine having a sibling who wakes up with you every morning, and you find your sibling didn’t go anywhere but is right by your side. Imagine having to do the same things together, for instance riding on the same bike or sitting in the same chair. What do you think life would be like? Would you have any freedom? Would you sometimes feel like you want to disconnect from your sibling? Such people are called “conjoined twins”. Conjoined twins share arms, legs, organs, and other body parts; they don’t just share these body parts, they take intimacy to the extreme. In the past, they were hailed as gods and feared as monsters, and people were afraid they might kill or abandon such children. Conjoined twins are formed in the last stage of mitosis, which is called cytokinesis. Many cases have been documented; one example is that of two conjoined twin sisters named Mary and Eliza Chulkhurst. The best-known type of conjoined twins is the thoracopagus twins. In order to separate conjoined twins, a special procedure, called a “surgical separation”, has to be performed.

To get to the point, let’s break down the term “conjoined twins”, or “twinning”, to better understand its context. “Twin” comes from Old English meaning “double” or “two together”; a twin can also be one of two children born at the same time to the same mother. So what are conjoined twins? Conjoined twins, also known as Siamese twins, are “one of two people (identical) born with their bodies joined together” in utero (Latin for “in the womb”). The term “Siamese” is now considered offensive, and “conjoined twins” is preferred (“The Oxford Guide to Practical Lexicography” 425).

Although many conjoined twins are connected from fertilization, the separation begins in the last stage of mitosis. Mitosis is a type of cell division that results in two daughter cells having the same number of chromosomes. When the cell divides, two distinct twins are formed. Conjoined twins are formed in the early stages, after the egg is released in the embryo, when the separation of the egg occurs. How does the separation occur? It begins during the last stage of mitosis, which is called cytokinesis.

Cytokinesis is when the two daughter cells separate to form twins. First and foremost, for conjoined twins to be identical, both need to be the same sex. For example, if a mom gave birth to conjoined twins who were girls, then both would be of the same gender. How can conjoined twins be the same sex? Before the mom gives birth, the egg grows and separates into two fertilized eggs, and the twins share the same amniotic cavity and placenta. An amniotic cavity is a closed sac between the mom’s womb and the embryo that contains the amniotic fluid; basically, it is the fluid-filled space between the amnion and the fetus (“Amniotic Cavity”). This fluid is what women refer to shortly before labor or delivery when they say, “My water broke, and I need to see a doctor.” By contrast, conjoined twins rarely turn out to be fraternal twins; most of the time, they are identical twins at birth.

In the history of conjoined twins, many early cases have been documented. “One of the earliest documented cases of conjoined twins” was Mary and Eliza Chulkhurst, who were born in Biddenden, County of Kent, England, in the year 1100, and were joined at the hip (“Facts About the Twins”). These sisters were wealthy and lived to the age of 34. When Mary and Eliza died, they left a fortune to England’s church. In honor of their generosity, it became a tradition for people to bake biscuits and cakes stamped with their picture and give them to those in need. The most famous conjoined twins are Eng and Chang Bunker. Eng and Chang were born in Siam, today Thailand, and such twins were later called “Siamese” after their country. They achieved fame after leaving Siam as teenagers. They were connected at the chest. Before Eng and Chang settled in the U.S., they performed at circus shows around the globe. Years later, the twins married two sisters and had about two dozen children between them. At that time, many conjoined twins were classified differently by groups or names (Chang and Eng).

In today’s world, you will see different types of conjoined twins; there are nearly a dozen classifications. One of the most common is thoracopagus twins, who are connected at the upper portion of the torso (“Facts About the Twins”). These twins share one heart, which makes it nearly impossible for both to survive a surgical procedure. The second type is omphalopagus twins, who are connected from the breastbone to the waist; these twins share a liver, but rarely a heart. Craniopagus twins are joined at the head and are the rarest twins to survive. In fact, only a small number of conjoined twins are joined that way, which can make surgical separation a stressful task in the operating room.

After birth, conjoined twins need a professional surgeon to perform a procedure called a “surgical separation” in order to be separated. A surgical separation is done when two twins are connected; the procedure is risky and requires accuracy and care. Therefore, the decision to separate twins is a serious one (“Facts About the Twins”). Making that decision firmly matters; just saying yes is not enough. Parents should ask many questions at the time of the operation to better understand how the surgical separation is done. This procedure should be taken more seriously than almost any other operation. Sometimes the operation takes hours or days, and the preparation can take months. Nowadays, surgical separations are rare, and many twins do not survive them; the following statistic shows as much: “Since 1950, at least one twin has survived separation about 75 percent of the time” (“Facts About the Twins”). During the pregnancy, the doctor uses an ultrasound or MRI (magnetic resonance imaging) to check how the twins might be separated and which organs they share that might make the procedure difficult or risky to perform after they are born.

In order to make the separation possible, doctors must assess which organs function and what those functions are doing. Separation can sometimes be life-threatening and may even lead to death for one or both twins. After the operational procedure