Performance Optimization Using the HSLD-A* Searching Technique in a Hybrid Intrusion Prevention System

Abstract.

A carefully designed and configured firewall is a good starting point for securing a computer network against malicious users. However, complex network environments that hold larger numbers of participants and endpoints, and that use multiple undefined dynamic channels, require a better security infrastructure. Intrusion Detection Systems (IDS) are proposed as a solution for dealing with multiple threats and direct and indirect attacks, along with run-time problems such as buffer overflow, string vulnerabilities and starvation, in less time. The major problem here is speed, and our work focuses on quick search and response using heuristic A* search. The A* searching technique checks heuristic admissibility through an evaluation function in a very short interval. Its role is to find the target value in the discrete tree structure created by the security scanner and place that value into the threshold table for final comparison and remedial action. Because of its multilevel structure, the hybrid prevention system shows large complexity in searching and segmentation, and hence in performance. This paper shows a possible solution by way of an example: finding the distance and optimal path between two cities from the available paths.

Keywords: Intrusion System, IDS, Firewall, k-means mining technique, A* Search, Heuristic search.

1. Introduction

An ID system is used to find the n categories of malicious input arriving through different input channels, using different security mechanisms, and to provide safeguards over the network. The major work of any IDS is to analyze the traffic entering the network and to differentiate between original packets and malicious data [6][9][10]. The system classifies attack-identification methods in the following ways: flow based, abnormal-time based, and behavioural-pattern based. An ID system collects and analyzes the required information from various components deployed for different purposes over the network, in order to identify possible threats that leave the network system insecure [11][13][14]. Several IDS exist for different logical network configurations and are reliable in detecting various suspicious actions from different sources. Whenever an IDS finds a suspicious packet, it instantly creates an alert. Genetic Algorithms (GA) provide one technical method for intrusion detection with reliable response: the network connection information is encoded and transformed into rules in the IDS, and the result of applying the GA is presented. Baker [1][2] discussed synthesizing intrusion detection using custom computing machines, along with pattern matching and time-based intrusion, but how this is solved optimally was left open. Analyses of intrusion detection on Network Processor (NP)-based network devices, which are increasing gradually, are also available: in [30][31], the Ubicom network processor is discussed and an embedded Network Intrusion Detection System (NIDS) is presented. In [22][23], Li Yong and Gao Guo showed how a new intrusion detection method can be linked with an improved DBSCAN methodology and novel rule-based data mining. Day by day, IDS receives more serious attention in network security, reviewing known drawbacks and forecasting upcoming problems, whether the source is a signature-based anomaly, a host-based intrusion or a network-based one; whatever the cause, the real job is to do fair work in intrusion detection and prevention. The functioning of a triggered alarm in the physical world maps to the performance of an IDS in the digital era. The IDS must always be kept updated because of the ongoing contest between attackers and security fencing: attackers constantly invent new ways to attack, and in the same way security personnel add enhanced dimensions to secure networks against malicious inputs. The huge use of the Internet and online trading has made organizations more susceptible to virtual threats than ever before, leading to violations of data integrity, loss of customer confidence, degraded job productivity, and ultimately financial crises for the company. According to the 2004 CSI/FBI Computer Crime and Security survey, the organizations that acknowledged financial loss due to attacks (269 of them) reported $141 million lost, and this number has only grown since [16][17]. The parallel hybrid intrusion prevention system showed optimal results using a multilevel hybrid approach implementing the k-means threshold approach, which was presented at the international conference FGIT 2015, Korea. Its components are defined as follows: (1) anomaly IDS, (2) signature-based IDS, (3) network-based (NIDS) and (4) host-based (HIDS) intrusion detection systems.
Apart from these IDS, we use a few more software devices that help secure the system more optimally: (5) a flow-based detector and (6) a time-slot-based detector [19][20][21]. Several such systems have been designed and constructed, but they all suffer from the same issue: traversal, and the creation of clusters for diverse data. From the collected facts we can say that such a system may suffer from time complexity, which our proposed system solves with the help of the heuristic straight-line distance (HSLD) and A* search.

2. Related Work

Research on tools and empirical methodologies for network processing activities mainly focuses on modularity, reusability and ease of logical programming. The work in this section shows how intrusion detection can be performed on a network processor. Network intrusion detection avoidance is described in several papers; in [30, 31] the authors propose the concept of avoidance and conclude that avoidance succeeds when the NIDS implementation differs from the endpoint implementation. Most existing IDS are optimized to detect attacks with high accuracy. However, they still have various disadvantages that have been outlined in a number of publications, and typical work has been done to analyze IDS in order to direct future research [27, 29, 32]. Among other drawbacks, one is the large number of alerts produced, some of which are redundant and unnecessary. An intrusion prevention system is a technique used to protect the system from illegal activity arriving from different data sources; many of the algorithms designed for it are either very complex to implement in hardware or slow in performance [12, 15, 16]. A lot of work has already been published on different types of attack over the network, particularly on clustering and data mining techniques. Baker [2, 16] discussed intrusion tolerance in distributed computing, and B. Ling described the application of intrusion detection based on the k-means clustering technique and how k-means can partition a large database into new finite data by measuring centroid values to stop unknown new attacks. Yu Guan [18] worked on a different mean technique, introducing the Y-means clustering algorithm for finding and analyzing intrusion activity. Much work has also been done on IP flow-based and packet-based intrusion detection system performance in complex, high-speed networks [14][15]. Chitrakar and Huang [20] proposed a hybrid learning approach integrating the k-medoids technique with Bayes classification for data partitioning and data distribution in cluster formation and processing; Huang also focused on SVM classification for anomaly detection and represented real-world scenarios of data distribution. Apart from k-means and SVM techniques, Gao Guo-Hong [21] proposed an enhanced intrusion detection model based on DBSCAN, which describes density-based cluster formation constrained by core points, border points and noise points.

Flow Based Detector:

The flow-based technique is used to control and monitor network traffic and to provide reliable, secure communication over the network.

Time Slot Based Detector: This addresses race conditions, which result in resource conflicts; such a situation arises when one process tries to beat another to certain events. The detector performs static detection and identifies potential deadlock in the system.

Behavior Based Detector: The behavior detector works on four principles for different purposes; based on the nature of the inputs, it decides what type of checking is required and conducts the test for four cases: (1) anomaly detection, (2) signature-based detection, (3) host-based detection, and (4) network-based detection.

3. Review of Related Factors

1. Decision Tree Learning: The basic idea behind DTL is to test the most important attribute first, where "most important" means the attribute that makes the most difference to the classification. This classification must be correct and generated from a small number of test samples, so that the tree can be represented as a very small data structure.

2. Searching Methodology: Searching can be defined as the test or operation that finds the location of a desired value in the memory tree. A search may be successful or unsuccessful according to its test sets. Searching is a difficult job because of the complex data structure of the memory where the data is stored, and it can sometimes lead to very poor worst-case time complexity. So before adopting any technique we must analyze its optimality for the given problem. The basic operations of any searching method are: searching for a node, expanding the current state, and generating new states if the value is not found among the available states.

3. Searching Strategy: Expansion and generation of new states are defined by the searching strategy with the assistance of the following components:

a) State: A state corresponds to a configuration of the given data structure or network system.

b) Parent/root node: Generally this can be defined as the initial node, which generates new nodes according to requirements. Sometimes tree-pruning techniques are used to reduce the complexity of the tree.

c) Action: This defines the particular instruction applied to the parent node to generate the next level of knowledge by creating a new node.

d) Path Cost: The total cost incurred to reach the destination node from the parent node, represented as g(n).

e) Depth: The total number of steps traversed from the initial point to the desired value.

1. Best-First Search (BFS): This is used for searching discrete graph structures and for tree traversal. The principle behind best-first search is to expand the node with the least evaluation-function value, i.e., the node that the evaluation function estimates to be closest to the goal.

Adjacency node lists:

A: B, G, H

B: C

C: H

D: C, E

E: DESTINY

F: I

G: F, H

H: B

I: C, H

From the diagram, the optimal path among all available paths from A to E is clearly:

A– G  F ID- E

Total cost is = 10 +7+5+7+20

= 49

Best-first search is good in practice, but in some cases it gives inaccurate results. When the evaluation function is exact, the node it returns is indeed the best desired node; when that exactness disappears, so does the accuracy of the search.

2. Greedy Best-First Search (G-BFS): This expands the node that appears closest to the desired goal node, on the grounds that this is likely to lead to a solution quickly.

Its evaluation function is: f(n) = h(n)

3. G-BFS is used to find malicious inputs (intruders) within a short session, minimizing time complexity when interacting with huge numbers of input lines. It is similar to DFS in that it prefers to follow a single path all the way to the goal node, backing up only when it hits a dead end. G-BFS has the same problem as DFS: it is not optimal, because it may start down an indefinite path and never return to other possibilities.

4. What distinguishes BFS and G-BFS is the evaluation function, whose problem-specific component is the heuristic function, symbolized as h(n).

5. Heuristic Search: Heuristic functions are the most common form in which additional knowledge of the problem is embedded to make searching more effective. This additional information works like a flag for spotting malicious input at an early stage. h(n) can be defined as the estimated cost of the cheapest path from node n to the goal node, and it can be represented in the data set by measuring the deviation of the distance threshold from the simple k-means threshold value. Here we measure the max-min threshold values to determine the negative and positive deviation impact.
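For reference, the admissibility condition that the A* stage checks can be written explicitly. These are the standard definitions from Russell and Norvig [32], not additional claims of this paper: h(n) is admissible when it never overestimates the true remaining cost h*(n), so the evaluation function f never overestimates the cost of a solution through n:

\[
h(n) \le h^{*}(n) \quad\Longrightarrow\quad f(n) = g(n) + h(n) \le g(n) + h^{*}(n) = \text{true cost of the best solution through } n.
\]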

4. Proposed System

The potential of the proposed system lies in speed: it uses an informed search strategy to reduce time complexity. An informed strategy can be defined as one that "uses problem-specific knowledge and can give an optimal solution compared to uninformed search."

When heterogeneous input passes through the security scanner of the hybrid intrusion system, the analyzer becomes active and starts categorizing the different inputs to the appropriate detector: FBD, TSBD or BBD.

This is a very challenging job, because filtering and searching within a continuous input stream is difficult; our approach therefore works to make the security scanner respond quickly and efficiently. The working methodology is to collect the entire data efficiently using A* search together with the heuristic.

Together, these techniques rapidly collect and search the data and create a tree structure with labelled data indexing. Once the data is stored in the tree, it is passed through the heuristic check; only if the heuristic is admissible does processing proceed to A* search.

This search finds the optimal value in less time and sends it to the table, where the k-means threshold value is compared with the standard-deviation (min-max) value, followed by the triggering of events.

Working Module:

Input Streams: Undefined combinations of input streams arriving from several sources, which may create threats to the personal system, database or network.

Security Scanner: A device that is an integrated combination of hardware and software. It is deployed in front of the firewall on the network to categorize inputs and assign them to the various intrusion detection techniques that identify the nature of each input.

Creating Data Structure Indexing: This stores the data temporarily in memory in a well-defined discrete data structure and assigns a labelling flag value as a regional index for easy access.

Heuristic admissibility and A* search: A* search is basically used to minimize the total estimated solution cost. It can be seen as an advanced form of best-first search, and A* is optimal if and only if the heuristic straight-line distance (HSLD) is admissible. HSLD cannot produce a value directly; it must run with correlated prior knowledge before predicting the desired inputs.

It evaluates nodes by combining g(n) and h(n).

g(n): Cost to reach the node

h(n): Cost to get from the node to the goal

so that f(n) = g(n) + h(n) is the estimated cost of the cheapest solution through n.

So, if a user wants the cheapest solution, A* search is a complete and optimal technique.
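To make the mechanics concrete, the following MATLAB sketch runs A* on a small city graph in the spirit of Figure 2 and the path example of Section 3. The edge costs and the straight-line-distance heuristic values are illustrative assumptions chosen to reproduce the A-G-F-I-D-E example, not data from the system; only the A* rule itself (always expand the open node with the smallest f(n) = g(n) + h(n)) is taken from the text.

% Minimal A* sketch on a small city graph; all numbers are assumed,
% illustrative values. The goal is node E.
names = {'A','B','C','D','E','F','G','H','I'};
W = inf(9);                                  % W(i,j) = cost of edge i -> j
W(1,7) = 10; W(1,2) = 25;                    % A->G, A->B
W(7,6) = 7;  W(7,8) = 9;                     % G->F, G->H
W(6,9) = 5;  W(9,4) = 7;  W(4,5) = 20;       % F->I, I->D, D->E
h = [40 35 30 18 0 28 34 32 24];             % assumed admissible SLD to E
goal = 5;                                    % index of E
g = inf(1,9); g(1) = 0;                      % cost so far from A
parent = zeros(1,9); open = 1;               % fringe of open node indices
while ~isempty(open)
    [~, k] = min(g(open) + h(open));         % smallest f = g + h wins
    n = open(k); open(k) = [];
    if n == goal, break; end
    for m = find(isfinite(W(n,:)))           % relax every outgoing edge
        if g(n) + W(n,m) < g(m)
            g(m) = g(n) + W(n,m); parent(m) = n;
            if ~ismember(m, open), open(end+1) = m; end %#ok<AGROW>
        end
    end
end
p = goal; route = names(p);                  % walk the parent chain back
while parent(p) ~= 0
    p = parent(p); route = [names(p) route]; %#ok<AGROW>
end
fprintf('%s ', route{:}); fprintf(' (cost %g)\n', g(goal));

With these assumed numbers the sketch prints A G F I D E (cost 49), matching the optimal path and total cost quoted in Section 3.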

Figure 2.

The figure shows that, among all the available paths, a direct search is not possible. We need correlated values through which we can predict the next optimal step; this is done by the heuristic, and if the heuristic's straight-line-distance check is feasible, the heuristic is assumed to be admissible. Hence A* is optimal if the heuristic is admissible.

Table 1.

| S.no | Type of detector | Category of input | k-means threshold value | Standard deviation (min-max) value | Privilege to enter n/w (Y/N) |
|------|------------------|-------------------|-------------------------|------------------------------------|------------------------------|
| 1 | FBD | Buffer overflow | 1.6279 | ≤ 2.000 | Y |
| | | Unexpected flow | 0.9876 | | Y |
| | | Abnormal input | 0.10176 | | Y |
| | | Total = mean ¥ value | 2.717/3 = 0.905 | | Y |
| 2 | TSBD | Race condition | 2.0179 | ≤ 2.000 | N |
| | | Total = mean ¥ value | 2.0179 | | N |
| 3 | BBD | Anomaly | 0.0198 | ≤ 2.000 | Y |
| | | Signature | 1.731 | | Y |
| | | Host | 2.091 | | Y |
| | | N/w based | 1.311 | | Y |
| | | Total = mean ¥ value | 5.1528/4 = 1.288 | | Y |

Table 1 shows the hybrid structure of the intrusion prevention system.

Our work targets the time complexity incurred while searching data in the data tree. Here we use the heuristic and A* search to find the desired value.

The optimality of A* is straightforward to analyze when it is implemented with tree search: A* is optimal if h(n) is an admissible heuristic, i.e., if h(n) never overestimates the cost to reach the goal value. Admissible heuristics are by nature optimistic, because they assume the cost of solving the problem is less than it actually is.

Since g(n) is the exact cost to reach n, it follows immediately that f(n) never overestimates the true cost of a solution through n.

Proof:

We show that tree search using A* is optimal if h(n) is admissible.

Step 1: Suppose a suboptimal destination (goal) node ND appears on the fringe, and let the cost of the optimal solution be cst*.

Step 2: Because ND is a goal node, h(ND) = 0, and because it is suboptimal, its path cost exceeds cst*.

Step 3: Therefore f(ND) = g(ND) + h(ND) = g(ND) > cst*.

Step 4: If h(n) does not overestimate the cost of completing the solution path, then for any node n on an optimal path, f(n) = g(n) + h(n) ≤ cst*.

Step 5: We have now shown that f(n) ≤ cst* < f(ND), so ND will not be expanded and A* must return an optimal value.

Further,

This value is kept in specified test sets, which pass through the k-means simple-threshold algorithm. The technique works by k-partitioning the given data sets. First the centroids are selected, if given; otherwise the user declares the initial centroids. Each point is then assigned to its closest centroid, and the collection of points assigned to a centroid forms a cluster. The normal value is given by the metric table, which compares the measured k-means threshold value with the predefined standard-deviation value and, based on the result, decides which action or privilege is assigned to each individual input stream.
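A minimal sketch of this k-partitioning and threshold comparison is given below. The scores are the k-means threshold values from Table 1; the choice k = 2, the initial centroids, the fixed iteration count, and the "allow if cluster mean ≤ 2.000" rule are assumptions made for illustration (the implicit-expansion syntax requires MATLAB R2016b or later).

% k-partition the detector scores from Table 1 (k = 2; assumed setup).
scores = [1.6279 0.9876 0.10176 2.0179 0.0198 1.731 2.091 1.311];
k = 2; c = [min(scores) max(scores)];          % user-declared initial centroids
for it = 1:20                                  % plain Lloyd iterations
    [~, idx] = min(abs(scores.' - c), [], 2);  % assign each point to its
                                               % nearest centroid
    for j = 1:k
        c(j) = mean(scores(idx == j));         % recompute cluster means
    end
end
sd_max = 2.000;                                % standard-deviation (max) bound
labels = 'NY';
for j = 1:k                                    % privilege decision per cluster
    fprintf('cluster %d: mean %.4f -> %c\n', ...
            j, c(j), labels(1 + (c(j) <= sd_max)));
end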

5. Conclusions

From the surveys and discussions of different intrusion detection techniques and their associated searching techniques, with their differing time and space complexities, the data says that a normal searching technique cannot be applied to diverse dynamic inputs for searching and categorizing the desired data. If we succeed in constructing a hybrid system with a dynamic searching technique for intrusion detection with reduced time complexity, it will be a proven module in the field of data security and intrusion detection. Finally, if we succeed in constructing dynamic searching for a hybrid system that can work in any environment, then both time and cost will be minimized with optimality.

6. Future work

Performance optimization using the HSLD-A* searching technique in a hybrid intrusion prevention system can be used in future network security for optimal searching and response, along with the analysis of multiple threats in virtual environments.

7. References

1) Z. K. Baker and V. K. Prasanna. A methodology for synthesis of efficient intrusion detection systems on FPGAs. In Proceedings of the Field-Programmable Custom Computing Machines, 12th Annual IEEE Symposium on (FCCM’04), pages 135–144. IEEE Computer Society, 2004.

2) Z. K. Baker and V. K. Prasanna. Time and area efficient pattern matching on FPGAs. In Proceeding of the 2004 ACM/SIGDA 12th International Symposium on Field Programmable Gate Arrays, pages 223– 232. ACM Press, 2004.

3) A. Baratloo, N. Singh, and T. Tsai. Transparent run-time defense against stack smashing attacks. In Proceedings of the USENIX Security Symposium, June 2000.

4) C. R. Clark and D. E. Schimmel. Efficient reconfigurable logic circuits for matching complex network intrusion detection patterns. In 13th International Conference on Field Programmable Logic and Applications, Sept. 2003.

5) S. A. Crosby and D. S. Wallach. Denial of service via algorithmic complexity attacks. In Proceedings of USENIX Annual Technical Conference, June 2003.

6) M. Gokhale, D. Dubois, A. Dubois, M. Boorman, S. Poole, and V. Hogsett. Granidt: Towards gigabit rate network intrusion detection technology. In Proceedings of the Reconfigurable Computing Is Going Mainstream, 12th International Conference on Field-Programmable Logic and Applications, pages 404–413. Springer-Verlag, 2002.

7) B. L. Hutchings, R. Franklin, and D. Carver. Assisting network intrusion detection with reconfigurable hardware. In Proceedings of the 10 th Annual IEEE Symposium on Field-Programmable Custom Computing Machines (FCCM’02), page 111. IEEE Computer Society, 2002.

8) K. Mai, T. Paaske, N. Jayasena, R. Ho, W. Dally, and M. Horowitz. Smart memories: A modular reconfigurable architecture. In Annual International Symposium on Computer Architecture, June 2000.

9) Xinidis, K., Anagnostakis, K.G., and Markatos, E.P., “Design and implementation of a high performance network intrusion prevention system“, Proceedings of the 20th International Information Security Conference (SEC 2005), Makuhari-Messe, Chiba, Japan, May 30 – June 1, 2005.

10) Sproull, T., and Lockwood, J., “Wide-area hardware-accelerated intrusion prevention systems (WHIPS)“, Proceedings of the International Working Conference on Active Networking (IWAN), Lawrence, Kansas, USA, October 27 – 29, 2004.

11) Song, H., and Lockwood, J.W., “Efficient packet classification for network intrusion detection using FPGA“, Proceedings of the International Symposium on Field-Programmable Gate Arrays (FPGA’05), Monterey, California, Feb 20-22, 2005.

12) S. Axelsson, “Intrusion Detection Systems: A Taxonomy and Survey,” Tech. report no. 99-15, Dept. of Comp. Eng., Chalmers Univ. of Technology, Sweden, Mar. 20, 2003.

13) Debar, H., Wespi, A.: Aggregation and correlation of intrusion detection alerts. In: 4th Workshop on Recent Advances in Intrusion Detection. Volume 2212 of Lecture Notes in Computer Science. Springer-Verlag (2001), Zurich Research Laboratory, 2001. pp 85-103 (2001)

14) Deswarte, Y., Blain, L., Fabre, J.C.: Intrusion tolerance in distributed computing systems. In: IEEE Symposium on Research in Security and Privacy, Oakland. 20-22 May-1991, pp.110–121 (1991)

15) Dutertre, B., Crettaz, V., Stavridou, V.: Intrusion-tolerant Enclaves. In: IEEE International Symposium on Security and Privacy. Oakland-CA, May, 2002.pp.216-224 (2002)

16) M. Jianliang, S. Haikun and B. Ling.: The Application on Intrusion Detection based on K- Means Cluster Algorithm. In: International Forum on Information Technology and Application, Chengdu, 15-17 may 2009.pp.150-152 (2009)

17) Chapple, M.J., Wright, T.E., Winding, R.M.: Flow Anomaly Detection in Firewalled Networks. In: Secure comm and Workshop. 006 Baltimore, MD Aug. 28 2006-Sept. 1 2006, pp.1– 6 (2006)

18) Yu Guan, Ali A. Ghorbani and Nabil Belacel.: Y-means: a clustering method for Intrusion Detection. In: Canadian Conference on Electrical and Computer Engineering, Montral, Qubec, Canada, 4-7May 2003.pp.1083-1086 (2003)

19) Zhou, Mingqiang., HuangHui, WangQian.: A Graph-based Clustering Algorithm for Anomaly Intrusion Detection. In: 7th International Conference on computer science and education (ICCSE), ,Melbourne, pp.1311-1314 (2012).

20) Chitrakar, R., and Huang Chuanhe.: Anomaly detection using Support Vector Machine Classification with K- Medoids clustering. In: 3rd Asian Himalayas International conference, Kathmandu, Nepal.23-25 November 2012.pp.1-5 (2012)

21) Li Xue-Yong, Gao Guo.: A New Intrusion Detection Method Based on Improved DBSCAN. In: WASE International conference on Information Engineering, Beidaihe, Hebai, 14-15 August 2010.pp117-120 (2010)

22) Lei Li, De-Zhang, Fang-Cheng Shen.: A novel rule-based Intrusion Detection System using data mining. In: IEEE International conference on Computer science and Information Technology, Chengdu, 9-11 July 2010.pp169-172 (2010)

23) Zhengjie, Li., Yongzhong Li., Lei Xu.: Anomaly intrusion detection method based on K-means clustering algorithm with particle swarm optimization. In: ICM, Nanjing, Jiangsu, 24-25 September 2011.pp.157-161 (2011)

24) Kapil Wankhade., Sadia Patka., Ravindra Thool.: An Overview of Intrusion Detection Based on Data Mining Techniques. In: IEEE International Conference on Communication Systems and Network Technologies, Gwalior, 6-8 April 2013, pp.626-629 (2013)

25) Schaffrath, G., Sadre, R., Morariu, C.: An Overview of IP Flow-Based Intrusion Detection. In: IEEE Communications Surveys & Tutorials, 26th April 2010. pp. 343 – 356 (2010)

26) Jadidi, Z., Muthukkumarasamy, V. ; Sithirasenan, E. ; Sheikhan, M.: Flow-Based Anomaly Detection Using Neural Network Optimized with GSA Algorithm. In: IEEE 33rd International Conference on Distributed computing System, Philadelphia, 2013 .pp. 76 – 81 (2013)

27) Ravi Ranjan., G. Sahoo.: A new clustering approach for anomaly intrusion detection. In: International Journal of Data Mining & Knowledge Management Process (IJDKP) Vol.4, No.2, Mesra, Ranchi, March (2014)

28) Malek, S.F, Khorsandi, S.: A cooperative intrusion detection algorithm based on trusted voting for mobile ad hoc network. In. 2013 21st Iranian Conference on Electrical Engineering (ICEE), Mashhad, 14-16 May 2013.pp.1-8 (2013)

29) Applying an Efficient Searching Algorithm for Intrusion Detection on Ubicom Network Processor

30) Qutaiba Ibrahim and Sahar Lazim Computer Engineering Department, University of Mosul, Iraq,2011

31) “IP3000/IP2000 Family Software Development Kit Reference Manual”, UBICOM, Inc., 28 June 2005, Web site: http://www.ubicom.com.

32) Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig, Second edition.


Efficient method of detecting edges in an image with a lot of objects

PART A

ABSTRACT

This report describes an efficient method of detecting edges in an image with many objects. It outlines the effect of noise of different variances on the image and develops a noise-removal methodology to restore the image to its original form. It examines the result of edge detection on the original image and on the image from which noise has been removed. All the algorithms and results in this report were developed and tested using MATLAB.

INTRODUCTION

Edge detection is integral to many facets of image processing. The task assigned is based on analysing the edges detected in an image of oranges before and after noise is added. The detected edges are observed at different threshold values and an optimal threshold is determined. The task also involves adding Gaussian noise with variances of 0.026 and 0.1 to the image and then devising a noise-removal technique to restore the image to its original form. Further investigation is carried out to determine whether the detected edge of the cleaned signal can be improved.

Figure 1. Block diagram showing an overview of the processes in this report.

EXPERIMENTS AND METHODOLOGY

All experiments in this report were conducted in MATLAB. The various tasks assigned are highlighted below, along with the methodology used to implement them.

Loading the image into MATLAB and transforming it to grayscale, to simplify manipulation and speed up processing.

To load the image into MATLAB, it is read from a specific directory. The image worked on is Oranges. The image is converted to grayscale to reduce computation time in MATLAB: the conversion transforms it from a three-dimensional (RGB) array to a two-dimensional array. Figure 2 depicts the transformation of the image from RGB to grayscale.

Figure 2. Original Image assigned and its Grayscale Transformation.

Applying an edge detection algorithm to the image, observing the resulting edges at different threshold values, and determining the optimal threshold for edge detection.

In applying edge detection algorithms (Figure 12), various methods such as Sobel, Prewitt and Roberts, as well as zero-crossing, were tried, but the best edges were observed from the Canny edge detection algorithm.

To justify the algorithm used: from observation on several images as well as from research, the Canny edge detection algorithm works well where there are many objects in the image; it was tried on several images before the decision to use it was made.

Canny edge detection algorithm can identify weak edges and the image worked on has the tendency of providing weak edges due to the cluster of objects in the image.

The Canny edge detection algorithm looks for local maxima of the image gradient, which is computed using the derivative of a Gaussian filter. The algorithm uses two thresholds to identify strong and weak edges; weak edges appear in the output only if they are connected to strong edges. The basis of the Canny operator is as follows:

Gaussian: $g(x,y) = \exp\!\left[-\dfrac{x^2 + y^2}{2\sigma^2}\right]$ (1)

Edge normals: $\mathbf{n}_T = \dfrac{\nabla(g * P)}{\lvert\nabla(g * P)\rvert}$ (2)

Edge strengths: $G_n P = \dfrac{\partial}{\partial \mathbf{n}_T}\,[g * P]$ (3)

Maximal strengths: $0 = \dfrac{\partial}{\partial \mathbf{n}_T}\,G_n P = \dfrac{\partial^2}{\partial \mathbf{n}_T^{\,2}}\,[g * P]$ (4)

To determine the optimal threshold, the histogram was analysed, since it offers the most insight; gradient analysis was conducted on the image histogram. A histogram describes the appearance of an image as the spread of intensity values across its pixels. As Figure 11 shows, because the image is grayscale, the intensity values on the histogram range from 0 to 255.

From observation of the pattern of edges detected at several threshold values, the optimal threshold was resolved to be 0.1; a sketch of one way to make that observation systematic follows Figure 3.

Figure 3. Canny Edge Detection with Optimal Threshold of 0.1.
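The choice of 0.1 above was made by eye; the sketch below shows one way to make that inspection systematic, by sweeping candidate Canny thresholds and recording the fraction of pixels marked as edges, then reading a threshold off the knee of the curve. The sweep range and the knee-of-the-curve reading are assumptions for illustration, not the report's stated procedure.

% Sweep candidate Canny thresholds on the grayscale image and record
% the edge-pixel fraction; the knee of the curve suggests a threshold.
levels = 0.02:0.02:0.4;
edgeFrac = zeros(size(levels));
for i = 1:numel(levels)
    E = edge(GrayscaleOranges, 'canny', levels(i));
    edgeFrac(i) = nnz(E) / numel(E);         % fraction of edge pixels
end
figure, plot(levels, edgeFrac, '-o');
xlabel('Canny threshold'), ylabel('Edge-pixel fraction');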

Adding noise of the given variances to the image, and analysing the detected edges using the optimal threshold level.

The noise assigned is Gaussian noise with variances 0.026 and 0.1. Gaussian noise is added by summing, onto each image pixel, a value drawn from a zero-mean Gaussian distribution.

Gaussian noise is a common noise that is distributed evenly over the signal: each pixel in the noisy image is the sum of the actual pixel value and a random Gaussian-distributed noise value. Gaussian noise has a Gaussian distribution, whose bell-shaped probability density function (Figure 4) is given by

$F(g) = \dfrac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(g-m)^2}{2\sigma^2}}$ (5)

g = gray level,

m = mean or average of the function

σ= standard deviation of the noise.

σ^2 = variance

Graphically, it is represented as shown below.

Figure 4. Bell-shaped Probability Distribution of Gaussian Noise.

The Peak Signal-to-Noise Ratio (PSNR) is an important factor when describing the effect noise has on an image. From the values recorded in the Results section, the PSNR of an image decreases as the variance of the noise increases.

The PSNR and Mean Square Error (MSE) of the noised image are calculated and serve as a quantitative basis for comparison. The PSNR is commonly used as a measure of reconstruction quality in image noise removal.

$\mathrm{PSNR} = 10\log_{10}\!\left(\dfrac{\mathrm{MAX}_I^2}{\mathrm{MSE}}\right)$ (6)

$= 20\log_{10}\!\left(\dfrac{\mathrm{MAX}_I}{\sqrt{\mathrm{MSE}}}\right)$ (7)

$= 20\log_{10}(\mathrm{MAX}_I) - 10\log_{10}(\mathrm{MSE})$ (8)

where $\mathrm{MAX}_I$ is the maximum pixel value of the image, and

$\mathrm{MSE} = \dfrac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\left[I(i,j) - K(i,j)\right]^2$ (9)

MSE = mean square error

I = input image

K = reconstructed image

m = number of pixels in the vertical dimension of images I and K

n = number of pixels in the horizontal dimension of images I and K
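The Appendix computes PSNR with MATLAB's psnr function; as a cross-check, the sketch below recomputes it directly from Eqs. (6) and (9), assuming 8-bit images so that MAX_I = 255 and reusing the variable names defined in the Appendix code.

% PSNR from first principles, per Eqs. (6) and (9); 8-bit images assumed.
I = double(GrayscaleOranges);                % reference image
K = double(NoiseImage1);                     % distorted image
mse = mean((I(:) - K(:)).^2);                % Eq. (9)
psnrManual = 10 * log10(255^2 / mse);        % Eq. (6), with MAX_I = 255
fprintf('MSE = %.2f, PSNR = %.4f dB\n', mse, psnrManual);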

From Figure 6, the edges detected were noisy and not as well defined as the edges detected on the image without noise. As for the variances, the higher variance of 0.1 produced more noise than the variance of 0.026.

Figure 5. Noised Images with Variances of 0.026 and 0.1.

Figure 6. Canny Edge Detection of the Noised Image with optimal Threshold of 0.1.

Formulating a noise-removal methodology to improve the image and restore it to its initial form, and applying the formulated technique to the image.

The zero-mean property of the Gaussian distribution allows Gaussian noise to be removed by balancing the pixel values. From work on the image, the usual linear filters, such as the arithmetic mean filter and the Gaussian filter, clean the noise but leave the edges blurred. From research, to preserve the edges and the image information in the presence of Gaussian noise, the Wiener filter gives a better result. The Wiener filter is understood most easily in the frequency domain; its computation assumes that the signal and noise processes are second-order stationary. Given the noisy image x(m,n), the discrete Fourier transform is applied to obtain X(u,v). The original image spectrum S(u,v) is then estimated as the product of X(u,v) with the Wiener filter G(u,v).

Image spectrum: $S(u,v) = G(u,v)\,X(u,v)$ (10)

The Wiener filter: $G(u,v) = \dfrac{H^{*}(u,v)\,P_1(u,v)}{\lvert H(u,v)\rvert^2\,P_1(u,v) + P_2(u,v)}$ (11)

where

$H(u,v)$ = Fourier transform of the point-spread function (PSF)

$P_1(u,v)$ = power spectrum of the signal process, from the Fourier transform of the signal autocorrelation

$P_2(u,v)$ = power spectrum of the noise process, from the Fourier transform of the noise autocorrelation

Dividing through by $P_1(u,v)$:

$G(u,v) = \dfrac{H^{*}(u,v)}{\lvert H(u,v)\rvert^2 + P_2(u,v)/P_1(u,v)}$ (12)

$\dfrac{P_2(u,v)}{P_1(u,v)} = \dfrac{1}{\mathrm{SNR}}$ (13)
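The Appendix applies the adaptive spatial-domain filter wiener2; the derivation above is the frequency-domain filter of Eq. (12), which the Image Processing Toolbox exposes as deconvwnr. A hedged sketch for pure denoising follows: with no blur the PSF is an impulse, so H(u,v) = 1, and the noise-to-signal power ratio supplied here is only a rough estimate.

% Frequency-domain Wiener restoration per Eq. (12), assuming no blur,
% so that G reduces to 1 / (1 + P2/P1), with P2/P1 taken as 1/SNR.
In = im2double(NoiseImage1);                 % noisy image in [0, 1]
psf = 1;                                     % impulse PSF: H(u,v) = 1
nsr = 0.026 / var(im2double(GrayscaleOranges(:)));  % assumed P2/P1
Restored = deconvwnr(In, psf, nsr);
figure, imshow(Restored), title('Wiener (Eq. 12) restoration');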

Figure 7. Denoised Images with Variances of 0.026 and 0.1.

Applying the edge detection algorithm with the determined optimal threshold level to the denoised image, and comparing the results with those of the noised image.

From Figure 8, the edges detected after noise removal are not as distinct as those detected in Figure 3. Nevertheless, they are better defined than the edges detected in Figure 6, because the Wiener filter has acted on the noisy image and produced better-structured edges. The edges detected from the image with variance 0.026 are better defined than those from variance 0.1, because even after denoising, at the higher variance the pixels of the image are not as well defined.

Figure 8. Canny Edge Detection of the Denoised Image with optimal Threshold of 0.1.

Investigation to determine if the detected edge of the cleaned signal can be improved by varying the threshold level.

By varying the threshold level, it is observed that edge detection improved as the threshold was reduced, but the edges did not exactly conform to the edges of the original image.


Figure 9. Edge Detection of Image of Variance 0.026 with threshold of 0.2 and Threshold 0.07.


Figure 10. Edge Detection of Image of Variance 0.1 with threshold of 0.2 and Threshold 0.07.

From Figure 9, the higher threshold of 0.2 produced cleaner edges than the threshold of 0.07. The same outcome is observed in Figure 10 for the denoised image whose original noise variance was 0.1.

RESULTS

Figure 11. Plot of Grayscale Image.

Figure 12. Result of Prewitt, Sobel, Robert Edge Detection on the Grayscale Image.

From Figure 12, the detected edges are not distinct because the weaker edges are not properly detected.

Figure 13. Flow Diagram of Typical Threshold Decision.

Figure 14. Plot of Noised Images

| Noise variance | PSNR (dB) |
|----------------|-----------|
| 0.026 | 16.9754 |
| 0.1 | 11.9643 |

Table 1. Table of PSNR for the different noise variances of the Noised Images.

Figure 15. Plot of Denoised Image

| Noise variance | PSNR (dB) |
|----------------|-----------|
| 0.026 | |
| 0.1 | |

Table 2. Table of PSNR for the different noise variances of the Denoised Images.

DISCUSSION

From Figure 14, the plot of the noised image depicts a structure different from the plot in Figure 11, because the noise has altered the image and spread intensity more evenly across the pixels.

The plot in Figure 15, of the denoised image, resembles the plot of the original image in Figure 11, signifying that to some extent the image has been restored to the properties of its original form.

Table 1 shows the PSNR values of the noised images at the different variances; these are low compared with the corresponding values in Table 2 for the denoised images, signifying that the noise has been reasonably reduced.

PART B

EDGE DETECTION

Edge detection is a method of processing images to discover the outer limits (edges) of objects in them. It operates by identifying discontinuities in brightness. Edge detection is used to segment images and extract data in diverse fields such as image processing, computer vision and machine vision, and is an important field in image processing. Common edge detectors include Canny, Sobel, Laplace, Roberts and Prewitt. The four stages of edge detection are smoothing, enhancement, detection and localization. Smoothing reduces noise without extinguishing the true edges. Enhancement, also referred to as sharpening, applies a filter to improve the quality of the edges in an image. Detection resolves which edge pixels should be rejected as noise and which should be accepted (in most cases, thresholding supplies the criterion used for detection). Localization resolves the precise location of an edge.

NEURAL NETWORK

A neural network, more precisely an artificial neural network (ANN), is a system made up of many simple, highly interconnected processing elements that transform information through their dynamic response to external inputs. ANNs are developed to generate the output of a required program automatically. They were created on the basis of the responses of human neurons; the human capacity for immediate reflexes has been integral to the development of the ANN.

Neural networks are naturally arranged in layers, each comprising several interconnected nodes with an activation function. Figure 16 depicts the basic structure of an ANN with several nodes. Patterns are presented to the network through the input layer, which communicates with one or more hidden layers, where the actual processing is carried out through a system of specific weighted connections.

Figure 16. Basic ANN Structure.

EDGE DETECTION USING AN ARTIFICIAL NEURAL NETWORK (ANN)

Artificial neural networks are known to be relevant to edge detection, and several ANN models are used for detecting edges, including convolutional neural networks, cellular neural networks, feed-forward neural networks, back-propagation neural networks and novel neural networks. Back-propagation is the most commonly used technique for training an artificial neural network. Neural-network-based edge detection broadly involves initializing the weights, feeding the training samples, propagating the inputs, updating the weights, propagation through the hidden layer, and output formation. The major part of the implementation is the training used to develop the algorithm: the training examples have inputs that are patterned to develop the output. To detect edges in an image using an artificial neural network, several techniques can be applied: the input image is processed according to the training of the network, the input passes through the hidden layer with its many exchanges of information, and an output is produced at the output layer. Figure 17 shows an ANN-based edge detection structure with a 9-node input layer (g1, g2, ..., g9), a multi-node hidden layer and a single-cell output layer; a training sketch follows the figure.

Figure 17. Layered Artificial Neural Network based Edge Detection Structure.
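To make the 9-input structure of Figure 17 concrete, the sketch below trains a small feedforward network to reproduce Canny edge labels from 3x3 grayscale neighbourhoods. It assumes the Deep Learning Toolbox (feedforwardnet, train); the network size, the training subsample, and the use of Canny output as the teacher signal are illustrative choices, not the report's method.

% 9-input ANN edge detector in the spirit of Figure 17 (sketch).
I = im2double(GrayscaleOranges);
T = edge(GrayscaleOranges, 'canny', 0.1);          % teacher edge labels
X = im2col(I, [3 3], 'sliding');                   % 9 x N patch matrix
y = double(reshape(T(2:end-1, 2:end-1), 1, []));   % centre-pixel labels
net = feedforwardnet(10);                          % one 10-node hidden layer
sel = randperm(size(X, 2), 5000);                  % subsample for speed
net = train(net, X(:, sel), y(sel));
E = reshape(net(X) > 0.5, size(I) - 2);            % predicted edge map
figure, imshow(E), title('ANN-based edge detection');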


Figure 18. Qualitative Comparison of Results of Edge Detection.

ADVANTAGES OF ANN BASED EDGE DETECTION

Artificial neural network technology improves on conventional edge detection techniques in several respects. These include:

Reduced operational load, with particular benefit in reducing the impact of noise in an image.

Improved adaptive learning capacity: a neural network trained to detect edges in grayscale images of constant contrast can, with small changes, be retrained to resolve images of changing contrast in varying lighting environments, because neural networks can change their synaptic weights in real time.

The capacity to detect edges in images not encountered during the training (learning) phase: the generalization ability.

Nonlinear mapping ability: an artificial neuron can be nonlinear as well as linear. Non-linearity is a significant quality, especially if the physical process responsible for generating the input image is itself non-linear.

Several inputs and several outputs can be used during training. For example, conventional methods process one pixel at a time, whereas artificial neural networks can process several pixels as inputs. This is called parallel organization.

Artificial neural networks are fault-tolerant in nature, as their performance degrades gracefully under adverse conditions.

Another area inviting improvement is the optimality criterion, as the MSE is not always the best optimality criterion.

CONCLUSION AND FUTURE WORK

Effectively, a human observer can distinguish and identify parts of an object by its outline, provided the outline is accurate and reflects the form of the object. For the vision system of a machine, the task of recognition has been addressed with a similar procedure, which encouraged the development of several edge detection algorithms.

Edge detection generally reduces the quantity of data, filters out unnecessary information and preserves the significant structural qualities of an image. The gradient method detects edges through the maxima and minima of the first derivative of the image, while the Laplacian method looks for zero crossings in the second derivative. The Roberts, Sobel and Prewitt operators are examples of gradient edge detection.

Edge detection is relevant to reducing the dimensionality of data, securing content information, inspecting for missing portions, and measuring the dimensions of significant parts. Further applications include the recognition and authentication of electronic user-interface displays, and the detection and tracking of objects.

An area that can be improved is the de-noising around the edges, as the technique used did not reproduce edges as cleanly after noise removal as the original edge detection.


APPENDIX

clc;
clear all;
close all;

% Read the image from a specific directory
Oranges = imread('C:\MATLAb\Oranges.jpg');

% Convert the image to grayscale
GrayscaleOranges = rgb2gray(Oranges);

% Display the properties of the image
whos GrayscaleOranges

% Plot the grayscale histogram
a = imhist(GrayscaleOranges);
plot(a);

% Display the original image and the converted image
figure, subplot(1,2,1), imshow(Oranges), title('Original Image','FontSize',22);
subplot(1,2,2), imshow(GrayscaleOranges), title('Grayscale Image','FontSize',22);

% Canny edge detection with threshold 0.1
CannyEdge = edge(GrayscaleOranges,'canny',0.1);
figure, imshow(CannyEdge), title('Canny Edge Detection','FontSize',22);

% Prewitt, Sobel and Roberts edge detection
PrewittEdge = edge(GrayscaleOranges,'prewitt',0.1);
SobelEdge = edge(GrayscaleOranges,'sobel',0.1);
RobertsEdge = edge(GrayscaleOranges,'roberts',0.1);
figure, imshow(PrewittEdge), title('Prewitt Edge Detection','FontSize',22);
figure, imshow(SobelEdge), title('Sobel Edge Detection','FontSize',22);
figure, imshow(RobertsEdge), title('Robert Edge Detection','FontSize',22);

% Add zero-mean Gaussian noise with variances 0.026 and 0.1
NoiseImage1 = imnoise(GrayscaleOranges,'gaussian',0,0.026);
NoiseImage2 = imnoise(GrayscaleOranges,'gaussian',0,0.1);
figure, subplot(1,2,1), imshow(NoiseImage1), title('Noise Variance 0.026','FontSize',22);
subplot(1,2,2), imshow(NoiseImage2), title('Noise Variance 0.1','FontSize',22);

% Histograms of the noisy images, plotted on one axis
b1 = imhist(NoiseImage1);
b2 = imhist(NoiseImage2);
figure, plot(b1); hold on; plot(b2); hold off;

% Canny edge detection on the noisy images
NoisyEdge1 = edge(NoiseImage1,'canny',0.1);
NoisyEdge2 = edge(NoiseImage2,'canny',0.1);
figure, subplot(1,2,1), imshow(NoisyEdge1), title('Edge Detection for 0.026','FontSize',22);
subplot(1,2,2), imshow(NoisyEdge2), title('Edge Detection for 0.1','FontSize',22);

% Peak signal-to-noise ratio of the noisy images
[peaksnr1, snr1] = psnr(NoiseImage1, GrayscaleOranges);
[peaksnr2, snr2] = psnr(NoiseImage2, GrayscaleOranges);
fprintf('\n The Peak-SNR value is %0.4f', peaksnr1);
fprintf('\n The SNR value is %0.4f \n', snr1);
fprintf('\n The Peak-SNR value is %0.4f', peaksnr2);
fprintf('\n The SNR value is %0.4f \n', snr2);

% Denoise with the adaptive Wiener filter (9x9 neighbourhood)
DenoisedImage1 = wiener2(NoiseImage1,[9 9]);
DenoisedImage2 = wiener2(NoiseImage2,[9 9]);
figure, subplot(1,2,1), imshow(DenoisedImage1), title('Denoised Image for 0.026','FontSize',22)
subplot(1,2,2), imshow(DenoisedImage2), title('Denoised Image for 0.1','FontSize',22)

% Histograms of the denoised images, plotted on one axis
c1 = imhist(DenoisedImage1);
c2 = imhist(DenoisedImage2);
figure, plot(c1); hold on; plot(c2); hold off;

% Canny edge detection on the denoised images
DenoisyEdge1 = edge(DenoisedImage1,'canny',0.1);
DenoisyEdge2 = edge(DenoisedImage2,'canny',0.1);
figure, subplot(1,2,1), imshow(DenoisyEdge1), title('Edge Detection of Denoised Image of Variance 0.026','FontSize',22);
subplot(1,2,2), imshow(DenoisyEdge2), title('Edge Detection of Denoised Image of Variance 0.1','FontSize',22);

% Peak signal-to-noise ratio of the denoised images
[peaksnr3, snr3] = psnr(DenoisedImage1, GrayscaleOranges);
[peaksnr4, snr4] = psnr(DenoisedImage2, GrayscaleOranges);
fprintf('\n The Peak-SNR value is %0.4f', peaksnr3);
fprintf('\n The SNR value is %0.4f \n', snr3);
fprintf('\n The Peak-SNR value is %0.4f', peaksnr4);
fprintf('\n The SNR value is %0.4f \n', snr4);

% Vary the threshold on the first denoised image
DenoisyEdgeX1 = edge(DenoisedImage1,'canny',0.2);
DenoisyEdgeX2 = edge(DenoisedImage1,'canny',0.07);
figure, imshow(DenoisyEdgeX1)
figure, imshow(DenoisyEdgeX2)

% Vary the threshold on the second denoised image
DenoisyEdgeY1 = edge(DenoisedImage2,'canny',0.2);
DenoisyEdgeY2 = edge(DenoisedImage2,'canny',0.07);
figure, imshow(DenoisyEdgeY1)
figure, imshow(DenoisyEdgeY2)

LIST OF FIGURES

Figure 1. Block diagram showing an overview of the processes in this report.
Figure 2. Original Image assigned and its Grayscale Transformation.
Figure 3. Canny Edge Detection with Optimal Threshold of 0.1.
Figure 4. Bell-shaped Probability Distribution of Gaussian Noise.
Figure 5. Noised Images with Variances of 0.026 and 0.1.
Figure 6. Canny Edge Detection of the Noised Image with optimal Threshold of 0.1.
Figure 7. Denoised Images with Variances of 0.026 and 0.1.
Figure 8. Canny Edge Detection of the Denoised Image with optimal Threshold of 0.1.
Figure 9. Edge Detection of Image of Variance 0.026 with threshold of 0.2 and Threshold 0.07.
Figure 10. Edge Detection of Image of Variance 0.1 with threshold of 0.2 and Threshold 0.07.
Figure 11. Plot of Grayscale Image.
Figure 12. Result of Prewitt, Sobel, Robert Edge Detection on the Grayscale Image.
Figure 13. Flow Diagram of Typical Threshold Decision.
Figure 14. Plot of Noised Images.
Figure 15. Plot of Denoised Image.
Figure 16. Basic ANN Structure.
Figure 17. Layered Artificial Neural Network based Edge Detection Structure.
Figure 18. Qualitative Comparison of Results of Edge Detection.

LIST OF TABLES

Table 1. Table of PSNR for the different noise variances of the Noised Images.
Table 2. Table of PSNR for the different noise variances of the Denoised Images.



Theories underpinning Cognitive Behavioural Therapy (CBT)

This paper explores and critically reviews the main theories, behavioural and cognitive, that underpin Cognitive Behavioural Therapy (CBT), their influence on the development and evolution of CBT, and how they have expanded to inform assessments for children and young people.

The behaviourist movement began in 1913 with John Watson challenging the introspectionist approach and promoting the study of observable behaviour and actions. His manifesto 'Psychology as the Behaviorist Views It' introduced a number of principles regarding methodology and behavioural analysis: all behaviour is learned from the environment; psychology should be seen as a science, with theories supported by empirical data collected through controlled observation and measurement of behaviour; and the object of study should be observable behaviour, as opposed to internal processes such as thinking and emotion. It holds that there is little difference between the learning that takes place in humans and that in other animals, and that behaviour is the result of stimulus-response. This methodological behaviourist approach to learning is based on the notion that the mind is a tabula rasa (blank slate) at birth and that learning evolves through nurture and conditioning (McLeod, 2017).

It was Pavlov's experiments on dogs' conditioned reflexes in 1927 that introduced the theory of classical conditioning. Behavioural experiments were conducted observing and measuring dogs' unconditioned and conditioned responses to conditioned or unconditioned stimuli. Learning was demonstrated through conditioning, the formation of connections or associations between stimuli, suggesting that a response is learned and repeated through immediate association, whether the response is desirable or undesirable (Gross, 2009). In relation to human behaviour, the premise is that all behaviour is learned and that maladaptive learning can take place, leading to 'abnormal behaviour' and phobias; thus, if it can be learnt, it can be unlearnt. In particular, specific anxiety phobias can be linked with conditioning behaviours (William & Darity, 2008).

Mowrer (1939), building on the writings of Pavlov, stated that ‘anxiety is therefore a learned response, occurring to signals (conditioned stimuli) that are premonitory of (i.e. have in the past been followed by) situations of injury or pain (unconditioned stimuli)’.

Watson was the first psychologist to apply the principles of classical conditioning to human behaviour, through the Little Albert experiment (Watson and Rayner, 1920), in which a conditioned fear response to rats was developed in a young child. This experiment demonstrated the conditioning of fear acquisition.

Following on from these theories, Wolpe (1958) introduced systematic desensitisation, based on the principles of counter-conditioning and reciprocal inhibition: 'If a response inhibitory of anxiety can be made to occur in the presence of anxiety-provoking stimuli it will weaken the bond between these stimuli and the anxiety' (Wolpe 1969, cited in Gross, p. 820). Thus, being in a relaxed state when exposed to or imagining the feared object or situation, coupled with habituation, will begin to extinguish the conditioned response. This treatment is particularly effective for specific phobias, although it may be limited where patients have difficulty transitioning from imagery to real life. Wilson and Davison (1971) argue that relaxation might just be a useful way of encouraging the person to confront their fears, and Marks (1973) states that it is the exposure to the feared situation, rather than the relaxation aspect, that is the most effective form of treatment and learning. The theory also builds on Mowrer (1939): fear can acquire motivating properties, and reducing fear can reinforce behaviours such as avoidance, which can then build in strength. From a practical point of view, this theory underlies the thinking behind those forms of behaviour therapy that deal with fear reduction by breaking the links between stimulus and undesired response, i.e. desensitisation and flooding (Wolpe, 1958; Eysenck & Rachman, 1965; Rimm & Masters, 1974).

Rachman's (1976) theory of fear acquisition 'assumes that fears are acquired and that the process of acquisition is a form of conditioning'. Fears can develop via three routes: conditioning, vicarious exposure, and information and instruction. In contrast to classical conditioning, fears can develop without direct contact with the fear stimulus, through either vicarious exposure or information transmission. However, the strength of a fear depends on the number of repetitions of the association between the pain/fear experience and the stimulus, as well as on the intensity of the fear experienced.

However, these approaches do not fully explain why most people experience a range of fear-provoking situations yet remain fairly fearless. It is also worth noting that these theories account for fears that arise in an acute manner, but are more difficult to apply when the onset is uncertain (Goorney and O'Connor, 1971). Marks (1969) claims that fears that develop gradually (e.g. social fears) and cannot be related to a specific circumstance are a problem for this approach.

Edward Thorndike's (1905) Law of Effect led to the next phase of behaviourism by providing the notion of operant behaviour. Thorndike (1898) studied learning in animals (e.g. puzzle boxes from which cats had to escape) to provide empirical evidence for his theory. The experiments led to his proposition that any behaviour followed by pleasant consequences is likely to be repeated, and any behaviour followed by undesirable consequences is likely to cease (operant conditioning).

Burrhus Skinner (1936), building on Thorndike's theory, began to make the distinction between respondent and operant behaviour. Skinner questioned the theory of classical conditioning, claiming that most animal and human behaviour is not triggered by specific stimuli but is rather a result of how organisms operate on their environment, and that this behaviour is key in determining certain consequences. He claims that the learner is far more active than Pavlov or Watson would recognise (Gross, 2005, p. 178).

Skinner (1948) studied operant conditioning by conducting experiments using rats or pigeons placed in a ‘Skinner Box’ (a form of puzzle box), to explore how operants (intentional actions) have an effect on the surrounding environment and identify the processes which made certain operant behaviours more or less likely to occur. Skinner concluded that operant conditioning uses reinforcement and punishment systematically to facilitate learning (William & Darity, 2008). If behaviour is not supported by reinforcement then it becomes extinct.

This was a move away from the classical conditioning perspective, which focused only on antecedents and reflexes, by emphasising the influence of reinforcers (positive, i.e. rewards, and negative, i.e. the removal of aversive stimuli) to increase and strengthen behaviour, and of punishment to reduce and weaken behaviour.

Skinner adapted and developed this theory to demonstrate the effectiveness, speed of learning and extinction of certain behaviours, introducing the idea of behaviour shaping through successive approximation (a series of rewards providing positive reinforcement for behaviour change). Bringing this theory into therapy allows us to understand the key operants maintaining maladaptive behaviours; for example, most anxiety disorders are sustained by negative reinforcement such as avoidance. This can be effectively applied, especially with children and young people, where we are able to moderate and shape behaviour using positive reinforcement (praise and rewards).

Although Skinner’s theory can explain a range of behaviours and the process of learning, it fails to take into account inherited and cognitive factors in learning.

In contrast, Hull (1943) introduced Drive-Reduction Theory, which sought to demonstrate that learning develops through motivation: an imbalance creates a need, which in turn creates motivation and drive, so that behaviour can be understood as an attempt to reduce the drive and meet the need. On this account, the association of stimulus and response in classical and operant conditioning only results in learning if accompanied by a drive reduction (Ed. Jacqueline et al).

In 1963 Albert Bandura introduced a more sophisticated model of learning – Social Learning Theory.

Bandura’s theory explains that learning occurs in a social context and that children and adults learn through imitation and the observation of consequences (vicarious learning). Behaviour that is rewarded is more readily imitated than behaviour that is punished, as illustrated by his Bobo Doll experiment in 1961 (Ed. Jacqueline et al). Children are surrounded by various models, e.g. parents, peers and teachers, who provide examples of how to behave in certain situations; children observe the consequences of this behaviour and begin to internalise the learning. A child will start to imitate behaviour if they believe it will be positively reinforced. Another factor influencing the level of imitation is whether the child perceives the ‘model’ as similar to themselves, e.g. the same gender.

However, it is important to bear in mind that positive or negative reinforcement will have little impact if the reinforcement offered externally does not match the individual’s needs (McLeod, 2016).

Bandura’s Social Learning Theory presents four mediational processes: Attention, the level of exposure to a behaviour and how much it is noticed; Retention, how well the behaviour is remembered, since much social learning is not immediate; Reproduction, the ability to perform the behaviour; and finally Motivation, the rewards and punishments that follow the behaviour. Unlike previous theories, Bandura’s Social Learning approach began to recognise the thought processes that occur when deciding whether or not to perform a particular behaviour.

McLeod (2016) offers some criticisms of Social Learning Theory: human behaviour is complex and is more likely to arise through an interaction between nature and nurture than from the environment alone. Social learning theory also cannot explain all behaviour, particularly where a person imitates certain behaviours in the absence of any apparent model in their life.

Overall, there are strengths in behaviourist approaches: they provide empirically based evidence of behaviours learnt via stimulus-response associations, and they translate into therapeutic practice, in particular for addressing behavioural difficulties and modification, specific phobias, OCD and other anxiety/fear-based disorders.

However, behaviourism only really provides a partial explanation of human behaviour.

Although it provides a science-based viewpoint, comparing animal with human behaviour is limiting: we cannot apply the basic principles of learning equally to all species (Seligman, 1970). Weiskrantz (1982) stated that classically conditioned responses in humans extinguish more rapidly because they are modulated by more complex human memories (cited in Gross, p.181). Behaviour is measured objectively, and the approach therefore does not consider the impact and complexity of cognitions or emotions.

Carl Rogers, from a Humanist viewpoint, claims that the scientific experiments used to control variables create an artificial environment and have low ecological validity; there is also a need to consider free will in decision-making (McLeod, 2016).

Freud’s psychodynamic approach adds that ‘behaviourism does not take into account the unconscious mind and its influence on behaviour and that people are born with instincts rather than a blank slate’. It is also important to consider the role of nature and biology in influencing behaviour, for example fluctuations in hormones (McLeod, 2016).

Furthermore, Social Learning Theory did not adequately account for how we develop a whole range of behaviours, including thoughts and feelings, knowledge, concepts and abstract rules. In response, Bandura (1986) developed Social Cognitive Theory, which incorporates these factors.

However, it was Bandura’s Social Learning Theory of 1963 that formed the bridge from a behaviourist approach to learning to a more cognitive one.

Cognitive Theory

Cognitive Psychology grew in popularity during the mid-1950s, owing to dissatisfaction with the behaviourist approach’s focus on external behaviour rather than internal information processing.

“Outcome studies of Behavioural Therapy showed considerable effectiveness in the treatment of phobias and obsessive compulsive disorders, however, this therapy became too limited in its framework and range of problems for which it was effective”. (Rachman, 1977: Oxford, p.3)

For example, Grant et al (2007) state that cognitive factors in behavioural change, e.g. covert behaviour such as obsessional thoughts, or observational learning, could not be directly addressed by behavioural methods alone.

Cognitive learning is defined as learning concerned with acquiring problem-solving abilities through conscious thought (Longe, 2016). Cognitive Theory thus looks at how an individual’s intelligence and acquisition of information from their environment affect their behaviour.

Albert Ellis’s publication of Reason and Emotion in Psychotherapy in 1962 placed the emphasis on the primacy of cognition. Ellis asserted that people are disturbed not by events themselves but by their perception of them. Through his Rational Emotive Behaviour Therapy (REBT), Ellis claimed that irrational thoughts are the main cause of all types of emotional distress and behaviour disorders (Gross, 2005, p.826). According to Ellis, the aim is to replace these irrational beliefs with more reasonable ones.

Meichenbaum in 1977 built on this theory, explaining that neurotic behaviour is due to ‘faulty internal dialogues’, and that training patients to self-instruct successfully when experiencing these challenging thoughts would therefore reduce the behaviour. Wolpe (1978) argued that this Self-Instructional technique is less effective for severe anxiety, because many neurotic fears are triggered by objects and situations which the patient already understands to be harmless, the fear being irrational (Gross, 2005, p.826).

Beck in 1976 introduced the Cognitive Model of Emotional Disorders, which proposes that distorted or dysfunctional thinking is common to all psychological disturbances. It is widely agreed, however, that Beck’s publication of Cognitive Therapy for Depression in 1979 was the most influential of the cognitive models that fuelled the revolution. Beck claimed that depressed individuals feel the way they do because their thinking is dominated by negative schemas, fuelled by certain cognitive biases, which cause the person to misperceive reality (Gross, 2005, p.826).

However, Champion and Power (1995) criticise Beck’s model, believing it underemphasises social factors. Gross (2005, p.827) notes that Freud perceived people’s feelings as the dominant part of the ego, affecting our thoughts, whereas Beck considers that it is our thoughts that affect our feelings. Gross (2005) also explains that it is difficult to prove that thoughts are the cause of depression, because manipulating people’s emotions can also change their thinking.

Beck, however, recognised the value of behaviour therapy’s emphasis on scientific method, empirical research and verifiable evidence (Bennett-Levy et al, 2004) as well as the current maintaining factors rather than past causes, understanding that behaviour change is also a means of cognitive and affective change.

It is here that we begin to see the emergence of CBT. Westbrook et al (2011) describe CBT as the combination of several principles: collaborative empiricism, the integration of behaviourism and cognitive theory, the ‘here and now’ principle, and recognition of the interacting systems between environment, thoughts, feelings, behaviour and physiology.

Due to the scientific and robust evidence base that behavioural and cognitive theories together provide, Cognitive Behavioural Therapy is recommended by NICE (the National Institute for Health and Care Excellence) as the first-line intervention for depression and anxiety in children and young people.

However, there is now an emerging new wave of CBT, such as Mindfulness, Acceptance and Commitment Therapy and Dialectical Behaviour Therapy, which build on behavioural and cognitive approaches but emphasise emotions and increasing acceptance of an individual’s internal experiences rather than fighting against them (Hayes, 2004). Although these interventions are still in their infancy, DBT and Mindfulness already have a strong evidence base for their clinical effectiveness; their application to mental health problems, however, is still at an early stage of investigation (Grantham & Cowtan, 2015).

CBT Assessment

How, then, do such theories inform CBT assessments, and which theories and models provide the most robust way of assessing a person’s problems in a CBT-focused way?

CBT assessment takes various forms, e.g. interviews, observations and standardised measures, in order to identify an individual’s behaviour, cognition, emotion and physiology and their particular response system.

Kirk (1989, cited in Grant et al) therefore states that ‘assessing a problem with multiple techniques produces a more comprehensive identification of the problem and gives the therapist a better picture of how well the treatment can address the problem’.

Thus, the aim of assessment, according to Westbrook et al (2011), is to arrive at a formulation by repeatedly building and testing hypotheses about which processes might be important, with the purpose of encouraging change through goal setting.

Within the CYP IAPT principles as set out in the ‘Delivering Well and Delivery Well’ document (CYP IAPT in CAMHS Values and Standards, 2014), it is important that CBT assessments promote collaborative practice, joint decision-making, the use of Routine Outcome Measures and evidence-based practice.

Initially within any assessment we would be gathering information from the client in order to then analyse the problem using various cognitive and behavioural tools. CBT models can often provide a form of psycho-education and ways in which an individual can begin to develop awareness and understanding of their internal processes.

The use of Routine Outcome Measures in assessment helps to improve clinical practice and outcomes.

The Parent and Child versions of the Revised Child Anxiety and Depression Scale (RCADS) are standardised, evidence-based scales (Chorpita et al, 2000). Informed primarily by cognitive and behavioural theories, they collate information on presenting symptoms, thoughts, feelings, behaviours and physiology, and measure their frequency to help identify anxiety disorders and low mood. Miller and Duncan (2000) developed a brief Outcome Rating Scale to elicit feedback on functioning, another tool which creates an opportunity to explore further the negative cognitions or behaviours impacting on the young person’s day-to-day functioning.

At the information-gathering stage we would focus on ascertaining the range of variables affecting the individual: situational, behavioural, affective, physiological, cognitive and social/interpersonal factors; the consequences and impact of the behaviour; possible coping strategies; and maintaining processes and behaviours. It is important also to assess vulnerability factors, levels of risk, precipitating and modifying factors, and the frequency, intensity, severity and duration of the problem. In addition, confidentiality and consent should be discussed with the young person (Westbrook et al, 2011).

CBT assessment draws on various models and frameworks provided by behavioural and cognitive theories as useful tools to help navigate through the assessment process in order to gather the key information on these intricate variables to help inform formulation.

Carr (2006) offers a useful longitudinal assessment and formulation tool (the 5 Ps) for gathering relevant information: Predisposing factors, any factors that have contributed to the person’s vulnerability to their current problem; Precipitating factors, which cause the onset of the problem; Presenting factors, a description of the person’s current difficulties including risk assessment; Perpetuating factors, which maintain the current difficulties; and Protective factors, which prevent or lessen the problem, e.g. the client’s resilience and family network.

This framework provides the opportunity to integrate behavioural and cognitive assessment: for example, during the exploration of precipitating factors, pre-existing beliefs may present themselves, and within presenting factors one can define thoughts, feelings, behaviours and physiology as well as identify any maintaining cognitions or behaviours.

Skinner used the term Behaviour Analysis to describe the identification of target behaviours; it is thus important that we are able to recognise the cause, or the function, of the unhelpful behaviour.

One tool that evolved from this theory and can be applied to behaviour analysis is the Functional Analysis model, which examines the relationship between behaviour and the environment. It is a scientific approach, and information can be gathered from a variety of sources: behavioural observations, behavioural measures (e.g. Beck’s Depression Inventory, 1996) and self-monitoring activity diaries. Functional Analysis is not a ‘method’ but one possible product of the application of behavioural assessment (Haynes & O’Brien, 1990). It is also termed ‘A-B-C Analysis’, as it aims to identify three main components, Antecedents, Behaviour and Consequences, and to form hypotheses about their inter-relationships (Yoman, 2008). Skinner claimed that all behaviour can be broken down into these three components to identify the function that the problem behaviour serves.

Questions used to identify these three components might include: What are the triggers to the problem, and what physical symptoms are present? What does the person do, e.g. avoidance, safety behaviours, escape? What happens afterwards? What is the impact and consequence of this behaviour? What makes the problem better or worse?

However, used as a stand-alone assessment tool this is limited, as it focuses on behaviour and does not allow the therapist and client to explore cognitions. Integrating Ellis’s (1957) three-stage ABC model of irrational beliefs (Activating event, Beliefs, and Consequences of the negative beliefs) would therefore help provide a much richer picture of the client’s problems and of the maintaining beliefs and behaviours (McLeod, 2015).


How to complement an Anti-virus in order to protect an SME sufficiently?

Abstract

A network can be prone to cyber-attacks, and with the advancement of information technology an attack can happen at any time; to withstand these attacks, security policies, security frameworks and tools are developed and deployed within a network. Compared with a multinational corporation’s network infrastructure, an SME’s network commonly lacks robust security: SMEs rarely have a large cybersecurity budget and cannot afford a high-end network security framework, which leaves them more vulnerable and at greater risk from malicious parties. Meanwhile, hackers are becoming more skilled at using advanced technologies, such as artificial intelligence, to launch attacks, and regulations such as GDPR have placed pressure on organisations to deploy strict measures to keep data fully protected. This publication identifies common cyber attacks on SMEs, reviews some of the solutions used to mitigate them, and recommends further measures that can enhance the security posture of these enterprises.

1 Introduction

A security threat arises from a weakness in a system that has the potential to be exploited by an attacker. Such a weakness, left unaddressed, can lead to a cyber-attack causing severe damage to an organisation: above all, clients and users lose confidence in a company that has been hacked and has shown weakness in the security of its data. Harshitha and Ramesh (2013) note that cyber attacks have become a curse of technology, since malicious users have been able to illegally access and destroy critical system resources while restricting authorised users from accessing the system.

At the same time, small and medium enterprises (SMEs) are in the firing line for data breaches. Rose (2018) reports that 61 per cent of SMEs were hit by cyber attacks in one year. Hackers often see SMEs as soft targets because they lack cybersecurity expertise and awareness, as well as the time and money to research, design and deploy reliable security tools. McGoogan (2017) further reveals that the average cost of an attack is over £1,500, excluding additional indirect costs from reputational damage, the cost of notifying customers, and penalties for violating regulations such as GDPR. Unfortunately, most SMEs deploy only antivirus security programs; this research shows that such a measure alone can play only a partial role in an overall protection strategy, and that other, more advanced measures should be considered for a holistic security approach in small and medium enterprises.

2 Common Cyberattacks in SMEs

2.1 Malware

Malware, one of the most challenging threats to an organisation, is malicious code designed to affect a device in a harmful way. Endpoint security mechanisms such as antivirus software, firewalls and intrusion prevention systems analyse the signature or behaviour of an executable to judge its legitimacy, and identification by these security approaches is commonly based on predefined patterns and signatures. Cybercriminals can therefore bypass these measures, without alerting the user, by developing malware with a unique pattern or signature that the tools cannot identify. Commonly, malware performs actions such as keylogging, sending confidential information out of the network, performing actions on the machine, and monitoring user activities such as browsing.

2.2 Phishing

Cybercriminals use phishing as a technique to make victims reveal information or install malware; it is generally one stage of a larger attack, and is also used as part of social engineering attacks such as credential theft. Phishing attacks are conducted so that the user or victim trusts the source, for example an email appearing to come from a known person, bank, business partner or co-worker. Downloading an attachment or clicking a link can then install malware on the computer or redirect the user to a cloned site that steals credentials. Statistics from the Federation of Small Businesses show that 49% of SMEs were victims of phishing attacks between 2014 and 2015 (Smith, 2016).
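As a toy illustration of the cloned-site lure described above, the following Python sketch flags a link whose visible text names one domain while the underlying href points somewhere else. It is a minimal heuristic sketched under our own assumptions, not a vetted detector; real phishing filters combine many more signals.

from urllib.parse import urlparse

# Toy heuristic: flag a link when the domain shown to the user differs
# from the domain the href actually points to (a common cloned-site lure).
def looks_like_phish(display_text: str, href: str) -> bool:
    shown = urlparse(display_text if "//" in display_text
                     else "http://" + display_text).hostname or ""
    actual = urlparse(href).hostname or ""
    return shown != "" and shown != actual

# The bank name is shown, but the link leads to an attacker-controlled host.
print(looks_like_phish("www.mybank.example", "http://mybank.example.evil.test/login"))  # True
print(looks_like_phish("www.mybank.example", "http://www.mybank.example/login"))        # False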

2.3 Sniffing

Sniffing, or eavesdropping, is an attack in which an adversary listens in on a communication stream passing through a network. Sniffing is a critical security issue for SMEs, as most use wireless or wired routers for internet connectivity. A lack of encryption enables attackers to read the data transmitted along these paths simply by sniffing the network, using tools such as Wireshark. Users who remotely access devices with credentials over an unencrypted channel are at risk of having those credentials sniffed.
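To make the risk concrete, the sketch below shows what an attacker sees when credentials cross the network in plaintext. The captured payload is hypothetical, invented for illustration; with TLS in place the same bytes would be ciphertext and the pattern search would find nothing.

import re

# Hypothetical plaintext HTTP POST body as it would appear to a sniffer;
# over TLS the attacker would see only ciphertext.
captured_payload = b"POST /login HTTP/1.1\r\nHost: intranet.example\r\n\r\nuser=alice&password=hunter2"

# Search the captured bytes for common credential field names.
for match in re.finditer(rb"(user|password)=([^&\s]+)", captured_payload):
    field, value = match.group(1).decode(), match.group(2).decode()
    print(f"leaked {field}: {value}")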

2.4 Password guessing attacks

Typical networks are configured with remote management capability on their network devices; this makes the physical devices easy to access, upgrade and troubleshoot, and improves availability, because there is no need to be physically on site. Although this is less time-consuming and makes the network easier to manage, it opens the door to password attacks, the most common being brute force. A brute force attack is carried out by attempting to log on to a device or system repeatedly, with the attempts drawn from a pre-built dictionary used to guess the credentials of the target system; attackers can automate this with tools such as Hydra. Gaining access to network devices such as switches, routers and servers can allow an attacker to modify network configuration files and routing tables, or delete critical data.
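A common mitigation is to rate-limit or lock out repeated failures. The minimal Python sketch below illustrates one assumed policy (five failed logins lock the source out for 300 seconds); real devices implement this inside their login/AAA subsystem, and all names and thresholds here are illustrative.

import time
from collections import defaultdict

MAX_FAILURES = 5        # assumed policy: 5 failures...
LOCKOUT_SECONDS = 300   # ...lock the source out for 5 minutes
failures = defaultdict(list)   # source IP -> timestamps of failed attempts

def allow_login_attempt(source_ip: str) -> bool:
    # Keep only failures inside the lockout window, then check the count.
    now = time.time()
    failures[source_ip] = [t for t in failures[source_ip]
                           if now - t < LOCKOUT_SECONDS]
    return len(failures[source_ip]) < MAX_FAILURES

def record_failed_login(source_ip: str) -> None:
    failures[source_ip].append(time.time())

Against a dictionary tool such as Hydra, a policy like this turns thousands of guesses per minute into a handful per lockout window.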

2.5 DoS and DDoS

DDoS (distributed denial of service) attacks misuse the operational behaviour of computer network protocols, most commonly ICMP and TCP. Attacks such as TCP SYN flooding exploit the three-way handshake on which every TCP connection is based: attackers use multiple systems to send enormous numbers of TCP SYN packets to the target machine, and the resulting uncompleted, half-open connections accumulate on the target. The ICMP variant is similar: attackers direct a large volume of ICMP echo traffic at the victim machine, forcing the system to handle so much network traffic that CPU, RAM and disk resources are consumed until the system runs out and shuts down. In most cases DDoS involves zero or minimal data loss; nevertheless, the attack interferes with normal access to systems and resources by end users, since hackers flood company networks and servers with millions of requests to degrade performance or shut a system down altogether (Rose, 2018).
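The half-open-connection signature of a SYN flood can be spotted by simple counting. The sketch below tallies TCP SYNs per source that are never followed by a handshake-completing ACK; the packet representation and the threshold are assumptions for illustration, since a real monitor would read packets from a capture interface.

from collections import Counter

SYN_THRESHOLD = 100   # assumed per-window limit before flagging a source

def flag_syn_flooders(packets):
    # packets are assumed to be dicts like {"src": ip, "flags": "S" or "A"}.
    half_open = Counter()
    for pkt in packets:
        if pkt["flags"] == "S":      # new connection attempt
            half_open[pkt["src"]] += 1
        elif pkt["flags"] == "A":    # handshake completed
            half_open[pkt["src"]] -= 1
    return [src for src, n in half_open.items() if n > SYN_THRESHOLD]

# A source sending 150 SYNs and never completing a handshake is flagged.
sample = [{"src": "198.51.100.7", "flags": "S"}] * 150
print(flag_syn_flooders(sample))   # ['198.51.100.7']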

2.6 Ransomware

In ransomware attacks, a malicious actor infects the target systems and holds critical information to ransom (Rose, 2018). The hacker demands a sum of money, mostly paid in bitcoin (Paquet-Clouston, Haslhofer, & Dupont, 2018). Ransomware hit the headlines in 2017 with the WannaCry attack, which infected more than 200,000 computers across 150 countries and regions; according to Rose (2018), the attack almost brought the NHS to a standstill. Ransomware gains access to a system through phishing emails containing malicious URLs, or sneaks into networks through other loopholes in software. In other cases, employees downloading and installing applications from unknown sources can lead to ransomware infections.

2.7 Social Engineering

An SME can have an antivirus program and a sophisticated firewall in place, but such security tools will not prevent attacks launched through one of the weakest links in a cyber programme: the people (Rose, 2018). In cybersecurity, technology is only a small part, since many attacks happen through social engineering, where employees are manipulated by malicious actors looking for ways to penetrate a system; the hacker collects information that can be combined with other details to launch attacks. Smith (2016) quotes statistics from the Federation of Small Businesses showing that social engineering, such as baiting, cost the small business community more than £5 billion in one year. Baiting is a social engineering attack in which a malicious hacker leaves malware-infected hardware, such as a USB disk, where an unsuspecting target is likely to find it, plug it into a device connected to a company network, and so spread the malware to the entire system (Airehrour, Nair, & Madanian, 2018). The same findings reveal that 66% of SMEs have fallen victim to social engineering attacks in the last two years.

3 Prevention Techniques Currently Used, and their Weaknesses

3.1 Antivirus

Antivirus software provides endpoint security for a computer node in the network, mainly protecting the data on its storage devices. It is designed to identify malicious bit patterns, known as signatures; antivirus companies maintain libraries of millions of signatures against which activity is checked (e.g. VirusTotal). If malicious activity is identified, actions such as accessing system files and running on the system are blocked and prevented.
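The signature approach described here (and in section 2.1) reduces, at its simplest, to comparing a fingerprint of each file against a known-bad list. The Python sketch below uses SHA-256 file hashes as stand-in signatures; the hash value is a placeholder rather than a real malware signature, and production scanners match far richer byte patterns than whole-file hashes.

import hashlib
from pathlib import Path

# Placeholder entry; a real engine ships millions of signatures.
KNOWN_BAD_HASHES = {"0000000000000000000000000000000000000000000000000000000000000000"}

def scan_directory(root: str) -> None:
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in KNOWN_BAD_HASHES:
                print(f"signature match: {path}")

scan_directory(".")   # scan the current directory tree

The same sketch exposes the weakness discussed next: flipping a single byte of the malware changes the hash entirely, which is why attackers generate endless variants to slip past signature matching.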

However, according to Korolov (2018), the traditional signature-based antivirus program widely used in SMEs for its low cost and ease of deployment is poor at detecting and mitigating newly discovered threats, commonly referred to as zero-day exploits, as well as some ransomware. Hackers are growing more skilled and can use innovative technologies such as machine learning to generate multiple variants of malware to avoid detection by the signature-based tool. In addition, the latest fileless attacks cannot easily be detected by legacy antivirus programs (Korolov, 2017).

In effect, SMEs should consider complementing the familiar, traditional antivirus program with newer, reliable security technologies. The tool is not being replaced: it remains part of a multi-layered protection strategy, since it can still mitigate thousands of common malware attacks, leaving the advanced security measures a smaller and more manageable workload (Korolov, 2018).

4 Strict GDPR Regulation and The SME’s Cybersecurity Strategy

While reviewing and modelling the cybersecurity strategy of an SME, it is vital to understand the regulatory and compliance aspect. In particular, small and medium businesses should understand the General Data Protection Regulation (GDPR) and its implications. Josh Eichorn, CTO of Pagely, notes that this regulation gives European citizens more control over the security of their personally identifiable information (Eichorn, 2018). SMEs with websites (in reality, almost all businesses) will be required to meet stringent compliance mandates to protect user data, and such requirements clearly shape the cybersecurity strategy. Before collecting such information, SMEs should ensure that they obtain consent from its owners and clarify how they intend to use it. Since the regulation requires increased data privacy and security, businesses must tighten their cybersecurity strategy, which means integrating reliable practices beyond the legacy antivirus program.

Antivirus programs are important in preventing malware and other viruses, but in a hyper-connected world the tool alone is inadequate for ensuring maximum security. In effect, a multi-layered cybersecurity strategy featuring an antivirus, firewalls, IDS/IPS, encryption solutions and cybersecurity awareness training is vital for keeping all data private and avoiding GDPR violations. SMEs should also assess security risks and report incidents when they occur.

5 Additional Possible Solutions to Enhance Security and Meet Compliance

The sections above describe a few of the crucial threats and attacks highlighted in recent years, and review the security measure most commonly deployed by SMEs. The problem identified is that the antivirus tool widely deployed by businesses is not entirely effective in protecting systems from some forms of attack; additional measures are therefore required to enhance the security posture of an SME. The section below focuses on prevention approaches that can be configured and implemented to reduce the risk of these attacks.

5.1 Firewalls

A firewall is a network security device or program that monitors incoming and outgoing network traffic and decides whether data packets should be allowed to pass or be blocked, based on a defined set of security rules. Firewalls are the first line of defence in network security, and SMEs can deploy them as a barrier between secured, controlled internal networks and untrusted, uncontrolled external networks, particularly the Internet. Notably, a firewall can be implemented in hardware or software.

SMEs can deploy different firewalls such as:

Proxy firewall – serves as a gateway from an external network to the internal network, preventing any direct connection between the secured and the uncontrolled network.
Stateful inspection firewall – controls the flow of traffic based on details such as state, protocol and port, filtering content according to defined rules.
Application layer firewall – inspects traffic specific to an application or service, monitoring for malicious data transmitted between hosts.

Stateless and stateful firewalls focus on securing systems at the network layer. Application-layer attacks have increased relative to network-layer attacks precisely because stateful and stateless firewall implementations at the network layer are more robust than application-layer defences. Firewall policies have also grown with the complexity of modern network implementations: on the Linux platform, for example, iptables operates on three major components, namely rules, chains and tables, while on Windows a user can configure firewall rules per application and service.

Rule – the component that defines which packets should be analysed and what action to take on incoming and outgoing traffic.
Chain – an ordered list of rules; the three primary chains are Input, Output and Forward.
Table – a collection of chains grouped independently by purpose; the built-in tables are Filter, NAT and Mangle.

Of the three tables, the most important is the Filter table; it is the default among the three, and the default table for any defined rule. For inbound and outbound traffic the Filter table applies the primary chains given below (a toy model of this rule/chain/table structure is sketched after the list):

Input chain – incoming packets addressed to the host pass through the Input chain.
Output chain – outgoing packets generated by the host pass through the Output chain.
Forward chain – packets routed through the host towards another destination pass through the Forward chain.
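As a toy Python model of the structure just described, the sketch below groups rules into chains and chains into a table, with the first matching rule deciding the verdict. It is purely illustrative: real filtering happens in the kernel via iptables/nftables, and the rules and default policy here are assumptions.

from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    dport: int

def rule_allow_ssh(p: Packet):   # a rule returns a verdict or None (no match)
    return "ACCEPT" if p.dport == 22 else None

def rule_drop_all(p: Packet):    # catch-all rule at the end of a chain
    return "DROP"

filter_table = {                 # the Filter table: chain -> ordered rules
    "INPUT":   [rule_allow_ssh, rule_drop_all],
    "OUTPUT":  [],
    "FORWARD": [rule_drop_all],
}

def traverse(chain: str, packet: Packet) -> str:
    for rule in filter_table[chain]:
        verdict = rule(packet)
        if verdict is not None:  # first matching rule decides
            return verdict
    return "ACCEPT"              # assumed default policy for an empty chain

print(traverse("INPUT", Packet("10.0.0.5", "10.0.0.1", 22)))  # ACCEPT
print(traverse("INPUT", Packet("10.0.0.5", "10.0.0.1", 80)))  # DROP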

5.2 Intrusion Detection/Prevention Systems

Intrusion detection is the process of monitoring events in an SME network and analysing them to discover anomalies such as incidents, potential threats or violations. Intrusion prevention is the succeeding activity, in which discovered anomalies are acted upon to mitigate attacks. Cutting-edge intrusion detection and prevention systems are now available for deployment in businesses; as noted above, the present-day hacker is skilled and uses sophisticated tools to launch attacks that can thwart legacy signature-based antivirus programs.

Figure 2: Intrusion detection and prevention solution (Juniper)

An intrusion detection and prevention system can be deployed to monitor a network and to identify and mitigate possible incidents. An advanced complement to IDS/IPS is the security information and event management (SIEM) system, used to log, analyse, mitigate and report anomalous activities. The underlying principle of SIEM operation is that critical security data about the SME is produced in many different locations, and storing it at a single point of view makes it possible to detect trends and anomalies (Cotenescu, 2016). In other words, the tool centralises and consolidates an organisation’s security data so that discovered threats can be responded to accurately and the organisation’s risk-compliance posture improved.
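The consolidation principle can be shown in a few lines: events produced in different locations are merged into one time-ordered stream, where correlation across sources becomes possible. The log formats and the correlation rule below are assumptions for illustration.

from collections import Counter

# Hypothetical event logs from two different devices.
firewall_log = [("2024-01-01T10:00:01", "10.0.0.9", "blocked")]
vpn_log      = [("2024-01-01T10:00:03", "10.0.0.9", "login_failed"),
                ("2024-01-01T10:00:05", "10.0.0.9", "login_failed")]

def consolidate(*logs):
    # Merge all sources into one stream ordered by timestamp.
    return sorted((event for log in logs for event in log), key=lambda e: e[0])

def correlate(events, min_hits=2):
    # Flag any source appearing in several suspicious events across sources.
    hits = Counter(src for _, src, action in events
                   if action in ("blocked", "login_failed"))
    return [src for src, n in hits.items() if n >= min_hits]

print(correlate(consolidate(firewall_log, vpn_log)))   # ['10.0.0.9']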

A signature-based IDS/IPS compares observed events against known signatures to detect possible incidents. An anomaly-based solution instead compares a definition of normal system operation and activity with actual real-time events, to determine whether significant variances exist; this approach is valuable for detecting previously unknown threats.
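A minimal sketch of the anomaly-based approach follows: learn a baseline of normal activity, then flag large deviations. The metric (requests per minute), the sample values and the three-sigma threshold are assumptions for illustration.

import statistics

baseline = [112, 98, 105, 121, 99, 110, 95, 108]   # assumed normal samples
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(requests_per_minute: float, k: float = 3.0) -> bool:
    # Flag anything more than k standard deviations above the learned normal.
    return requests_per_minute > mean + k * stdev

print(is_anomalous(115))   # False: within normal variation
print(is_anomalous(900))   # True: possible flood or scan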

5.3 Cybersecurity Awareness Programs

One critical aspect of a cybersecurity strategy is the people. The common attacks reviewed in this report, such as social engineering and phishing, target the employees of SMEs, who easily fall into the hacker’s trap; this is attributable to a lack of cybersecurity awareness. SMEs should therefore develop training programmes focused on educating workers about the tactics employed by hackers and the ways to mitigate them. Winkler (2017) recommends that a cybersecurity awareness training programme be supported by SME management, cover all crucial departments and personnel, feature relevant content, and be assessed to determine its success. Unfortunately, despite statistics indicating that trusted employees are among the weakest links in cybersecurity, few SMEs invest in mitigating insider threats. Aloul (2012) states that security awareness training is often overlooked in most information security programmes; the majority of businesses instead expand their reliance on advanced security technology while ignoring the training required for their workers. Attackers will continue to exploit this weakness to gain unauthorised access to systems, so it is important to design and implement a security training programme that increases cybersecurity awareness among SME employees.

Conclusion

Considering the above analysis, small and medium-sized network operators typically cannot afford high-end security systems. Installing antivirus software on end workstations will protect those systems against known attacks, but this alone does not provide sufficient security for an SME: attacks such as social engineering, password guessing and credential reuse cannot be prevented by antivirus software, and in the event of an attack antivirus does not provide data security properties such as confidentiality and integrity. At the same time, the introduction of the GDPR requires more stringent cybersecurity measures if huge penalties for non-compliance are to be avoided. In effect, SMEs should design and deploy a multi-layered security strategy that achieves maximum data protection.

References

Harshitha, B., & Ramesh, N. (2013). A survey of different types of network security threats and its countermeasures. International Journal of Advanced Computational Engineering and Networking, 1(6), 28-31.
Rose, B. (2018). The biggest cyber threats facing SMEs in 2018. Fleximize. Retrieved from https://fleximize.com/articles/011275/cyber-threats-facing-smes
McGoogan, C. (2017). Cyber attacks hit half of UK businesses in 2016. The Telegraph. Retrieved from https://www.telegraph.co.uk/technology/2017/04/19/cyber-attacks-hit-half-uk-businesses-2016/
Paquet-Clouston, M., Haslhofer, B., Dupont, B. (2018). Ransomware payments in the bitcoin ecosystem. The 17th Annual Workshop on the Economics of Information Security (WEIS), Innsbruck, Austria. Retrieved from https://arxiv.org/pdf/1804.04080.pdf
Smith, M. (2016). Social engineers reveal why the biggest threat to your business could be you. The Guardian. Retrieved from https://www.theguardian.com/small-business-network/2016/oct/04/social-engineers-reveal-biggest-threat-business
Federation of Small Businesses. (n.d.). Small businesses bearing the brunt of cybercrime. Retrieved from https://www.fsb.org.uk/resources-page/small-businesses-bearing-the-brunt-of-cyber-crime.html
Airehrour, D., Nair, N. V., & Madanian, S. (2018). Social engineering attacks and countermeasures in the New Zealand banking system: Advancing a user-reflective mitigation model. Information, 9(110), 1-18.
Korolov, M. (2018). Why the best antivirus software isn’t enough (and why you still need it). Computerworld. Retrieved from https://www.computerworld.com.au/article/648872/why-best-antivirus-software-isn-t-enough-why-still-need-it/?fp=16&fpid=1
Korolov, M. (2017). What is a fileless attack? How hackers invade systems without installing software. CSO Online. Retrieved from https://www.csoonline.com/article/3227046/malware/what-is-a-fileless-attack-how-hackers-invade-systems-without-installing-software.html
Winkler, I. (2017). 7 elements of a successful security awareness program. CSO Online. Retrieved from https://www.csoonline.com/article/2133408/data-protection/network-security-the-7-elements-of-a-successful-security-awareness-program.html
Aloul, F. A. (2012). The need for effective information security awareness. Journal of Advances in Information Technology, 3(3), 176-183.
Juniper Networks. (n.d.). What is IDS and IPS? Retrieved from https://www.juniper.net/us/en/products-services/what-is/ids-ips/
Cotenescu, V.M. (2016). SIEM (Security information and event management solutions) implementations in private or public clouds. “Mircea cel Batran” Naval Academy Scientific Bulletin, XIX(2), 397-400.


Shortage of skills in the construction industry

ABSTRACT:

Shortage of skills is a critical social problem which needs to be analysed fully in any organisation. This coursework discusses the challenges of demographic and skills shortages in materials, plant and human resources, their potential influences and consequences for the construction industry, and how we can cope with them in future. Construction demand is expected to rise in the near future, bringing critical demand for technical jobs in the industry. Accordingly, several skills and production methods are discussed, i.e. waste handling skills, inspection and maintenance skills, the concurrent engineering production strategy and the use of local resources, all of which can help to narrow the skills gap in the construction sector.

Key Words: Skills, Construction, Production.

1. INTRODUCTION:

Skills are the essential abilities that can be expertly utilised in a particular context for a specific purpose (NACI, 2003), and a skills shortage occurs where employers are unable to fill vacancies, or have difficulty filling them, for a specific occupation. Demographic and skills shortage is one of the major problems the world faces today; one major cause is automation, with artificial intelligence and machines taking the place of workers. Across the world, particularly in Europe, America and the Middle East, the productivity of skilled workers is very low (Hay’s Global, 2018). With eighty-nine per cent of firms around the world looking for skilled workers, the world will face a serious shortage of skills by 2035 if the approach to the shortage does not change. Like other countries, the United Kingdom is affected by demographic and skills shortages: a survey of 87,000 employers by the Department for Education’s employer skills survey (DEES) found, strikingly, that unfilled technical positions have doubled since 2011, to 260,000. One reason is that many employees are aged over 55, leaving less room for young talent to come forward, and on retirement these older employees take their talent and skills with them without transferring them to the younger generation (IPD, 2015).

The skills shortage is not confined to the business sector; it is also acute on the technical side, in engineering, architecture and construction. A key factor in the construction industry’s skills problem is the lack of training and education. In the UK, the construction industry is a main pillar of the economy, but over the last decade, according to the Royal Institution of Chartered Surveyors, there has been a serious shortage of both skills and manpower in the industry. Employers’ willingness to invest in construction is improving, but the skills scarcity poses a possible threat to continued expansion. Engineers and designers are the foundation of any construction industry; unfortunately, the skills gap cannot be filled by technology or legislation alone. It can only be filled by proper government strategy with the help of the industry. There are national and local initiatives on construction skills and shortages whose main focus is vocational education and expanded education and outreach programmes; flexible training should be provided to new talent through apprenticeships so that they can create a pathway for themselves in the industry (Toner, 2005).

2. DEMOGRAPHIC AND SKILLS SHORTAGE OF MATERIALS:

Material production can best be described as “the planning and controlling of all things for making sure and confirming that the right and correct quantity and quality of materials and tools are as it should be specific in a timely manner and are got at a reasonable cost, and reachable when needed” (Safa, et al, 2014). Materials typically account for 50% to 60% of a construction project’s cost, and poor materials management skills can affect 80% of the project schedule. Experience strongly suggests that good handling of materials and equipment reduces the cost of the whole project. Even in a world where technology can do so much, skills in materials handling, management, inspection and maintenance of the machines used for materials handling and production remain in short supply, because construction materials skills systems require large improvement, creating vast demand for their enhancement and for the development of new applications. In the construction industry, the demographic skills shortage in materials can be explained in terms of usage, waste, inspection and maintenance.

2.1 WASTE HANDLING OF MATERIALS:

Waste handling is a major issue in the construction industry, as in other industries worldwide. In the UK we have seen good waste management skills, but improvement is still needed. Experts say that whenever a design is prepared for a construction project, it should include waste handling strategies to produce less material waste on the construction site. Such strategies involve coordinated management of waste materials, for example proficient disposal of waste on site and proper reuse of materials wherever possible (Fishbein, 1998). Strategies of this kind provide a valuable opportunity to reduce material waste, increase the recovery of materials that would otherwise be wasted, and sharpen our waste handling skills.

2.2 PLANNING AND CONTROLLING SKILLS SHORTAGE:

The construction industry faces a considerable skills shortage in the planning and controlling of materials, largely because the education of site workers in planning skills is not given proper focus. Planning and control of materials includes materials take-off, transportation and warehousing skills (Construction Industry Institute, CII). Improper planning and handling of materials results in false statistical reports on materials availability and in funding and budgeting shortfalls. Transportation and materials planning skills are important factors in construction materials planning because they help reduce cost, secure timely delivery of materials, and support the handling of dangerous materials.

2.3 INSPECTION OF MATERIALS SKILLS:

Inspection of materials is a normal process in the construction industry, but over time a gap has emerged in the skills related to it. On a construction site, materials are inspected through proper testing before being approved for use. The purposes of inspection are to accept or reject batches of materials and to improve the quality of construction materials. In the UK both the testing technologies and the inspection skills are modern, but in developing countries there is a clear shortage of skills in this regard.

Fig. 1: Demographic and skills shortage for materials

2.4 POTENTIAL INFLUENCES AND CONSEQUENCES:

The demographic and skills shortage in materials, in terms of waste management, planning and controlling, and inspection, has serious consequences for the environment and health. A shortage of waste management skills creates considerable potential for hazardous exposure, such as contamination of air, soil and water. Poor planning and handling of materials also harms a construction project, because delivering materials requires great skill and care: delivery at the wrong time may affect the schedule of the whole project, and weak planning of the administrative and financial process of paying for materials can create great hurdles (Sohrah, 2009). A shortage of materials inspection skills is particularly dangerous: if materials of inadequate quality pass the inspection tests because of the inspector’s lack of skill, they become a potential risk to people and to the construction project.

2.5 FUTURE IN CONSTRUCTION PRODUCTION:

New materials are being discovered that will change construction production. Graphene, for example, has seen little use since its discovery but could replace steel and carbon fibre while being lighter than both. A modern form of Roman concrete has also been developed that is greener, tougher and longer-lasting than the original. In terms of waste, the materials obtained from house construction and demolition are now recycled in new ways that are safe and benefit the environment; according to the Environmental Performance Indicator (EPI), 19.2 cubic metres of waste per 100 square metres is produced on such construction sites, of which more than half is recycled. For planning and handling of materials, the Last Planner System (LPS) has been implemented to increase work flow on sites, letting workers plan their strategies for materials delivery and its scheduling. Researchers suggest three areas for the future construction production process: proposing procedures for improving practice, technically highlighting the problems with modern software applications, and making organisational improvements by applying the Last Planner System.

3. PLANT AND EQUIPMENT SKILLS SHORTAGE:

Plant in construction refers to the machinery, equipment or apparatus used in industrial production activities. The plant and equipment used in any construction project are substantially influenced by design decisions. Construction equipment is one of the most important resources of modern-day construction, especially on infrastructure projects. Maintenance and equipment planning are the two current skills shortages regarding plant and equipment for construction work.

3.1 MAINTENANCE AND PLANNING SKILLS FOR PLANT:

Plant and equipment maintenance and planning are important skills needed in the construction industry to prevent problems and to ensure that construction equipment works effectively. These skills involve a routine of general or immediate inspection of equipment, which could otherwise pose a number of risks. Where equipment risk is low, less dangerous contact with the machine is required, bringing benefits in cost, productivity and efficiency. Poor maintenance and planning of equipment has caused many fatal injuries during and after construction projects; if we want safe construction, we have to focus on the maintenance and planning of construction equipment.

3.2 POTENTIAL INFLUENCES AND CONSEQUENCES:

Construction equipment is constantly being developed to make every step of a construction project easier, quicker, cheaper and safer. Construction plant and equipment range from small hand-held power tools to larger items such as mechanical excavators and tower cranes, and handling them improperly can have serious impacts and consequences: production can fall, the cost of the whole construction can rise, and maintenance costs for the equipment grow larger.

3.3 FUTURE PLANT AND EQUIPMENT SKILLS:

Technology is advancing day by day and new inventions appear in every industry; in construction, numerous types of equipment are undergoing transformation. Increasingly, Building Information Modelling (BIM) and the development of Virtual Construction Models (VCM) are being used to organise construction works and the deployment of plant on site, in particular in relation to the use of cranes and other lifting equipment. 3D printing technology has also been introduced, which can build a house in less than 24 hours and can minimise cost and other expenses.

4. DEMOGRAPHIC AND SKILLS SHORTAGE OF HUMAN RESOURCES:

The human productive resources necessary for production in the construction industry are defined as human resources (Andrew Brown, 2019). Human resources here relates to labour (skilled craftsmen, tradesmen and professionals such as engineers and architects), and the method by which the planning, management and recruitment of expert and managerial resources for construction is carried out is defined as human resource management. Human resources are fundamental to all industries, including the building industry; as claimed by Paul Manning, the chief officer of building company C. Raimondo and Sons, “maintain and wondering quality people is priority” (Tulacz, 2000). The skills shortage in human resources has become a particular problem for the construction industry in recent years because there is a growing scarcity of qualified employees in the field. Levy similarly claims that “the scarcity of each professional trades-people and skilled managers will place extra emphasis on the need to increase the and extent of education in order to produce extra effective and productive workers” (Levy, 2000). The construction industry is heavily dependent on an adequate supply of skilled labour, and as a result the skilled labour scarcity in the UK has received massive attention in recent years. With the current economic recovery, the industry is predicted to experience large skills shortages in both traditional and new skills areas.

Changes in the procurement of construction labour are key to understanding the skills situation in construction production. The industry often works to very short timescales, leaving little time for operatives to get to know each other and develop good teams. When labour is subcontracted, the direct employees of each subcontractor have longer than a single construction project to become aware of one another and build effective teams. To address this ineffectiveness, approaches referred to as ‘partnering’ and ‘framework agreements’ have emerged. Their main objective is that workers should develop long-term relationships and share their experience and professionalism with each other, so that each worker can gain skills from the others.

4.1 HR SKILLS SHORTAGE IN UK CONSTRUCTION INDUSTRY:

The UK construction industry plays a basic role in the economy of the country; however, the shortage of skilled human resources within the industry is becoming a challenge for the field. Nowadays the UK construction industry is struggling to fill roles for architects and designers in the field of civil engineering. In a report by the Recruitment and Employment Confederation (REC), the human resources shortage in construction was described as ‘critical’: although the number of vacancies in the industry is rising, skilled labourers and professionals are hard to find. According to the Construction Industry Training Board (CITB), more than 36,000 new workers will be needed to meet the demand of the UK construction industry, and the Royal Institution of Chartered Surveyors (RICS) reports that 66% of surveyed firms have closed because they did not have skilled human resources.

4.2 POTENTIAL INFLUENCES AND CONSEQUENCES:

The construction industry currently faces a number of problems, of which the human resources skills shortage is a major one. Human resources act like oxygen for any industry, so a skills shortage leaves its mark everywhere. Its influences and consequences fall on the apprentices entering the construction field: when the recession hit the UK construction industry, the number of workers entering the field was greatly affected by low investment in training. Furthermore, many organisations and training companies have a shortage of staff capable of, or available for, training, educating and lecturing on human management skills in the construction industry (Skills Development Scotland, 2017). The human resources skills gap is also recognised as a current problem in the construction industry because it raises the cost of any project and undermines its productivity.

4.3 HR SKILLS NEEDED FOR FUTURE CONSTRUCTION PRODUCTION:

In order to improve the skills of human resources in construction production, there is a need to work in the following areas: technical knowledge and experience skills, professional skills and qualities, and digital skills.

4.3.1 TECHNICAL KNOWLEDGE AND EXPERIENCE SKILLS:

Technical knowledge and experience are generally specific to a job and are developed by gaining experience and undertaking specialised training. Technological and social changes within the construction industry have created a growing need for human resources to advance the more interactive competencies required for specific roles or work.

4.3.2 PROFESSIONAL SKILLS AND QUALITIES:

Professional skills and qualities are non-role-specific competencies, such as problem solving and communication, that are transferable between roles and industries and thus increase the demand for an employee. Since competencies like teamwork and communication are in high demand in any industry, a person with such skills tends to be more employable.

4.3.3 DIGITAL SKILLS:

Digital skills will help the construction sector increase revenue, reduce cost and raise productivity. This shift represents an opportunity for the construction sector to re-skill workers and contributes to the need to fill an estimated one million new digitally skilled jobs by 2024 (MACE, 2017).

Fig. 2: Source: Linzi Shearer et al., 2018, p. 19

5. PRODUCTION STRATEGIES:

Production in the construction industry is viewed as a form of “job” production, i.e. unique, and is characterized by product design, workforce and mechanization. Productivity in the construction industry has been declining for over 40 years. Today the industry uses lean construction strategies to make the most of resources and to minimize the waste of materials, time and cost. Lean production thinking argues that production consists of conversions and flows (Andrew Brown, 2019), in which conversions transform raw materials and flow processes support the conversions. Some of the strategies used for the effective combination of resources in the industry are discussed below.

5.1 CONCURRENT ENGINEERING:

Concurrent engineering can be described as the parallel execution of various activities by multidisciplinary teams with the goal of achieving the best possible outcome in functionality, quality and productivity. Many improvements can be realized through the use of concurrent engineering in future construction: better verbal communication, data sharing and partnering with subcontractors will help the industry combine its resources in the most effective way.

5.2 VISUAL INSPECTION:

Visual inspection addresses the uneven nature of building resources and leads to the application of visual tools for material, work and information flow. Correct identification of resources can speed up the construction process and combine resources accurately. Information technology can also enhance communication between decision makers and speed up the system for future construction works.

5.3 USING OF LOCAL RESOURCES:

Local suppliers can be less expensive and can supply materials that fit the local vernacular. Less transport will decrease charges and benefit the local neighbourhood and environment. Reusing and recycling materials on site, and purchasing materials with a high recycled content, will have a comparable effect, as well as saving on the Aggregates Levy and landfill tax.

5.4 DAILY HUDDLE MEETINGS:

Daily huddle meetings provide a platform for workers to discuss their views about the production of resources and to share what has been achieved. The purpose of these meetings is also to discuss the problems workers face during production, so that these can be solved and resources can be combined in the most effective way.

6. CONCLUSION:

As an industry we need to take responsibility for our skills issues and collectively develop suitable solutions. We need to avoid the threats of spiralling costs, eroding quality and increased accidents on site. The skills scarcity in the construction sector should not be viewed merely as a problem; in a solution-oriented industry, we are about discovering solutions. Training provision in the construction industry must be increased to attract and retain the right staff. Construction professionals must commit to continuing professional development if they want employers to invest in their education and training. Employers in the construction industry should reward employees with good opportunities to develop their skills and progress. Good production in the construction industry takes more than skills alone: we must also review productivity policies, respect workers, promote gender equality and provide effective social protection. These conclusions can help to improve employability and reduce the skills shortages of the construction industry.


Importance of Enterprise Resource Planning in manufacturing companies

Abstract

In our course throughout this semester, we learned about the importance of operations in several industries and how management decisions related to operational processes can affect the efficiency of a company. More particularly, we learned how operational decision making is linked to all the functional departments of a company. From this perspective, we decided to learn more about software that is able to link all the resources inside an enterprise and direct the operations of the firm toward the company strategy. That system is Enterprise Resource Planning (ERP).

The aim of this paper is to describe, comment on and analyze the importance of ERP in manufacturing companies. We based our discussion on a case study retrieved from ScienceDirect.com, conducted by Ignatio Madanhire and Charles Mbohwa: “Enterprise resource planning (ERP) in improving operational efficiency”.

This paper concentrates on the benefits of ERP in manufacturing firms and on how the adoption of ERP returns positively on the manufacturing processes and the profit of the company. The purpose of this analysis is to determine whether ERP is worth the cost when listing all the benefits it offers to businesses. Our conclusion shows that ERP is essential to manufacturing companies for all the advantages it includes. Moreover, by studying the costs that ERP software can eliminate, and given that customized ERP software is available in different pricing schemes that vary with the size of the business and the operations executed, a company can conclude that implementing ERP is worth the cost.

I. Introduction

Operations management, as defined in all management books, is the process of converting labor and materials into goods or services. According to an article published by the Business Development Bank of Canada (2019), operational efficiency is fully achieved once the whole route of operations, including people and the work process, is combined with technology. In order to make the company more profitable, management should first assess the current efficiency of the business; second, cut costs and reduce waste; third, plan the production process; and finally, this should lead to increased output. Improving operational efficiency is a big challenge that requires hard work and good planning, not to forget the effect of technological resources that make the whole process more efficient. To achieve their business goals and reach an optimal performance level, many organizations implemented a new technological system (Gartner, 2012). The integration of all information flow between departments in an organization reflects positively on the overall performance and communication between all company members (Tallon PP, 2011; Goodhue, et al. 2009). Enterprise resource planning (ERP) is defined by Gartner as the method to control and plan all resources in an organization and the ability to deliver integrated information processes that serve all departments of that organization. ERP software plays a big role in giving a competitive advantage to the companies that adopt it. By adopting ERP software, operational manufacturing processes are improved through optimizing inventory management, forecasting customer demand, improving human resources, simplifying and restructuring the relationships with customers and suppliers, and automating processes in order to save costs and improve employee productivity (Craze, 2017). Most manufacturing business costs fall into three categories: materials, labor, and the overhead generated from the complex processes that the manufacturing industry faces in today’s business environment.

Accordingly, to meet the challenge of reducing cost, many manufacturing companies adopted ERP solutions that cover all aspects of the production process, from initial quotations to invoicing. According to the business automation specialists USA (2013), ERP systems reduce manufacturing cost by 20%, which implies improved operational efficiency.

Moreover, cost control, good communication, training, qualified labor, maintaining good quality, knowledge sharing and improving customer services all improve operational efficiency. Firms adopt ERP systems because this software takes into consideration most of the functional departments inside an organization, covering modules such as production planning, purchasing, inventory control, sales & marketing, finance, human resources (HR), customer relationship management, and supply chain management. Many companies and small businesses find it very costly and labor-intensive to implement ERP software and train all employees in order to integrate all data flow into one system and generate useful reports. Panorama Consulting Solutions USA, an independent digital transformation and ERP systems consultancy, conducted research in 2017 on ERP implementation and satisfaction rates among industries; it shows that most companies implement an ERP system to increase business efficiency, and that 93% of those companies enhanced their business processes to operate better.

Hence, studying the effect of ERP on operational efficiency is an important topic for concluding whether implementing ERP software is worth the cost.

II. Literature review

1. Enterprise resources planning (ERP)

Enterprise resource planning (ERP), as defined by Gartner, is a database that gathers information from the functional departments inside an organization, integrates this information and delivers useful reports. To ensure the best performance level, many organizations implemented ERP systems, which led to improved productivity, lower cost, and increased efficiency among all the functional departments (Nwankpa et al. 2015). The ERP system integrates data recorded from every operational department and makes it accessible to users of the system. This facilitates the transmission of information between departments and keeps all members updated with the latest figures, in order to make fast and accurate decisions regarding all processes inside the firm. Companies in every industry find many advantages in applying ERP systems, from operational, managerial, strategic, technological and organizational perspectives. Big companies invest large amounts in implementing ERP systems for the benefits and advantages these systems include. Even small and medium enterprises (SMEs) invest in such systems because of globalization and the rapid growth of businesses. Although ERP systems organize the information and processes inside an organization, the challenge lies in user adoption of ERP. The ultimate objective in adopting an ERP system is achieved when the end users accept and embrace the technology (Nwankpa.J, 2015). If users find the system difficult, they will refrain from using it, or they will not record all the data in the system or access the data it holds. The role of upper management is very important in this case: the more management provides employees with the necessary training and support to work on such systems, the more employees will accept them and find them helpful for their tasks, which results in improved internal and external operational processes.

Besides all the advantages of implementing an ERP system in organizations, the cost of the software, the maintenance, and the customization requests remain the crucial financial decision that a company has to make after evaluating the cost versus the benefits. Even though the costs can be very high, the satisfaction of ERP adoption and the efficiency gained in operational processes can be very beneficial in many companies.

2. Operational efficiency and manufacturing companies

According to a presentation published by the Nestle Company in May 2016, operational efficiency is achieved through driving excellence in safety and quality throughout the entire value chain, eliminating waste from the value chain, delivering the right product at the right time, strong performance on environmental indicators, and technology in operations. Every company is required to evaluate its operational performance in order to design a program that enhances efficiency in its functional departments (Böttcher et al., 2016). The top challenges of operations managers are maintaining the right inventory levels, ensuring quality, maximizing production, eliminating waste and bottlenecks, establishing technological systems, facing global competition and optimizing process efficiency. An efficient manufacturing sector helps generate more profit, increase sales and sustain economic growth (Asaleye et al., 2018). The problem in several manufacturing companies is that they focus on productivity and forget about efficiency.

According to Bohn, the Vice President of Industry Cloud at SAP (2017), the most important areas for manufacturing companies to control and focus on in order to generate more revenue are: focusing on after-sales service revenue; using unique technological applications to motivate customers to share their experience and submit new ideas; enhancing cross-industry information sharing related to assets to increase operational efficiency; charging customers per usage of the service, known as pay-per-use business models; buying new, customized, easy-to-use, technologically advanced machines and systems to increase manufacturing agility and provide better services; and applying artificial intelligence in company software and machine systems for better manufacturing planning and scheduling. All these trends are hard to achieve if the company has not implemented the existing software and technologies available in the market that can move the whole operational process from old-fashioned practices to the new structure and practices in operations.

Furthermore, manufacturing resource planning (MRP) and enterprise resource planning (ERP) are the most used software in manufacturing companies. MRP helps streamline the manufacturing process through production planning, scheduling, and inventory control. ERP, in turn, is software that integrates the functional departments of a business, such as sales, purchasing, accounting, human resources, customer support, CRM and inventory. It is an integrated system across the company’s cross-functional departments, as opposed to individual software designed for a single business process. By automating the critical workflow toward the company’s strategy, time is saved and human errors are decreased significantly, thus eliminating costs (Peatfield, 2019). ERP systems take into consideration the importance of the supply chain in the manufacturing industry. The main concerns in a manufacturing business are to always deliver on time to customers, never have an inventory shortage, track price fluctuations and pick the best purchasing price among suppliers, identify bottlenecks, improve delivery lead time and maintain quality; all of these are achieved through a well-chosen, well-implemented ERP system, which leads to better productivity and high customer satisfaction.

III. Assessment and evaluation

For a business to remain competitive in today’s dynamic and complex world, cross-functional decision making and integration of organizational data are required. In this respect, we found a case study titled “Enterprise resource planning (ERP) in improving operational efficiency” on which to base our analysis of the benefits of ERP in manufacturing companies and to discuss whether it is worth the cost of implementation.

1. Article Summary

Different operational, managerial and technological challenges are faced by many organizations in developing countries, including the South African company considered in this research work, which manufactures linen and uniforms for the hospitality industry in Cape Town, South Africa.

In this article, ERP was defined as an efficient information system which improves business competitiveness through cost reduction and better logistics. It is a method for the effective planning and control of the resources needed to manufacture and deliver products and services, achieved through software which supports the integration of all organizational information and considers each transaction as part of the interlinked processes that make up a business.

Composed of many modules, ERP increases operational transparency through a standard interface. The basic modules are identified as follows: production, purchasing, inventory control, sales, marketing, finance, and human resources; the role of these modules is to optimize the utilization of capacity, automate the processes of supplier identification and price negotiation, facilitate processes for maintaining appropriate inventory levels, order placement, scheduling and shipping, generate marketing leads and identify new trends, gather financial data and generate financial reports, and maintain the employee database. The management of this company found that, to implement the most suitable ERP software successfully, it is better to start with business process reengineering (BPR) in order to redesign organizational processes and achieve improvements in service, speed, and cost. The role of ERP is then to consolidate the adjusted processes in a software package using one integrated system, which helps manage all departments, eases day-to-day business, and increases profits. For that purpose, the managers designed an ERP flow chart to represent the flow of tasks according to departmental level. To gain the most from the benefits of an ERP and improve quality, efficiency, process flow and lead times, an external consultant was assigned to follow the process, assist and train employees, solve problems and monitor the implementation stage. In order to measure operational efficiency mathematically, four ratios were periodically calculated: capacity, utilization, efficiency, and load percentage. The assessment was based on existing information and direct observation of the processes operating in the design, pattern making, cutting, production, and technical services sections of the company. Productivity was recorded for analysis to develop a trend over a specific period. The main challenges of the company were delivering on time and improving capacity requirements planning: they were losing customers because they failed to meet demand, and processes were delayed by the need to increase capacity whenever demand rose. Add to this the untrained workers that were employed, the quantity of paperwork that travels between departments and delays fast decision making, and poor quality control lacking the technology for fast and accurate monitoring. Moreover, labor cost was extremely high because, each time demand increased, the existing staff were unable to fulfill the orders, so the HR department hired new contract workers.
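To make the four ratios concrete, the sketch below computes them using standard operations-management textbook formulas; the article's exact definitions are not reproduced in this summary, so the formulas and all numbers here are illustrative assumptions rather than the case study's own figures.

    def utilization(actual_output, design_capacity):
        # Actual output as a fraction of the design capacity.
        return actual_output / design_capacity

    def efficiency(actual_output, effective_capacity):
        # Actual output as a fraction of effective capacity
        # (design capacity minus planned losses such as maintenance).
        return actual_output / effective_capacity

    def load_percent(load_hours, capacity_hours):
        # Workload placed on a section as a percentage of its capacity;
        # values above 100% indicate the section is overloaded.
        return 100.0 * load_hours / capacity_hours

    # Hypothetical numbers for a cutting section:
    print(utilization(820, 1000))   # 0.82
    print(efficiency(820, 900))     # ~0.911
    print(load_percent(950, 900))   # ~105.6 -> overloaded, echoing the case study's capacity problem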

Consequently, the recommendations for the ERP implementation stage at this company were: acquiring up-to-date PCs compatible with the ERP software; adding more high-capacity computers in the different sectional departments to enable fast decision making; training employees and management on the Oracle software to get the utmost outcomes; adjusting capacity in the linen section upward by adding 4 full-time workers; and, for the uniform section, hiring 1 more worker to meet the existing load.

By implementing the ERP software, the company will be able to eliminate waste and defects, manage inventory, and control labor and work in progress by reducing lead time.

This implementation would improve organizational competitiveness and enhance the communication and cooperation of all departments, resulting in operational and employee efficiency, since data can now be updated instantly, minimizing wasted resources.

2. Evaluation:

This article investigates the benefits of implementing ERP software in a South African company that manufactures linen and uniforms for the hospitality industry.

The company was unable to meet delivery dates and was seeking a permanent solution through proper implementation of ERP, which was designed to reduce work in progress and working capital through the integration of the firm’s activities and proper communication and collaboration between functional units. In addition, a reduction in product cycle time was achieved by minimizing delays, coordinating machine maintenance with production operations and optimizing space, with the aim of efficiently utilizing available resources.

Hence, ERP software is a useful and very important tool for many organizations, given all the benefits that follow the implementation process at the operational, administrative and managerial levels. Figure 1 shows all the benefits that a company can get from ERP adoption.

Figure 1: The Summarization of ERP benefits

Source: Sadrzadehrafiei et al. (2013). The Benefits of Enterprise Resource Planning (ERP) System Implementation in Dry Food Packaging Industry.

However, there are also challenges that might cause failure when implementing ERP software. Failures can result from an insufficient budget allocated to the implementation, poor planning, limited employee involvement, missing data due to insufficient training, deadlines squeezed by upper management that make the project unachievable, or a misfit between the ERP software and the business strategy, type and size of the business (Alhayek, 2017).

That is why it is wise to look at the initial capital spent on the system, since it can be very costly for some organizations. The success of the implementation depends on many variables, such as the skills and experience of the employees; communication between departments also plays an important role, and resistance to sharing information can reduce the efficiency of the software. Not to forget that the system can be difficult for users if the company staff and managers are not well trained. All these obstacles in the ERP implementation stage can increase the costs generated by the project.

Running over budget is a serious matter for a company, but does it indicate poor operational efficiency when implementing ERP?

In the survey published in 2013 by Panorama USA, over 50% of projects experienced cost overruns and around 50% of respondents did not recover their costs. The same survey was conducted in 2017, and it shows that 74% of respondents overran the budget allocated for the ERP software. Exceeding the budget allocated for an ERP does not by itself indicate poor operational efficiency or low company satisfaction: in the same 2017 research, 78% of respondents found it beneficial for their business to adopt ERP software. Therefore, exceeding the budget assigned for ERP software may be caused by internal issues and boundaries that should be investigated and solved in order to adhere to the allocated budget.

IV. Conclusion and recommendations

Enterprise resource planning (ERP) is a tool that helps organizations organize and plan their resources. It gives the firm a complete picture of the workflow and facilitates communication and cooperation between departments. The first consideration when planning to implement ERP software is to give the company the necessary time to change and get ready for the implementation. Internal issues and boundaries should be examined and resolved before the implementation phase starts, in order to prepare everyone inside the organization to accept the change and the expected returns.

For that reason, our recommendation for every manufacturing company is to consider implementing ERP software for all the benefits it includes. Management should not rush the project, because that leads to failure. For instance, when Hershey’s management in 1999 squeezed the deadline and forced everyone to finish the implementation and go live in 2.5 years instead of 4, the resulting failure cost the company a 35% drop in its stock price and an 18% drop in earnings. On the other hand, this implementation is a particular project for every organization; that is why the firm should study all its aspects and determine which types of work should be given the most consideration before and during the implementation phase.

The different sizes of ERP software available in the market, and the customization offered under different pricing schemes, allow us to say that the benefits of ERP are worth the cost.


Colloids and their Preparation

1: Introduction

Thomas Graham in 1861 studied the ability of dissolved substances to diffuse into water across a permeable membrane. He observed that crystalline substances like C6H12O6, CH4N2O, and NaCl passed through the membrane, while others like glue, gelatin and gum arabic did not. The former he called crystalloids and the latter colloids (Greek, kolla = glue; eidos = like). He thought that the difference in behavior between ‘crystalloids’ and ‘colloids’ was due to particle size. Later on it was realised that any substance, regardless of its nature, could be converted into a colloid by subdividing it into particles of colloidal size.

2: Definition

A colloid is a mixture in which one substance of minutely dispersed insoluble particles is suspended throughout another substance. Sometimes the dispersed substance itself is called the colloid. [1] In a true solution, such as sugar or salt in H2O, the solute particles are dispersed in the solvent as single molecules or ions, so the diameter of the dispersed particles ranges from 1 Å to 10 Å. [2] A colloid is a mixture whose particles range between 1 and 1000 nm in diameter, yet are still able to remain evenly distributed throughout the solution. [3]
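As a quick illustration of these size bands (and only of the numbers quoted above; in practice the boundaries are approximate), a minimal sketch in Python:

    def classify_by_diameter(d_nm):
        # Size bands taken from the definitions above: true-solution particles
        # are below about 1 nm (1-10 Angstrom), colloids span 1-1000 nm, and
        # anything larger settles out as a coarse suspension.
        if d_nm < 1:
            return "true solution"
        if d_nm <= 1000:
            return "colloid"
        return "coarse suspension"

    print(classify_by_diameter(0.5))    # true solution (e.g. salt in water)
    print(classify_by_diameter(50))     # colloid (e.g. a metal sol)
    print(classify_by_diameter(5000))   # coarse suspension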

3: Types

As we have seen above, a colloidal system is made of two phases. The substance distributed as the colloidal particles is the Dispersed phase. The other, continuous phase in which the colloidal particles are dispersed is the Dispersion medium. For example, in a colloidal solution of Cu in H2O, the Cu particles form the dispersed phase and water the dispersion medium. Because either the dispersed phase or the dispersion medium can be a gas, a liquid or a solid, eight sorts of colloidal systems are possible. A colloidal dispersion of one gas in another is not possible, since the two gases would simply form a homogeneous molecular mixture. In this chapter we will confine our study strictly to colloidal systems consisting of a solid substance dispersed in a liquid. These are often referred to as Sols or Colloidal solutions. Colloidal solutions with H2O as the dispersion medium are termed Hydrosols or Aquasols. When the dispersion medium is alcohol or benzene, the sols are referred to as Alcosols and Benzosols respectively.

4: LYOPHILIC AND LYOPHOBIC SOLS OR COLLOIDS [4]

Sols are colloidal systems in which a solid is dipped in a liquid.

These can be sub-classified into two classes:

(a) Lyophilic sols (solvent-loving)

(b) Lyophobic sols (solvent-hating)

Lyophilic sols are those in which the dispersed phase exhibits a definite affinity for the medium or the solvent.

The examples of lyophilic sols are dispersions of starch, gum, and protein in water.

Lyophobic sols are those in which the dispersed phase has no attraction for the medium or the solvent.

Examples of lyophobic sols are dispersions of Au, Fe2O3 and S in H2O. The affinity of the sol particles for the medium in a lyophilic sol is due to hydrogen bonding with H2O. If the dispersed phase is a protein (as in egg), hydrogen bonding takes place between H2O molecules and the amino groups (–NH–, –NH2) of the protein molecule. In dispersing starch in H2O, hydrogen bonding occurs between H2O molecules and the –OH groups of the starch molecule. There are no similar forces of attraction when S or Au is mixed with water.

5: CHARACTERISTICS OF LYOPHILIC AND LYOPHOBIC SOLS [5]

Some features of lyophilic and lyophobic sols are as follows:

(1) Ease of preparation

Lyophilic sols can be obtained easily by mixing the material (starch, protein) with an appropriate solvent. The macromolecules of the material are of colloidal size and pass at once into the colloidal form on account of their interaction with the solvent.

Lyophobic sols cannot be obtained by simply mixing the solid material with the solvent.

(2) Charge on particles

Particles of a hydrophilic sol carry little or no charge at all. Particles of a hydrophobic sol carry a positive or negative charge, which gives them stability.

(3) Solvation

Hydrophilic sol particles are generally solvated: they are surrounded by an adsorbed layer of the dispersion medium which does not allow them to come together and coagulate, e.g. the hydration of gelatin.

There is no solvation of the hydrophobic sol particles for want of interaction with the medium.

(4) Viscosity

Lyophilic sols are viscous, as the particle size increases due to solvation and the proportion of free medium decreases. Warm solutions of the dispersed phase set to a gel on cooling, e.g. in the preparation of table jelly.

Viscosity of hydrophobic sol is almost the same as of the dispersion medium itself.

(5) Precipitation

Lyophilic sols are precipitated (or coagulated) only by high concentrations of electrolytes, when the sol particles are desolvated.

Lyophobic sols are precipitated even by low concentration of electrolytes, the protective layer being not present.

(6) Reversibility

The dispersed phase of lyophilic sols, when separated by coagulation or by evaporation of the medium, can be brought back into the colloidal form simply by remixing with the dispersion medium. Such sols are therefore designated Reversible sols.

On the other hand, lyophobic sols, once precipitated, cannot be restored merely by mixing with the dispersion medium. These are, therefore, called Irreversible sols.

(7) Tyndall effect

Due to their relatively small particle size, lyophilic sols do not scatter light and show no Tyndall effect.

Lyophobic sol particles are big enough to exhibit the Tyndall effect.

(8) Migration in an electric field

Lyophilic sol particles (proteins) migrate to anode or cathode, or not at all, when placed in electric field.

Lyophobic sol particles move either to the anode or to the cathode, depending on whether they carry a negative or a positive charge.

6:PREPARATION OF SOLS[6]

Lyophilic sols may be prepared by simply warming the solid with the liquid dispersion medium, e.g. starch with H2O. On the other hand, lyophobic sols have to be prepared by special methods.

These methods fall into two categories:

(1) Dispersion methods, in which bigger, macro-sized particles are broken down to colloidal size.

(2) Aggregation methods, in which particles of colloidal size are built up by aggregating single ions or molecules.

DISPERSION METHODS

In these methods, material in excess is dispersed in other medium.

(1) Mechanical dispersion By Using Colloid Mill

The solid, along with the liquid dispersion medium, is fed into a colloid mill. The mill consists of two steel plates nearly touching each other and rotating in opposite directions at great speed. The solid particles are ground down to colloidal size and are then dispersed in the liquid to give the sol.

Colloidal graphite (a lubricant) and printing inks are made by this method. Nowadays, mercury sol is manufactured by shattering a layer of mercury into sol particles in H2O by means of ultrasonic vibration.

(2) Bredig’s Arc Method

It is used for making hydrosols of metals, e.g. Ag, Au and Pt. An arc is struck between two metal electrodes held close together under de-ionized water. The H2O is kept cold by placing the container in an ice/water bath, and a trace of alkali (KOH) is added. The intense heat of the spark across the electrodes vaporizes some of the metal and the vapour condenses under the H2O. The atoms of the metal present in the vapour thus aggregate to form colloidal particles in the H2O. Since the metal is first converted into sol particles (via metal vapour), this method is treated as a dispersion method.

Non-metal sols can be made by suspending coarse particles of the substance in the dispersion medium and striking an arc between iron electrodes.

(3) By Peptization

Some freshly precipitated ionic solids are dispersed into colloidal solution in H2O by the addition of small quantities of electrolytes, particularly those containing a common ion. The precipitate adsorbs the common ions, and electrically charged particles then break away from the precipitate as colloidal particles.

The dispersal of a precipitated material into colloidal solution by the action of an electrolyte in solution is termed peptization. The electrolyte used is called a peptizing agent.

Peptization is the reverse of the coagulation of a sol.

Examples of preparation of sols by peptization

(1) Silver chloride, Ag+Cl–, can be converted into a sol by adding hydrochloric acid (Cl– being the common ion).

(2) Ferric hydroxide, Fe(OH)3, gives a sol on adding ferric chloride (Fe3+ being the common ion).

AGGREGATION METHODS

These methods involve chemical reactions or a change of solvent, whereby the atoms or molecules of the dispersed phase appearing first coalesce to form colloidal particles.

The conditions (temperature, concentration, etc.) used are such as to permit the formation of sol particles but prevent the particles from becoming too large and forming a precipitate. The unwanted (spectator) ions present in the sol are removed by dialysis, as these ions may eventually coagulate the sol.

The chief methods for preparing hydrophobic sols are as follows:

(1) Double Decomposition

An arsenic sulphide (As2S3) sol is prepared by passing a slow stream of H2S gas through a cold solution of arsenious oxide (As2O3). This is continued till the yellow colour of the sol attains maximum intensity.

As2O3 + 3H2S ⎯⎯→As2S3 (sol) + 3H2O

Excess H2S (an electrolyte) is removed by passing in a stream of H2 gas.

(2) Reduction

Silver sols and gold sols can be obtained by treating dilute solutions of AgNO3 or AuCl3 with organic reducing agents like tannic acid or methanal (HCHO):

AgNO3 + tannic acid ⎯⎯→ Ag sol

AuCl3 + tannic acid ⎯⎯→ Au sol

(3) Oxidation

A sol of S is produced by passing H2S into a solution of SO2.

2H2S + SO2 ⎯⎯→2H2O + S↓

In qualitative analysis, a sulphur sol is frequently encountered when H2S is passed through a solution to precipitate group 2 metals if an oxidizing agent (chromate or ferric ions) happens to be present. It can be removed by boiling (to coagulate the sulphur) and filtering through two filter papers folded together.

(4) Hydrolysis

Sols of the hydroxides of Fe, Cr and Al are readily produced by the hydrolysis of salts of the respective metals. In order to obtain a red sol of Fe(OH)3, a few drops of 30% FeCl3 solution are added to a large volume of nearly boiling water and stirred with a glass rod.

FeCl3 + 3H2O ⎯⎯→ Fe(OH)3 (red sol) + 3HCl

(5) Change of Solvent

When a solution of sulphur or resin in C2H5OH is added to an excess of H2O, a sulphur or resin sol is formed owing to the decrease in solubility. The substance is present in the molecular state in C2H5OH, but on transfer to water the molecules precipitate out to form colloidal particles.

7:PURIFICATION OF SOLS[7]

In the methods of preparation mentioned above, the resulting sol frequently contains, besides colloidal particles, appreciable amounts of electrolytes. To get a pure sol, these electrolytes have to be removed. This purification of sols can be achieved by three methods:

(a) Dialysis

(b) Electrodialysis

(c) Ultrafiltration

Dialysis

Animal membranes (bladder), or membranes made of parchment paper or cellophane sheet, have very fine pores. These pores allow ions (or small molecules) to pass through, but not the large colloidal particles. When a sol containing dissolved ions (electrolyte) or molecules is placed in a bag of permeable membrane immersed in pure water, the ions diffuse through the membrane. By using a continuous flow of fresh water, the concentration of the electrolyte outside the membrane is kept almost at zero, so the diffusion of ions into the pure H2O remains brisk at all times. In this way, practically all the electrolyte present in the sol can be removed easily.

The phenomenon of removing ions (or molecules) from a sol by diffusion through a permeable membrane is called Dialysis. The apparatus used for dialysis is called a Dialyser.

Example: A Fe(OH)3 sol (red) formed by the hydrolysis of FeCl3 is contaminated with some HCl. If the impure sol is placed in the dialysis bag for a short time, the outside water gives a white precipitate with AgNO3. After a sufficiently long time, it is found that almost all the HCl has been removed and the pure red sol is left in the dialyser bag.

Electrodialysis

In this process, dialysis is carried out under the influence of an electric field (Fig. 22.8). A potential is applied between the metal screens supporting the membranes. This speeds up the migration of ions to the oppositely charged electrodes, and hence dialysis is accelerated. Evidently, electrodialysis is not suitable for non-electrolyte impurities like sugar and urea.

Ultrafiltration

Sols pass through an ordinary filter paper, as its pores are too large to retain the colloidal particles. However, if the filter paper is impregnated with collodion or a regenerated cellulose such as cellophane or visking, the pore size is much reduced. Such a modified filter paper is called an ultrafilter. The separation of the sol particles from the liquid medium and electrolytes by filtration through an ultrafilter is called ultrafiltration.

Ultrafiltration is a slow process. Gas pressure (or suction) has to be applied to speed it up. The colloidal particles are left on the ultrafilter in the form of a slime. The slime can be stirred into fresh medium to recover the pure sol. With the help of graded ultrafilters, the technique of ultrafiltration can be employed to separate sol particles of various sizes.

8:STABILITY OF SOLS[8]

A true colloidal solution is stable: its particles do not coalesce and separate out. The stability of sols is due to two factors:

(1) Presence of like charge on sol particles

The dispersed particles of a hydrophobic sol carry a like electrical charge (all positive or all negative) on their surface. Since like charges repel one another, the particles push away from each other and resist joining together. However, when an electrolyte is added to a hydrophobic sol, the particles are discharged and a precipitate forms.

(2) Presence of Solvent layer around sol particle

Lyophilic sols are stable for two reasons: their particles possess a charge and, in addition, have a layer of the solvent bound to their surface. For example, a sol particle of gelatin has a negative charge, and a water layer envelopes it. When NaCl is added to a colloidal solution of gelatin, its particles are not precipitated: the H2O layer around each gelatin particle does not permit the Na+ ions to reach it and discharge the particle. The gelatin sol is therefore not precipitated by the addition of NaCl solution. In fact, lyophilic sols are more stable than lyophobic sols.

9:ASSOCIATED COLLOIDS[9]

The molecules of substances such as soaps and synthetic detergents are smaller than colloidal particles. However, in concentrated solutions these molecules form aggregates of colloidal size. Substances whose molecules aggregate spontaneously in a given solvent to form particles of colloidal dimensions are called Associated or Association Colloids.

The colloidal aggregates of soap or detergent molecules formed in the solvent are referred to as micelles.

Explanation: A soap or detergent molecule ionises in water to form an anion and a sodium ion. Thus sodium stearate (a typical soap) furnishes a stearate anion and a sodium ion in aqueous solution.

C17H35COO– Na+ ⎯⎯→C17H35COO– + Na+

Sodium stearate Stearate ion

As many as seventy stearate ions aggregate to form a micelle of colloidal size. The stearate ion has a long hydrocarbon chain (17 carbons) with a polar —COO– group at one end. In diagrams, the zigzag hydrocarbon tail is shown by a wavy line and the polar head by a hollow circle. In micelle formation the tails, being insoluble in H2O, are directed towards the centre, while the soluble polar heads lie on the surface in contact with the H2O. The charge on the micelle due to the polar heads accounts for the stability of the particle.

Cleansing Action of Soaps and Detergents

The cleansing action of soap is due to

(1) Solubilisation of grease into the micelle

(2) Emulsification of grease

Solubilisation.

In fairly concentrated solution, the soap (or detergent) anions spontaneously form a micelle. The hydrocarbon tails lie in the interior of the micelle and the —COO– ions on the surface. The grease stain is then absorbed into the interior of the micelle, which behaves like a liquid hydrocarbon. As the stain is removed from the fabric, the dirt particles sticking to it are also removed.

Emulsification.

As already discussed, soap or detergent molecules are ionised in H2O. The anions consist of oil-soluble hydrocarbon tails and water-soluble polar heads; thus a soap anion has a long hydrocarbon tail with a polar head, —COO–. When soap solution is applied to a fabric, the tails of the soap anions embed themselves in the grease stain. The polar heads protrude from the grease surface and form a charged layer around it. Thus, by mutual repulsion, the grease droplets are suspended in the H2O. The emulsified grease stains are then washed away with the soap solution.

10:EMULSIONS[10]

These are liquid-liquid colloidal systems. In other words, an emulsion is a dispersion of finely divided liquid droplets in another liquid.

Generally, one of the two liquids is H2O and the other, which is immiscible with H2O, is designated as oil. Either liquid can form the dispersed phase.

Types of Emulsions

There are two types of emulsions.

(a) Oil-in-Water type (O/W type)

(b) Water-in-Oil type (W/O type)

Examples of Emulsions

(1) Milk is an emulsion of the O/W type: tiny droplets of liquid fat are dispersed in H2O.

(2) Stiff greases are emulsions of W/O type, H2O being dispersed in lubricating oil.

Preparation of Emulsions

The dispersal of a liquid in the form of an emulsion is called emulsification. This can be done by agitating a small amount of one liquid with the bulk of the other. It is better achieved by passing a mixture of the two liquids through a colloid mill known as a homogenizer. Emulsions obtained simply by shaking the two liquids are unstable: the droplets of the dispersed phase coalesce and form a separate layer. To obtain a stable emulsion, a small amount of a third substance called the Emulsifier or Emulsifying agent is added during preparation. This is usually a soap, a synthetic detergent, or a hydrophilic colloid.

Role of Emulsifier

The emulsifier concentrates at the interface and lowers the surface tension on the side of the liquid which breaks into droplets. Soap, for example, consists of a long hydrocarbon tail (oil-soluble) with a polar head —COO–Na+ (water-soluble). In an O/W type emulsion, the tail is embedded in the oil droplet while the head extends into the H2O. Thus the soap acts as a go-between, and the emulsified droplets are not permitted to coalesce.

Properties of Emulsions

(1) Demulsification.

Emulsions can be broken, or ‘demulsified’, to recover the constituent liquids by heating, freezing, centrifuging, or the addition of appreciable amounts of electrolytes. They are also broken by destroying the emulsifying agent. For example, an oil-in-water emulsion stabilized by soap is broken by the addition of a strong acid, which converts the soap into insoluble free fatty acids.

(2) Dilution.

Emulsions can be diluted with any amount of the dispersion medium. On the other hand, the dispersed liquid, when mixed with the emulsion, at once forms a separate layer. This property of emulsions is used to detect the type of a given emulsion.

11: GELS [11]

A gel is a jelly-like colloidal system in which a liquid is dispersed in a solid medium. For example, when a warm sol of gelatin is cooled, it sets to a semisolid mass, which is a gel. This process of gel formation is known as Gelation. Explanation: Gelation may be considered as partial coagulation of a sol. The coagulating sol particles first combine to form long thread-like chains. These chains are then interconnected to form a solid framework. The liquid dispersion medium gets trapped in the spaces of this framework.

The resulting semisolid porous mass has a gel-like structure. A sponge soaked in water is an example of a gel structure.

Two sorts of Gels

(a) Elastic gels are those which possess elastic properties: they change their shape when a force is applied and return to the original shape when the force is removed. Gelatin, starch and soaps are examples of substances which form elastic gels. Elastic gels are formed by cooling fairly concentrated lyophilic sols. The bonds or links between the molecules (particles) are due to electrical attraction and are not rigid.

(b) Non-elastic gels are those which are rigid, e.g. silica gel. These are formed by appropriate chemical action. Thus silica gel is made by adding concentrated HCl to a sodium silicate solution of the correct concentration. The resulting molecules of silicic acid polymerise to form silica gel. It has a network linked by covalent bonds, which gives a strong and rigid structure.

Properties of Gels

(1) Hydration.

A fully dehydrated elastic gel can be regenerated by the addition of H2O. But once a non-elastic gel is freed from moisture, the addition of H2O will not bring about gelation.

(2) Swelling.

Partially dehydrated elastic gels imbibe H2O when immersed in the solvent. This causes an increase in the volume of the gel; the process is called Swelling.

(3) Syneresis.

Many inorganic gels shrink on standing, exuding solvent in the process. This is called Syneresis.

(4) Thixotropy.

Some gels are semisolid when at rest but revert to liquid sol on agitation. This reversible sol-gel conversion is referred to as Thixotropy. Iron oxide and silver oxide gels exhibit this property. The modern thixotropic paints are also an example.

12:EXAMPLES[12]

Some examples of colloids are as follows:

Dispersion Medium | Dispersed Phase | Type of Colloid | Example
Solid | Solid | Solid sol | Ruby glass
Solid | Liquid | Solid emulsion/gel | Pearl, cheese
Solid | Gas | Solid foam | Lava, pumice
Liquid | Solid | Sol | Paints, cell fluids
Liquid | Liquid | Emulsion | Milk, oil in water
Liquid | Gas | Foam | Soap suds, whipped cream
Gas | Solid | Aerosol | Smoke
Gas | Liquid | Aerosol | Fog, mist

13:APPLICATIONS[13]

Colloids play an essential role in our daily life and in industry. A knowledge of colloid chemistry is important for understanding some of the natural phenomena around us, and colloids make up many of our modern products. Some of the important applications of colloids are as follows.

(1) Foods

Many of our foods are colloidal in nature. Milk is an emulsion of butterfat in H2O stabilized by a protein, casein. Salad dressings, gelatin desserts, fruit jellies and whipped cream are other examples.

Ice cream is a dispersion of ice in cream. Bread is a dispersion of air in baked dough.

(2) Medicines

Colloidal medicines, being finely divided, are more effective and are easily absorbed by our system. The halibut-liver oil and cod-liver oil that we take are actually emulsions of the respective oils in H2O. Many ointments for application to the skin consist of physiologically active components dissolved in oil and made into an emulsion with H2O. Antibiotics such as penicillin and streptomycin are produced in colloidal form suitable for injection.

(3) Non-drip or thixotropic paints

All paints are colloidal dispersions of solid pigments in a liquid medium. The modern non-drip or thixotropic paints also contain long-chain polymers. At rest, the chains of molecules are coiled and entrap much of the dispersion medium, so the paint has a semisolid gel structure. When shearing stress is applied with a paint brush, the coiled molecules straighten and the entrapped medium is released. As soon as the brush is removed, the liquid paint reverts to the semisolid form. This renders the paint ‘non-drip’.

(4) Electrical precipitation of smoke

The smoke coming from industrial plants is a colloidal dispersion of solid particles (carbon, arsenic compounds, cement dust) in air. It is a nuisance and damages the atmosphere, so before the smoke is permitted to escape into the air, it is treated in a Cottrell precipitator. The smoke is passed over a series of sharp points charged to a high potential (20,000 to 70,000 V). The points discharge high-velocity electrons that ionise molecules in the air. The smoke particles adsorb these positive ions, become charged, are attracted to the oppositely charged electrodes and precipitate. The gases that leave the Cottrell precipitator are thus freed from smoke. In addition, valuable materials may be recovered from the precipitated smoke; for example, arsenic oxide is mainly recovered from smelter smoke by this method.

(5) Clarification of Municipal water

The municipal water obtained from natural sources often contains colloidal particles, and the phenomenon of coagulation is used to remove them. The sol particles carry a negative charge. When aluminium sulphate (alum) is added to the H2O, a gelatinous precipitate of hydrated aluminium hydroxide (floc) is produced:

Al3+ + 3H2O ⎯⎯→ Al(OH)3 + 3H+

Al(OH)3 + H+ + 4H2O ⎯⎯→ Al(OH)3(H2O)4+

The positively charged floc attracts the negative sol particles, which are coagulated. The floc, along with the suspended matter, settles down, leaving the H2O clear.

(6) Formation of Delta

River water contains colloidal particles of sand and clay, which carry a negative charge. Sea water, on the other hand, contains cations such as Na+, Mg2+ and Ca2+. As the river water meets the sea water, these ions discharge the sand and clay particles, which precipitate to form a delta.

(7) Artificial Kidney machine

The human kidneys clean the blood by dialysis through natural membranes. Toxic waste products such as urea and uric acid pass through the membranes, while colloidal-sized particles of blood proteins (haemoglobin) are retained. Kidney failure, therefore, leads to death due to the accumulation of poisonous waste products in the blood. Nowadays, the patient’s blood can be cleansed by shunting it into an ‘artificial kidney machine’. Here the impure blood is made to pass through a series of cellophane tubes surrounded by a washing solution in H2O. The toxic waste chemicals (urea, uric acid) diffuse across the tube walls into the washing solution. The cleaned blood is returned to the patient. The use of the artificial kidney machine saves the lives of thousands of people each year.

(8) Adsorption indicators

These indicators function by the preferential adsorption of ions onto sol particles. Fluorescein (Na+Fl–) is an example of an adsorption indicator, used for the titration of NaCl solution against AgNO3 solution. When AgNO3 solution is run into a solution of NaCl containing a little fluorescein, a white precipitate of AgCl is first produced. At the end-point, the white precipitate turns sharply pink.

Explanation.

The indicator fluorescein is a dye (Na+Fl–) which gives the coloured anion Fl– in aqueous solution. The white precipitate of AgCl formed by running AgNO3 solution into NaCl solution is partially colloidal in nature.

(a) Before the end-point,

Cl– ions are in excess. The AgCl sol particles adsorb these ions and become negatively charged. The negative AgCl/Cl– particles cannot adsorb the coloured fluorescein anions (Fl–) due to electrostatic repulsion. Thus the precipitate remains white.

(b) After the end-point,

Ag+ ions are in excess. The AgCl sol particles adsorb these and acquire a positive charge. The positive AgCl/Ag+ particles now attract the coloured fluorescein anions (Fl–) and turn rose-red.

Thus the end-point is marked by white precipitate changing to pink.

(9) Blue colour of the sky

This is an application of the Tyndall effect. The upper atmosphere contains colloidal dust or ice particles dispersed in the air. As sunlight enters the atmosphere (Fig. 22.33), it strikes these colloidal particles. The particles absorb sunlight and scatter light of blue colour (4600–5100 Å). The light that arrives at the earth’s surface is considerably reddened due to the removal of most of the blue light in the upper atmosphere.

14: References

[1] http://www.britannica.com/science/colloid
[2] Essentials of Physical Chemistry, Arun Bahl, B.S. Bahl, G.D. Tuli, S. Chand
[3] http://chemwiki.ucdavis.edu/Core/Physical_Chemistry/Physical_Properties_of_Matter/Solutions_and_Mixtures/Colloid
[4] http://www.chemistrylearning.com/lyophobic-colloid/ and http://www.chemistrylearning.com/lyophilic-colloids/
[5] http://www.chemistrylearning.com/difference-between-lyophobic-and-lyophilic/
[6] http://chemistry-desk.blogspot.com/2013/08/preparation-of-colloids.html
[7] http://www.chemistrylearning.com/purification-of-colloids/
[8] http://www.emedicalprep.com/study-material/chemistry/surface-chemistry/sols-stability.html
[9] http://encyclopedia2.thefreedictionary.com/Association+Colloid
[10] https://en.wikipedia.org/wiki/Emulsion
[11] https://en.wikipedia.org/wiki/Gel
[12] http://chemwiki.ucdavis.edu/Core/Physical_Chemistry/Physical_Properties_of_Matter/Solutions_and_Mixtures/Colloid
[13] http://www.chemistrylearning.com/applications-of-colloids/

Sphingolipids

Summary

In this project, the effect of cholesterol on the conversion of ceramide into sphingomyelin, glucosylceramide, ceramide 1-phosphate and sphingosine in SK-N-AS and HeLa cells is investigated. These sphingolipids, found in mammalian cell membranes, are involved in a variety of functions: they act as first/second messengers, form membrane lipid rafts, and participate in many different signalling pathways (see Figure 2).

Cholesterol is also an important component of the mammalian cell membrane. It is involved in the maintenance and stability of the membrane and in the synthesis of important molecules like vitamin D and steroid hormones. Cholesterol is synthesised by the liver but is also present in some foods. Cholesterol and sphingomyelin have a high affinity for each other because of van der Waals interactions. Recent studies show the effect of cholesterol on sphingomyelin synthase; SMase activity showed a strong negative correlation with the cholesterol/protein ratio.

Two types of tumour cells will be tested: SK-N-AS cells, from a patient with neuroblastoma, and HeLa cells, from a patient with cervical carcinoma. When dysregulated, sphingolipid metabolism is associated with the pathogenesis and development of various types of cancer. Sphingosine 1-phosphate influences cell growth in neuroblastoma cells; sphingomyelin influences cell growth in cervical carcinoma cells.

Introduction

Ceramide is converted into other sphingolipids such as sphingomyelin, glucosylceramide and sphingosine. It is known that sphingomyelin synthase is affected by cholesterol. In this project, the effect of cholesterol on the conversion of ceramide into sphingomyelin, glucosylceramide and sphingosine in SK-N-AS and HeLa cells is examined. Cellular cholesterol will be decreased by exposing cells to methyl-beta-cyclodextrin and increased by exposing them to cholesterol-methyl-beta-cyclodextrin inclusion complexes. Ceramide will be fluorescently labelled with C6-NBD. Thin-layer chromatography is then used to analyse into which sphingolipids the ceramide has been converted. A protein analysis is used to determine the amount of cells in each well, so that the samples can be compared with one another.
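To illustrate the comparison step, the sketch below normalizes each lipid signal by the protein content of its well, which is the stated purpose of the protein analysis; the lipid names follow the text, but all the numbers and the per-microgram unit are hypothetical, not measured values from this project.

    tlc_signal = {                  # C6-NBD intensity per lipid spot (arbitrary units)
        "sphingomyelin": 1200.0,
        "glucosylceramide": 400.0,
        "sphingosine": 150.0,
    }
    protein_ug = 85.0               # protein measured in the same well (micrograms)

    # Dividing by protein content corrects for wells containing different
    # numbers of cells, so samples can be compared with one another.
    normalized = {lipid: signal / protein_ug for lipid, signal in tlc_signal.items()}
    for lipid, value in normalized.items():
        print(f"{lipid}: {value:.2f} units per microgram protein")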

Main question: what is the effect of cholesterol on the conversion of ceramide into sphingomyelin, glucosylceramide, ceramide 1-phosphate and sphingosine in SK-N-AS and HeLa cells?

Hypothesis: Sphingomyelin synthase will be influenced by cholesterol; more cholesterol means more sphingomyelin. Cholesterol has no effect on the synthesis of the other sphingolipids. HeLa cells are more influenced by cholesterol than SK-N-AS cells.

Sphingolipids

Sphingolipids were named by J.L.W. Thudichum in 1884. Because of their enigmatic nature, and because Thudichum admired the sphinx of Greek mythology, he named them after the sphinx.

Sphingolipids are a class of lipids (fat-like molecules often, but not exclusively, found in cell membranes) characterized by their backbone of eighteen-carbon amino-alcohol bases. They are synthesized in the endoplasmic reticulum from non-sphingolipid precursors. This family of lipids plays an important role in membrane biology and is involved in various cell signalling pathways. (Gault & Obeid, 2010)

The three principal lipid classes (glycerolipids, sphingolipids and sterols) in animal cell membranes are well known, but few realize how many variations there are on the fundamental structure within each class, variations that underlie the specialization of each lipid. Some scientists believed that sphingolipids existed mainly to keep the cell membrane stable, but studies have shown that they play a meaningful part in several biological mechanisms. Furthermore, a small animal cell contains roughly 10^9 lipid molecules. This raises the question: what exactly are the functions of sphingolipids?

Nearly all membrane lipids are amphipathic (meaning they have both a hydrophilic and a hydrophobic region). The hydrophilic region is made up of phosphate groups, sugar residues and/or hydroxyl groups, while the hydrophobic part consists of a long-chain (sphingoid) base, which can carry two or even three hydroxyl groups. (Anthony, 2004)

Figure 1: Structure of some sphingolipids; in blue, one kind of sphingoid base (sphingosine); in red, one kind of fatty acid (palmitic acid). (Anthony, 2004)

Functions of sphingolipids

Sphingolipids are known to act as both first and second messengers in several signalling pathways. First messengers are extracellular factors, such as hormones or neurotransmitters, that can trigger biological effects in the cell such as growth and immune responses. Second messengers are intracellular signalling molecules released by the cell to trigger biological effects such as proliferation and differentiation (figure 2).

Figure 2: The different ways sphingolipids participate in cell signalling. (Obeid, 2008)

They also play an important role in membrane lipid rafts: glycolipoprotein microdomains of the plasma membrane that contain combinations of glycosphingolipids and protein receptors and function as centres for protein sorting and signal transduction. In these rafts, sphingomyelin levels are about 50% higher than in the surrounding plasma membrane. (Pike, 2009)

Metabolic pathway of sphingolipids

In the sphingolipid metabolic pathway, ceramide is at the centre of it all. It is synthesized de novo (that is, newly made from simple molecules) from palmitate and serine, which condense to form 3-keto-dihydrosphingosine; this is reduced to dihydrosphingosine, which is acylated by (dihydro)ceramide synthase (CerS) to become dihydroceramide, and finally desaturated to ceramide.

Ceramide can also be formed by sphingomyelinase acting on dihydrosphingomyelin to give dihydroceramide, followed by desaturation to ceramide, or by sphingomyelinase acting directly on sphingomyelin. This last reaction is reversible: ceramide can become sphingomyelin through sphingomyelin synthase (SMS).

Ceramide can likewise be formed from glucosylceramide through glucosylceramidase (GCase), with the reverse reaction running through glucosylceramide synthase (GCS).

Ceramide can be formed from ceramide 1-phosphate by a phosphatase, and converted back by a specific ceramide kinase (CK).

Lastly, sphingosine 1-phosphate is dephosphorylated by sphingosine-1-phosphate phosphatase (SPPase) to sphingosine, which in turn becomes ceramide through ceramide synthase (CerS).

Figure 3: Sphingolipid metabolism with the related enzymes. (Obeid, 2008)
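To keep these conversions and enzyme names straight, they can be collected in a small lookup structure. The following is a minimal Python sketch, purely illustrative and not part of the original report; it encodes only the conversions listed in this section:

# Minimal sketch: the ceramide-centred conversions described above,
# encoded as (substrate, product) -> enzyme. Illustrative only.
reactions = {
    ("ceramide", "sphingomyelin"): "sphingomyelin synthase (SMS)",
    ("sphingomyelin", "ceramide"): "sphingomyelinase (SMase)",
    ("ceramide", "glucosylceramide"): "glucosylceramide synthase (GCS)",
    ("glucosylceramide", "ceramide"): "glucosylceramidase (GCase)",
    ("ceramide", "ceramide 1-phosphate"): "ceramide kinase (CK)",
    ("ceramide 1-phosphate", "ceramide"): "phosphatase",
    ("sphingosine", "ceramide"): "ceramide synthase (CerS)",
    ("sphingosine 1-phosphate", "sphingosine"): "S1P phosphatase (SPPase)",
}

def enzyme_for(substrate: str, product: str) -> str:
    """Look up the enzyme that converts one sphingolipid into another."""
    return reactions.get((substrate, product), "no direct conversion listed")

print(enzyme_for("ceramide", "sphingomyelin"))  # -> sphingomyelin synthase (SMS)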

Ceramide

Ceramide is one of the simplest sphingolipids; as seen in figure 1, it is composed only of a sphingosine and a fatty acid. It has been suggested that the fatty acid largely determines the function and pathway of this type of sphingolipid. Which fatty acid is N-acylated to the long-chain base is determined by various genes, which are therefore responsible for several distinct ceramide species. The tissue-specific distribution of these genes suggests that some tissues require precise behaviour from particular ceramides. The broad pattern of these ceramides can be described, but some aspects remain unclear, such as how the specific enzymes are regulated and how many separate enzymes can influence the same pathway.

Ceramide is synthesized in the endoplasmic reticulum. It uses the cell's own machinery, such as vesicular transport, as well as other transport mechanisms to move from the endoplasmic reticulum to the Golgi apparatus. It relies on these mechanisms because, being hydrophobic, it cannot move through the cytosol by itself. (Anthony, 2004)

How can ceramide synthesis be quantified?

Ceramide synthesis can be quantified by using NBD C6-ceramide (6-((N-(7-nitrobenz-2-oxa-1,3-diazol-4-yl)amino)hexanoyl)sphingosine), which emits green fluorescence. Its absorption maximum is 466 nm and its emission maximum is 536 nm. Several techniques can be used to quantify ceramide synthesis, such as:

TLC (thin-layer chromatography)
HPLC (high-performance liquid chromatography)
mass spectrometry (Cremesti, 2000)
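Once fluorescence intensities have been read out (for example from TLC spots or plate-reader wells), they can be reduced to relative conversion percentages. The sketch below is a hypothetical Python illustration, with made-up intensities and a made-up background value, not a measured dataset:

# Minimal sketch: relative quantification of NBD-labelled sphingolipids
# from background-corrected fluorescence intensities. All numbers are
# hypothetical placeholders.
spots = {
    "ceramide": 1200.0,
    "sphingomyelin": 800.0,
    "glucosylceramide": 300.0,
    "ceramide 1-phosphate": 150.0,
    "sphingosine": 50.0,
}
background = 40.0  # hypothetical blank reading

corrected = {lipid: max(value - background, 0.0) for lipid, value in spots.items()}
total = sum(corrected.values())

# Express each lipid as a percentage of total NBD fluorescence, so samples
# containing different total amounts of label can be compared.
for lipid, value in corrected.items():
    print(f"{lipid}: {100.0 * value / total:.1f} %")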

Glucosylceramide

Glucosylceramide is a more complex sphingolipid: it has a carbohydrate head group attached to the 1-hydroxy group of ceramide. The simplest of these glycosphingolipids are galactosylceramide and glucosylceramide, where galactosylceramide carries a galactose group and glucosylceramide a glucose group. From these, more complex sphingolipids can be synthesized by adding further glycosyl sub-units. (Obeid, 2008)

Ceramide 1-phosphate

Ceramide 1-phosphate is formed from ceramide by a specific ceramide kinase (CK), as stated in the section on the metabolic pathway of sphingolipids. Thus far, CK is the only enzyme known to produce ceramide 1-phosphate in mammalian cells.

This sphingolipid is involved in the mitogenesis (induction of mitosis) of certain cells and also has antiapoptotic effects. Ceramide 1-phosphate is involved in inflammatory reactions as well. Its effects are exerted mostly in intracellular compartments. (Gómez-Muñoz, 2010)

Sphingomyelin

Sphingomyelin is named after the myelin sheaths around nerve cells, where it is most abundant, but it is also found in other cell membranes. Sphingomyelin is the only phospholipid that is also a sphingolipid, and it is an important component of nerve cell membranes. Sphingomyelin synthase (SMS) catalyses the transfer of phosphorylcholine from phosphatidylcholine to ceramide. The human SMS genes are SMS1 and SMS2.

The SMS1 gene product is found in the trans-Golgi apparatus, while the SMS2 gene product is associated with the plasma membrane.

Figure 4. Synthesis and degradation of sphingomyelin by the enzymes sphingomyelin synthase (SMS) and sphingomyelinase (ASMase/SMase).

Sphingomyelinase releases ceramide and phosphocholine, thereby converting sphingomyelin to ceramide. The enzyme is also called ASMase or aSMase because it functions at acidic pH.

The synthesis and degradation of sphingomyelin are shown in figure 4. (King, 2016)

Sphingomyelin has several functions. The myelin sheaths around nerve cells are rich in sphingomyelin, which suggests a function as an insulator of nerve fibres. (Voet, Voet, & Pratt, 2008) Sphingomyelin is also found in the plasma membranes of other cells, where it is important for the uptake of iron into cells. It also plays a role in the activity of some membrane-bound proteins, including certain receptors and ion channels. Sphingomyelin is the most important sphingolipid in the nucleus, because it is involved in chromatin dynamics. (Christie, 2014)

Sphingosine

Sphingosine synthesis is initiated by the condensation of serine and palmitoyl-CoA, catalysed by the enzyme serine palmitoyltransferase (SPT), forming 3-ketosphinganine (3-ketodihydrosphingosine). SPT contains two main subunits, SPTLC1 and SPTLC2; an isoform of SPTLC2 is also called SPTLC3 (LC stands for long-chain subunit). SPTLC1 is present in active SPT enzymes, while some tissues contain SPTLC2 subunits and others SPTLC3 subunits. 3-Ketosphinganine is converted into sphinganine (dihydrosphingosine) by 3-ketosphinganine reductase, sphinganine is converted into dihydroceramide by ceramide synthase, and dihydroceramide is converted into ceramide by dihydroceramide desaturase.

Through the action of ceramide synthases and ceramidases, sphingosine serves as a substrate for ceramide synthesis and ceramide as a substrate for sphingosine synthesis. (King, 2016)

The synthesis of sphingosine is shown in figure 5.

Figure 5. The synthesis of sphingosine. Synthesis starts with the condensation of serine and palmitoyl-CoA and continues through the action of 3-ketosphinganine reductase, ceramide synthase, dihydroceramide desaturase and ceramidase. (King, 2016)

Sphingosine is converted into sphingosine 1-phosphate by sphingosine kinase; sphingosine 1-phosphate can be converted back into sphingosine by different phosphatases.

Sphingosine 1-phosphate (S1P) is released into the extracellular space, where it binds to specific receptors on the plasma membrane of target cells. S1P has important roles in differentiation, migration and cell proliferation. (Nishi, 2013) Because of these roles, S1P is also relevant to the behaviour of cancer cells.

SK-N-AS and HeLa cells

In this experiment two types of tumour cells are tested. SK-N-AS cells originate from a bone-marrow metastasis of a female patient with neuroblastoma. (Aldrich, 2016) Neuroblastoma is a cancer that arises mostly in the nerve tissue of the adrenal gland, but it is also found in nerve tissue in other parts of the body. Approximately 15 percent of all childhood cancer deaths are caused by neuroblastoma. (Rahmaniyan, 2012)

Dysregulated sphingolipid metabolism is associated with the pathogenesis and development of various types of cancer.

Sphingosine kinase 2 is highly expressed in neuroblastoma cells and tissues. Sphingosine 1-phosphate (S1P), the product of sphingosine kinase 2, induces the expression of vascular endothelial growth factor (VEGF), a factor that regulates angiogenesis, which in turn is essential for metastasis and tumour growth. VEGF expression is induced via a HIF-1α-independent pathway, and sphingosine-1-phosphate receptor 2 (S1P2) expression correlates with VEGF mRNA expression. This suggests that the VEGF/S1P/S1P2 pathway may promote neuroblastoma growth. (Rahmaniyan, 2012)

HeLa cells originate from a 31-year-old female patient with cervical carcinoma. (Aldrich, 2016) Sphingomyelin synthase 1 and sphingomyelin synthase 2 are co-expressed and function as Golgi- and plasma membrane-associated SM synthases in human cervical carcinoma cells. RNA interference-mediated knockdown of sphingomyelin synthase 1 and sphingomyelin synthase 2 reduced sphingomyelin production and caused accumulation of ceramide and a block in cell growth. (Tafesse, 2007)

Cholesterol

All human cells have a plasma membrane, whose fundamental structure is the phospholipid bilayer. Besides the phospholipid bilayer, the plasma membrane contains other lipids and proteins. Another essential component of mammalian plasma membranes is cholesterol.

Cholesterol is responsible for maintaining cell structure and the stability and stiffness of the plasma membrane; about 30% of the cell membrane consists of cholesterol. (Groningen Biomolecular Sciences and Biotechnology Institute and Zernike Institute for Advanced Materials, Rijksuniversiteit Groningen, 2014)

Cholesterol is also a precursor of compounds such as vitamin D, steroid hormones and bile acids. Cholesterol has four rings in its structure, as seen in figure 6.

Cholesterol is synthesized in the liver by mammals themselves, but it is also a component of some foods; dietary cholesterol is found mostly in eggs, meat, cheese and other dairy products. Normally 30-60% of the cholesterol in the Western diet (approximately 500 mg per day, so roughly 150-300 mg) is absorbed by the gut. Cholesterol is a lipid molecule, so it is poorly soluble in water; lipoproteins transport it between organs and tissues. The sterol ring of cholesterol cannot be metabolized by humans, so it is excreted as bile acids or as free cholesterol. Approximately 50% of the cholesterol eliminated from the body each day is excreted as bile acids, and the remainder as the product of bacterial reduction of free cholesterol in the gut. The two sources of cholesterol reach cells in different ways: intracellular free cholesterol can be newly synthesized within the cell or derived from lipoproteins, whereas exogenous cholesterol can only be derived from lipoproteins.

Lipoproteins

Cells take up exogenous (dietary) cholesterol in the form of lipoproteins, of which cholesterol is one component. Lipoproteins transport fats in the body and consist of cholesterol, triglycerides, proteins and phospholipids. There are five different types of lipoproteins: very low density lipoproteins (VLDL), low density lipoproteins (LDL), intermediate density lipoproteins (IDL), high density lipoproteins (HDL) and chylomicrons. (Dominiczak, 2009)

The lipoprotein types vary in density, in apolipoprotein composition and in the amounts of phospholipids, cholesteryl esters, triglycerides, proteins and free cholesterol. The differences are shown in table 1.

                      chylomicron    VLDL           IDL            LDL          HDL
Density (g/ml)        <0.95          0.950-1.006    1.006-1.019    1.019-1.063  1.063-1.210

Components (% dry weight)
protein               2              7              15             20           40-55
triglycerides         83             50             31             10           8
free cholesterol      2              7              7              8            4
cholesteryl esters    3              12             23             42           12-20
phospholipids         7              20             22             22           22
Apoprotein            A-I, A-II,     B-100, C-I,    B-100, C-I,    B-100        A-I, A-II,
composition           B-48, C-I,     C-II, C-III,   C-II, C-III,                C-I, C-II,
                      C-II, C-III    E              E                           C-III, D, E

Table 1. Density, composition and apoprotein content of the lipoprotein classes. (Mathews, 2000)
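To make the density cut-offs in table 1 concrete, the following minimal Python sketch (an illustration added here, not part of the original report) classifies a particle by its measured density using exactly those boundaries:

# Minimal sketch: classify a lipoprotein particle by density,
# using the table 1 cut-offs (g/ml).
def classify_lipoprotein(density_g_per_ml: float) -> str:
    """Return the lipoprotein class for a given particle density."""
    if density_g_per_ml < 0.95:
        return "chylomicron"
    if density_g_per_ml <= 1.006:
        return "VLDL"
    if density_g_per_ml <= 1.019:
        return "IDL"
    if density_g_per_ml <= 1.063:
        return "LDL"
    if density_g_per_ml <= 1.210:
        return "HDL"
    return "outside lipoprotein range"

print(classify_lipoprotein(1.04))  # -> LDL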

Chylomicrons

Chylomicrons consist of approximately 1-2% protein, 1-3% cholesterol, 6-12% phospholipids and 85-92% triglycerides. (Hussain, 2000) Their function is to transport fats absorbed by the intestinal epithelial cells, via the lymphatic system and the blood, to the rest of the body. Digested dietary fats supply the chylomicrons' triglycerides, free cholesterol and cholesteryl esters. (Thompson, 2015)

Very low density lipoproteins(VLDL) and intermediate density lipoproteins(IDL)

VLDL is synthesized in the liver and transports triacylglycerols, which are also synthesized in the liver, to the tissue cells. VLDL is hydrolysed by lipoprotein lipase in the same way as chylomicrons; the resulting particle is called IDL. IDL is either further hydrolysed and transformed into LDL or taken up by the liver.

Low density lipoproteins(LDL)

Apolipoprotein B-100 is the only apolipoprotein of LDL and binds the lipoprotein particle to LDL-specific receptors. VLDL and IDL also carry apolipoprotein B-100, but they contain other apolipoproteins as well. The cholesterol in LDL is converted to steroid hormones or used as a structural component of cell membranes.

High density lipoproteins(HDL)

Excess cholesterol in cells is removed and returned to the liver, a process in which HDL plays an important role. The cholesterol that reaches the liver is metabolized to bile salts and bile acids and eliminated via the intestine. Together, HDL and LDL maintain the balance of cholesterol in the body. (Thompson, 2015)

In HDL, cholesterol is converted to cholesteryl esters by the enzyme LCAT, which is activated by apoA-I. Cholesteryl esters are transferred to VLDL and LDL by the apoD protein in HDL, while apoC-II and apoE proteins are transferred to chylomicrons and other lipoproteins. ApoE is later recognised by the liver, so the lipoprotein remnants and their cholesterol can be converted to bile acids and excreted into the duodenum. (Dominiczak, 2009) (Mathews, 2000) (Zamora, 2016)

Decreasing of cholesterol levels

LDL enters the vascular wall very easily. Cells of the immune system take up the lipoproteins, become overloaded with lipids, and change into foam cells. When foam cells die, they release the accumulated lipids, which form pools within the vascular wall. As a result, plaques arise in the vascular wall; this is called arteriosclerosis. (Dominiczak, 2009) Arteriosclerosis can lead to cerebral and cardiac infarcts and transient ischaemic attacks (TIA). (Slagaderverkalking, 2016)

People with too much cholesterol in their blood can use medication to lower it. There are different types of cholesterol-lowering medication; statins, for example, decrease cholesterol synthesis by the liver, and less LDL in the blood reduces the risk of arteriosclerosis. In this experiment the effect of cholesterol on the conversion of ceramide in SK-N-AS and HeLa cells will be tested. Statins are not an effective way to decrease cholesterol in these cells, because they only affect cholesterol synthesis in the liver. To decrease cholesterol in SK-N-AS and HeLa cells, the cells are exposed to methyl-beta-cyclodextrin; to increase cellular cholesterol, they are exposed to cholesterol-methyl-beta-cyclodextrin inclusion complexes.

The effect of cholesterol on sphingomyelin synthesis

Because of van der Waals interactions, sphingomyelin and cholesterol have a high affinity for each other. They are mostly located together in 'raft' sub-domains of membranes, and there is evidence that sphingomyelin controls the distribution of cholesterol in cells. (Christie, 2014) In a study by M.N. Nikolova-Karakashian, H. Petkova and K.S. Koumanov of the Bulgarian Academy of Sciences in Sofia, the link between cholesterol and sphingomyelin metabolism in rat liver plasma membranes was investigated. SMase activity showed a strong negative correlation with the cholesterol/protein ratio. The sphingomyelin-synthesising enzymes PC:Cer-PCh and PE:Cer-PEt transferase were stimulated by increased dietary cholesterol. These results support the link between cholesterol and sphingomyelin synthesis. (Nikolova-Karakashian, 1992)

Conclusion

Sphingolipids are part of the cell membrane and play a role in, among other things, cell differentiation and cell growth. Ceramide can be converted into other sphingolipids such as sphingosine, sphingomyelin, glucosylceramide and ceramide 1-phosphate. Sphingosine 1-phosphate affects cell growth in neuroblastoma cells (SK-N-AS); sphingomyelin affects cell growth in cervical carcinoma cells (HeLa).

Cholesterol and sphingomyelin have a high affinity for each other because of van der Waals interactions. Recent studies show an effect of cholesterol on sphingomyelin metabolism: SMase activity showed a strong negative correlation with the cholesterol/protein ratio. In the practical part of this research, the effect of cholesterol on the conversion of ceramide into other sphingolipids is investigated.

Discussion

Recent studies showed an effect of cholesterol on sphingomyelin synthesis, but the cells used in that work were rat liver cells. It is not certain that the effect of cholesterol on sphingomyelin synthesis is the same in SK-N-AS and HeLa cells. Even if the practical part of this project shows the same results as that study, we still do not know whether the effect works in the same way or whether only the end results are the same.

References

Aldrich, S. (2016, October 3). HeLa. Retrieved from Sigma-Aldrich: http://www.sigmaaldrich.com/catalog/product/sigma/93021013?lang=en&region=NL
Aldrich, S. (2016, October 3). SK-N-AS. Retrieved from Sigma-Aldrich: http://www.sigmaaldrich.com/catalog/product/sigma/94092302?lang=en&region=NL&gclid=Cj0KEQjwg8i_BRCT9dHt5ZSGi90BEiQAItdjpJxGXogqSobYyQ3H71dsjWzEJEvsl17-r4jNuP1CebsaAlOj8P8HAQ
Anthony, H. F. (2004). The complex life of simple sphingolipids. Retrieved from NCBI: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1299119/
Christie, W. (2014, July 2). Sphingomyelin and related lipids. Retrieved from the AOCS Lipid Library: http://lipidlibrary.aocs.org/Primer/content.cfm?ItemNumber=39362
Cooper, G. M. (2000). The Cell: A Molecular Approach (2nd ed.). Retrieved from the National Center for Biotechnology Information: https://www.ncbi.nlm.nih.gov/books/NBK9898/
Cremesti, A., et al. (2000). Current methods for the identification and quantitation of ceramides: an overview. Retrieved from SpringerLink: http://link.springer.com/article/10.1007/s11745-000-0603-1
Dominiczak, J. B. (2009). Medical Biochemistry. South Carolina, United States: Elsevier Health Sciences.
Gault, C. R., & Obeid, L. M. (2010). An overview of sphingolipid metabolism: from synthesis to breakdown. Retrieved from PubMed: https://www.ncbi.nlm.nih.gov/pubmed/20919643
Gómez-Muñoz, A., et al. (2010). Ceramide 1-phosphate in cell survival and inflammatory signaling. Landes Bioscience and Springer Science+Business Media.
Groningen Biomolecular Sciences and Biotechnology Institute and the Zernike Institute for Advanced Materials, Rijksuniversiteit Groningen. (2014). Doorbraak in onderzoek naar celmembranen [Breakthrough in cell-membrane research]. Retrieved from Kennislink: http://www.kennislink.nl/publicaties/doorbraak-in-onderzoek-naar-celmembraan
Hussain, M. M. (2000). A proposed model for the assembly of chylomicrons. doi:10.1016/S0021-9150(99)00397-4
King, M. W. (2016, May 11). Synthesis of sphingosine and the ceramides. Retrieved from themedicalbiochemistrypage.org: http://themedicalbiochemistrypage.org/sphingolipids.php#intro
Mathews, C. K., & van Holde, K. E. (2000). Biochemistry. Oregon, United States: Prentice Hall. Retrieved from Oregon State University, online course Biochemistry: http://oregonstate.edu/dept/biochem/hhmi/hhmiclasses/biochem/lectnoteskga/lecturenotes011199.html
Nikolova-Karakashian, M. N., Petkova, H., & Koumanov, K. S. (1992). Influence of cholesterol on sphingomyelin metabolism and hemileaflet fluidity of rat liver plasma membranes. Elsevier, 153-159.
Nishi, T., et al. (2013, August 4). Molecular and physiological functions of sphingosine 1-phosphate transporters. Retrieved from ScienceDirect: http://www.sciencedirect.com/science/article/pii/S1388198113001509
Obeid, L. M., & Hannun, Y. A. (2008). Principles of bioactive lipid signalling: lessons from sphingolipids. Nature Reviews, 12. Retrieved from PubMed: https://www.ncbi.nlm.nih.gov/pubmed/18216770
Pike, L. J. (2009). The challenge of lipid rafts. Retrieved from PubMed: https://www.ncbi.nlm.nih.gov/pubmed/18955730
Rahmaniyan, M., et al. (2012). Bioactive sphingolipids in neuroblastoma. South Carolina, US: Division of Pediatric Hematology Oncology.
Slagaderverkalking [Arteriosclerosis]. (2016, September). Retrieved from Hartstichting: https://www.hartstichting.nl/vaatziekten/slagaderverkalking
Tafesse, F. G., et al. (2007). Both sphingomyelin synthases SMS1 and SMS2 are required for sphingomyelin homeostasis and growth in human HeLa cells. The Journal of Biological Chemistry, 17537-17547.
Thompson, T. E. (2015, August). Classification and formation. Retrieved from Encyclopædia Britannica: https://www.britannica.com/science/lipid/Classification-and-formation#ref914025
Voet, D., Voet, J. G., & Pratt, C. W. (2008). Principles of Biochemistry. Wiley.
Zamora, A. (2016). Lipoproteins: good cholesterol (HDL) and bad cholesterol (LDL). Retrieved from Scientific Psychic: http://www.scientificpsychic.com/health/lipoproteins-LDL-HDL.html

Appendix 1. Plan of action

The main question is: what is the effect of cholesterol on the conversion of ceramide into sphingomyelin, glucosylceramide, ceramide 1-phosphate and sphingosine in SK-N-AS and HeLa cells?

The cell lines SK-N-AS and HeLa will be used, both of which are genetically unmodified.

Several concentrations of cholesterol (to be determined) will be tested on these cells, after which the ceramide, sphingomyelin, glucosylceramide, ceramide 1-phosphate and sphingosine in the cells will be quantified with TLC (thin-layer chromatography) and fluorescence microscopy. Protein determination will also be used to check the number of cells per well.

The cells will be depleted of cholesterol using methyl-beta-cyclodextrin without cholesterol, and saturated using methyl-beta-cyclodextrin with cholesterol.

The sphingolipids ceramide, sphingomyelin, glucosylceramide, ceramide 1-phosphate and sphingosine will be made fluorescent with NBD C6-ceramide (6-((N-(7-nitrobenz-2-oxa-1,3-diazol-4-yl)amino)hexanoyl)sphingosine).

Needed protocols:

1-PROTOCOL – CELL CULTURE

Goal: -Starting the experiment with the right kind and number of cells, in wells with the right medium.

Principle: Cells need nutrition and favourable conditions to grow.

2-PROTOCOL – EXPOSURE OF CULTURED CELLS TO C6-NBD-CERAMIDE

Goal: -By adding C6-NBD-ceramide the lipids become fluorescent, so they can be quantified.

Principle: C6-NBD-ceramide fluoresces green after excitation at 466 nm.

3-PROTOCOL – EXPERIMENTAL

'The rate of sphingomyelin synthesis de novo is influenced by the level of cholesterol in cultured human skin fibroblasts' (adding cholesterol)

Goal: -Adding and removing cholesterol from the SK-N-AS and HeLa cell lines.

Principle: Methyl-beta-cyclodextrin has a central cavity in which cholesterol can be encapsulated.

4-PROTOCOL – HARVESTING OF CULTURED CELLS

Goal: -Viewing the cells through a fluorescence microscope; getting the cells out of their wells and into Eppendorf vials.

Principle: C6-NBD-ceramide fluoresces green after excitation at 466 nm.

5-PROTOCOL – TOTAL PROTEIN QUANTIFICATION

Goal: -Determine how much protein is in each well, so that quantification can be accurately adjusted to the number of cells.

Principle: -Each cell contains a roughly fixed amount of protein, so if one well has 20% less protein than the other wells, it most likely had 20% fewer cells as well.
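As an illustration of how this normalization might be carried out in practice, here is a minimal Python sketch; the BSA standard concentrations, absorbances and well readings below are all hypothetical placeholder numbers, not values from the experiment:

# Minimal sketch: fit a BSA standard curve from BCA absorbance readings
# and use it to normalize a lipid fluorescence signal per well.
bsa_standards_mg_ml = [0.0, 0.25, 0.5, 1.0]   # known BSA concentrations (hypothetical)
absorbances = [0.05, 0.30, 0.55, 1.05]        # hypothetical A562 readings

# Least-squares fit of the line A = slope * c + intercept.
n = len(bsa_standards_mg_ml)
mean_c = sum(bsa_standards_mg_ml) / n
mean_a = sum(absorbances) / n
slope = (sum((c - mean_c) * (a - mean_a) for c, a in zip(bsa_standards_mg_ml, absorbances))
         / sum((c - mean_c) ** 2 for c in bsa_standards_mg_ml))
intercept = mean_a - slope * mean_c

def protein_mg_ml(a562: float) -> float:
    """Convert a sample absorbance into a protein concentration via the standard curve."""
    return (a562 - intercept) / slope

# Normalize a well's fluorescence by its protein content, so wells with
# different cell numbers can be compared.
well_fluorescence = 900.0   # hypothetical
well_a562 = 0.60            # hypothetical
print(well_fluorescence / protein_mg_ml(well_a562))  # fluorescence per mg/ml protein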

6-PROTOCOL – DISRUPTION AND HOMOGENIZATION OF CELLS

Goal: -Releasing the lipids from the cell membranes.

Principle:-Sonication uses sound energy to agitate particles in a solution.

7-PROTOCOL – TWO-PHASE LIPID EXTRACTION

Goal: -Extract the lipids from the homogenized solution.

Principle: -Dichloromethane is widely used as an organic solvent; after centrifugation it settles to the bottom phase, carrying the lipids with it.

8-PROTOCOL – LIPID SEPARATION BY THIN LAYER CHROMATOGRAPHY (TLC)

Goal: Separating the different sphingolipids.

Principle: The lipids each have a different solubility in the solvent and a different attraction to the stationary phase, so they travel at different speeds and therefore separate.
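Separated spots are commonly compared via their retention factor (Rf), the ratio of a spot's migration distance to that of the solvent front. The following minimal Python sketch uses hypothetical distances, purely for illustration:

# Minimal sketch: retention factor (Rf) for TLC spots, using hypothetical
# migration distances (in cm) measured from the origin line.
solvent_front_cm = 8.0

spot_distances_cm = {
    "sphingomyelin": 1.2,
    "glucosylceramide": 3.0,
    "ceramide": 6.4,
}

for lipid, distance in spot_distances_cm.items():
    rf = distance / solvent_front_cm  # Rf = spot distance / solvent front distance
    print(f"{lipid}: Rf = {rf:.2f}")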

9-PROTOCOL – PLATE IMAGING AND SPOT QUANTIFICATION

Goal: -Viewing the lipid bands on the TLC plate.

Principle: C6-NBD-ceramide fluoresces green after excitation at 466 nm.

10-PROTOCOL – SILICA EXTRACTION OF NBD-LABELED LIPIDS

Goal: -Getting a more accurate reading of the amount of NBD-labelled lipids.

Principle: -The lipids are separated on the TLC plate, so if each band is scraped off and measured with a plate reader, the fluorescence gives the individual amounts of NBD-labelled lipids.

Design of experiment:

Day 1.1: Washing cells with Methyl-B-Cyclodextrin (Depleting cells of cholesterol)

Day 1.2: Adding NBD C6-Ceramide (Colouring sphingolipids)

Day 1.3: Adding cholesterol using Methyl-B-Cyclodextrin+Cholesterol (saturating cells with cholesterol)

Day 2?.1: Harvesting cultured cells, fluorescence microscopy

Day 2?.2: Disruption and homogenization of cells

Day 2?.3: Two-phase lipid extraction

Day 2?.4: Total protein quantification

Day 3?.1: Lipid separation by Thin layer chromatography

Day 3?.2: Plate imaging of TLC plate

Day 3?.3: Extraction of NBD-labeled lipids from the TLC plate and fluorescence measurement

Table 1: Cholesterol concentrations in SK-N-AS and HeLa cell lines (in duplicate)

Cholesterol (µg)   untreated   only washed   x   x   x   x
1 SK-N-AS (ml)     2           2             2   2   2   2
2 SK-N-AS (ml)     2           2             2   2   2   2
3 HeLa (ml)        2           2             2   2   2   2
4 HeLa (ml)        2           2             2   2   2   2

Required materials and equipment per protocol:

1)

tissue culture cabinet
phosphate buffered saline (PBS)
trypsin-EDTA solution
DMEM (5% FCS), including penicillin/streptomycin
sterile blue-capped 15 ml tubes
centrifuge
p1000 pipette
sterile 1000 µl pipette tips
sterile 25 cm2 cell culture flasks
incubator
phase contrast microscope

2)

Cell culture facilities (flow cabinet, incubator, sterile pipettes etc.)
Phosphate buffered saline (PBS)
DMEM (5% FCS) (including penicillin/streptomycin)
1 mM stock solution of C6-NBD-ceramide in ethanol
Preparations by students:
Cultured cells in 25 cm2 flasks at 40 to 70% confluence (HeLa, SK-N-AS)

3)

Methyl-B-cyclodextrin (10 mg)
Cholesterol (25 mg)
50 ml tube (6)
Medium DMEM+p/s+MEM-NEAA+fbs 10% (60 ml)

4)

phosphate-buffered saline (PBS)
icebox with ice
cell scraper
15 ml tubes
2 ml vials
Eppendorf centrifuge with cooling unit

5)

bicinchoninic acid solution (BCA)
4% (w/v) copper (II) sulphate solution
bovine serum albumin (BSA) solution (1 mg/ml)
BioRad model 680 spectrophotometric plate reader

6)

sonifier (Hielscher UP100H)
Dounce
Ice box with ice
Pre-cooled phosphate-buffered saline (PBS)

7)

p1000 pipette
vortex
Eppendorf centrifuge
sample concentrator
nitrogen (10 liter bottle), including an adjustable pressure reducer
waste tank for organic solvents
dichloromethane (DCM)
methanol (MeOH)
DCM/MeOH (1:2 by vol.)
Millipore water (MQ)

8)

TLC tank (including lid)
filter paper
glass cylinder with measuring units
dichloromethane (CH2Cl2)
methanol (CH3OH)
25% ammonia (NH4OH)
glass TLC plates (20 x 10 cm kieselgel 60) – Merck # 1.95626.0001
soft carbon pencil (4B)
vortex
p20 and p200 pipettes
extra fine p20 tips
hair dryer
UV tray
iodine grains
liquid waste tank for halogen-containing organic solvents.
Solvent mix: dichloromethane (CH2Cl2) / methanol (CH3OH) / 25% ammonia (NH4OH) (60:25:1, by volume). Preparation by students.

9)

BioRad ChemiDoc XRS+ Molecular Imager station

10)

Soft carbon pencil (4B)
1.5 ml Eppendorf vials
Vortex
Aluminum foil
Curved surgical blades + holders
Black flat-bottom microtiter plates
70% ethanol

Splits within the women’s suffrage movement

Introduction

The women's suffrage movement and the abolitionists used to work together towards the same goal: suffrage and enfranchisement, or in other words full citizenship. But after the Civil War there was a split both within the women's movement and between the abolitionists and the women's suffrage movement. Part of the women's movement gave up its support of black suffrage and the other part kept supporting it, but mostly both movements now pursued their own interests, often at the cost of the other. A large part of the women's movement became increasingly racist. This essay will focus on this split within the women's movement: why it happened, who was involved, and what the consequences were.

This essay is based on secondary literature and will contain several perspectives on the events described. Alma Lutz and Lois Banner give us the perspectives of the ladies involved, Garth Pauley shows W.E.B. Du Bois' vision on women's suffrage and the alienation between black men and the white women's suffrage movement, and Philip Cohen sheds light on the influence of nationalism on the women's movement.

The thesis adopted here is that the women's suffrage movement changed its stance on black suffrage mostly because of envy and spite. When African Americans were granted full citizenship (including voting rights), the women's movement was disappointed and angry at being left behind: they had always supported the abolitionist movement, but were now ignored. This essay will focus on this premise and try to determine whether the assumption is true. The main questions to be answered are: what was the cause of the split? What did the split mean? And what were the consequences?

The first part will focus on the actual conflict and thus the cause of the split within the movement. The second part of this essay focuses on specific examples of women who changed their stance on black suffrage: the cases of Susan B. Anthony and Elizabeth C. Stanton, both of whom were important within the women's suffrage movement and both of whom became openly racist after the Civil War. The third chapter will focus on the consequences of this split within the movement.

The Conflict

In his article, Garth E. Pauley discusses W.E.B. Du Bois' vision on women's suffrage. This is an interesting perspective on the issue because it shows the other side of the story, and it also gives a neatly summarized explanation of what occurred in these tumultuous times. According to Du Bois (not to be confused with E.C. DuBois), blacks and white suffragists initially struggled together to win the vote. But when it became clear that only African American men would be enfranchised, many white suffragists spoke out against the Fourteenth and Fifteenth Amendments. White suffragists argued that the absence of women's suffrage prevented them from supporting the amendments, and many used racist arguments to support their claim. These arguments cut to the bone of the black suffragists and made clear that the principled collaboration between black and white had been a facade all along; it was suggested that the collaboration had always been about political advantage and not about principle. 1

The conflict surrounding the Fourteenth Amendment (passed in 1868) was one of the first incidents that divided black men and white suffragists, because it included the word 'male', meaning that women were now officially excluded from this enfranchisement. The Woman's Rights Association had only just morphed into the American Equal Rights Association (AERA), but tensions within the organization were already rising. Wendell Phillips and Theodore Tilton suggested turning away from woman suffrage for the moment, which was met with outrage by Susan B. Anthony and Elizabeth Cady Stanton. Other women's suffrage advocates accepted the Fourteenth Amendment with the idea that both black suffrage and woman suffrage were important, but that the nation would only accept one reform at a time. Stanton and Anthony's ideas about women's suffrage alienated many African Americans, including the important black leader Frederick Douglass. He had been a strong supporter of women's rights, but he believed in the priority of black suffrage and worried that if African Americans did not take this chance during this crucial hour, they might not get it again. 2

Although the conflict between black men and white suffragists started with the Fourteenth Amendment, the real division occurred during the debate about the Fifteenth Amendment (passed in 1870). This amendment prohibited discrimination in the voting rights of citizens on the basis of race, colour or previous condition of servitude.3 Again women were excluded from the attained privileges. The debate about the Fifteenth Amendment divided suffragists into three parties: those who thought it unwise to try to add women's suffrage (mostly the old, original abolitionists); those who thought every effort should be made to include women, but that if this proved impossible the amendment should still pass (this group included people like Lucy Stone and Henry Blackwell); and those who thought that if the amendment did not include women it should not be passed (this group was headed by Stanton and Anthony). The peak of the conflict was reached at the AERA meeting of May 1869, where Stanton made racist comments and stated that she did not believe 'in allowing ignorant negroes and foreigners to make laws for her to obey'.4 Other white suffragists, like Lucy Stone and Julia Ward, did accept the Fifteenth Amendment. At the end of the meeting the AERA split into two parties: the National Woman Suffrage Association (NWSA), which focused on woman suffrage and refused to support the Fifteenth Amendment if it did not extend the vote to women, and the American Woman Suffrage Association (AWSA), which was in favour of universal suffrage. Its members were prepared to (temporarily) support black suffrage while there was an opportunity for success. 5 Around this time many white suffragists accepted an ideology of white supremacy.

According to Kraditor, three developments led to the women's movement's support of racist ideas. The first is that more racist women joined the movement when it became more popular. The second is that many abolitionists of the old guard accepted the new ideology and came to the conclusion that black suffrage and women's suffrage were entirely unrelated things. And finally, Southern white women began to build a suffrage movement on the idea that women's suffrage would ensure white supremacy in the South; this movement focused on 'strategies of expediency', and many members were willing to accept the accompanying racist views if this meant the enfranchisement of women.6 Even Susan B. Anthony, who had been a fervent opponent of racism, was willing to go along with this movement's 'Southern Strategy'. She was willing to tolerate racism if it meant gaining the right to vote, and she became increasingly racist as time went by.7

To sum up, the two primary causes of the alienation between blacks and white suffragists were the conflict surrounding the 'Negro's Hour', the struggle for universal suffrage that resulted in suffrage only for black males, and the 'Southern Strategy', the decision by many white suffragists to ignore the race issue to win Southern support for their cause, which led to racist arguments by many of them.8

In his article, Philip N. Cohen discusses the fact that the women's suffrage movement underwent a shift in its core ideas. Before and during the Civil War, women's suffrage and abolitionism went hand in hand: African Americans and suffragists were both fighting for the same thing, equality and the right to vote. The women's rights activists collected signatures for the Thirteenth Amendment to abolish slavery. Their beliefs stressed the equality of men and women and challenged the general idea of separate gender spheres. After the 1860s this idea lost its appeal in the suffrage movement, and by the 1880s the new ideology was based on women's distinctive nature and special role in social reform, an ideology later known as 'essentialism'. This new philosophy stressed the differences between men and women. This difference was, according to the suffragists, essential to 'counter-act the excess of masculinity that is found in unjust and unequal laws'.9 This form of essentialism sacrificed any principle (most importantly, suffrage for former slaves) to attain voting rights for white women. There was also still a part of the movement that held on to the principle of natural rights; this part still identified more closely with abolitionism and more frequently included non-white women. 10

Women's and blacks' voting rights were pitted against each other more and more often. An increasing number of states accepted black suffrage, but the women's movement made no progress. After the Fourteenth Amendment was accepted, the number of abolitionist feminists greatly declined. The amendment granted explicit rights to black men, which meant that the white women's demands were ignored; this dealt a crucial blow to the natural-rights perspective in the women's movement. Prominent members of the women's movement, like Elizabeth Cady Stanton, had expected white women to follow black men in the gaining of rights. When this did not happen, Stanton and other suffragists denounced the Fourteenth Amendment as a 'desecration'. 11 This fallout revealed that the alliance between abolitionists and suffragists had mostly been tactical. The women's movement largely turned its back on the abolitionists, and the new suffrage arguments contained a strong theme of race antagonism. 12

Buechler follows the career of Elizabeth Boynton Harbert, a leader during the period of transition from equality-based to difference-based arguments. One of the changes that took place was in the wording of the movement's arguments: 'The beneficiary of women's suffrage was less often construed as women themselves and more often couched in abstract terms like society, the nation, the race, and civilization'.13 Voting rights would serve not only the interests of women but the urgent needs of the nation as a whole. Women's votes represented 'loyalty, virtue, wealth, and education', which was needed 'to outweigh the incoming tide of poverty, ignorance, and vice that threatens our very existence as a nation'.14 According to Stanton, allowing black men to vote elevated the 'lowest orders of manhood' over the 'highest classes of women'. Susan B. Anthony agreed with her, arguing that 'intelligence, justice and morality are to have precedence in the Government' and that therefore the question of the woman should be brought up first and that of the negro last.15

In its most extreme form, the NAWSA (National American Woman Suffrage Association) called for a restriction of black suffrage. This shift from the notion of inherent natural rights towards advocacy for women and against blacks' rights is essential: not only did white women exclude black women from the movement, but white women's suffrage organizations now actively pursued a nationalist gender alliance. The women's movement now sought unity with white men (and their nation) rather than with non-white women and men.16 Stanton placed the women's movement within the historical pattern of white Republican egalitarianism paired with the exclusion of non-whites from the 'national family' when she referred to white women as 'women of the Republic' in 1866.17

According to Paula Giddings, the shift in the women's suffrage movement during the post-Reconstruction era was mostly due to practicality. There was a rising trend towards restricting black voting rights, and the women's movement largely went along with it. This shift created a space in which white women sought to justify who should have the vote and why, rather than emphasize a truly universal suffrage.18 At the same time, E. DuBois argues that the Fifteenth Amendment brought a nationalist edge to the suffragists' argumentation, because it transferred control over the right of suffrage from the state to the national level; suffragists therefore had to make their case one of national importance.19 The women's movement found that enfranchising blacks would only promise them partial (Republican) support and a smaller advantage than enfranchising women, which would 'uplift the nation at its very heart, the family'.20 This trend towards essentialist feminism, which focused only on women's suffrage and was deliberately more nationalistic, shaped the white women's movement as a force of nation building. The women's movement advocated that national domination could not be complete or successful without the voting citizenship of white women.21

Buechler considers it a paradox that in the late nineteenth century the women's suffrage movement's arguments 'reinforced rather than challenged dominant notions about sex and gender'.22 Yet this reinforcement of some aspects of dominant gender relationships served, at that moment, as a strategy to advance the interests of the very specific women who led the suffrage movement. This form of essentialist feminism focused on the complementary partnership between white men and white women rather than emphasizing the conflict over black suffrage. The strategy was undeniably useful to the women who pursued it, but in the long run it may have undermined the struggle for gender equality.23 So what we see here is a strategic shift of ideology to benefit the white women's suffrage movement, a shift that sacrificed any other principle to obtain voting rights for women and meant a split between the women's rights movement and the black suffrage movement.

Eleanor Flexner adds to the discussion that Stanton believed that accepting the Fourteenth Amendment would set woman suffrage back a full century. The indignation of Anthony and Stanton knew no bounds. Stanton warned that the Republicans' advocacy of manhood suffrage would culminate in fearful outrages on womanhood, especially in the Southern states. Flexner also adds that Stanton and Anthony thought that what was often called the 'Negro's Hour' could also be the women's hour, and that both women were afraid this opportunity might not recur in a lifetime. According to Stanton and Anthony it would have been easy to include the word 'sex' in the Fifteenth Amendment, but Flexner argues that they failed to see that such a step was still far ahead of practical political possibilities, as the debate about women's suffrage had not yet been around long enough to effect any practical changes. The division within the suffrage movement was, according to Flexner, unfortunate but inevitable during the 1870s and 1880s, a period of intense economic development and change during which social forces polarized in the midst of widespread unrest. This break would continue until one of the trends, respectability or radicalism, became dominant. In the meantime there would still be victories, and the arena of politics would eventually be breached. 24

Susan B. Anthony and Elizabeth Cady Stanton

Elizabeth Cady Stanton was born in the Burned-Over District of New York, which had long been a centre of reform activity. Her family connections and lively intellectual curiosity made her a participant in these movements.25 As an activist, Stanton was first and foremost a feminist; she became an anti-slavery activist only when the Civil War broke out. This was in contrast with other important women's rights activists like Lucy Stone and Antoinette Brown, whose abolitionism came before their feminism and would remain their main commitment. Her feminism was strengthened by her love of speculation, which led her to search for the underlying principles of human and social behaviour. 26

According to Clara Bewick Colby, the women's movement took on a definite form of specific and organized demands in Stanton. Her personality 'won for the woman's cause the ear of the world', and Colby claims that 'it is not too much to claim that the condition of all women has been modified, improved, or given new trend because of the movement of which Stanton was the embodied will and purpose'. 27

Susan B. Anthony thought that striving for liberty and for a democratic way of life was a noble tradition, and she followed in it. She devoted her life to the establishment of equal rights, which according to her had to be expressed in the laws of a true republic. She recognized an extreme violation of this principle of equal rights in black slavery and in the legal bondage of women, so she became an active, courageous and effective antislavery crusader and one of the most important civil and political rights activists for women. She saw the woman's struggle for freedom from these legal restrictions as an important phase in the development of American democracy. To her this struggle was not a battle of the sexes, but a battle that anyone would fight for civil and political rights. While her goals for women were only partly realised by the time she died, she was still a crucial factor in the acceptance of her federal suffrage amendment and in the worldwide recognition of human rights. 28

Throughout her career, the origins of Stanton's ideas lay in both her extensive reading and her own experience. Her ideas were intertwined with her autobiography: her life was her primary source of ideas, and these in turn influenced her actions. The essential lines of her thought were fully developed by the 1860s and remained the same until the last two decades of her life. Her dedication to feminist individualism was a constant theme, as was her belief in the efficacy of education and the superior value of coeducation over single-sex education. She never abandoned her support of woman suffrage or her belief that reform in marital relationships was the key to human progress.29

Before and during the Civil War, Susan B. Anthony was one of the most important people within the abolitionist movement. She held antislavery meetings, made speeches and distributed leaflets whenever and wherever possible, and thus seemed to care a great deal about the cause of black suffrage and equality. 30 Although she was actively involved in the abolitionist movement, she did not forget women. She called attention to the fact that the nation had never been a true republic, because the ballot was exclusively in the hands of the 'free white male'. She asked for a government 'of the people', men and women, white and black, with Negro suffrage and woman suffrage as basic requirements. This speech was met with great enthusiasm by the Republicans, so great that they urged her to prepare it for publication, though they suggested that she delete the passage on woman suffrage. For Anthony, this was the first indication that Republicans might balk at the idea of enfranchising women. Both Elizabeth Cady Stanton and Anthony had come to expect that women's enfranchisement would be given as a reward, because the contribution of women to the winning of the war had been so great that Republicans were indebted to them for creating the sentiment for the Thirteenth Amendment. But it became more and more obvious that politicians were shying away from woman suffrage. This filled Anthony with great despair, for she firmly believed that women, who had been asking for full citizenship for seventeen years, deserved to be a higher priority than the Negro. Stanton agreed with Anthony on this point. To them, black suffrage without women's suffrage was unthinkable and an unbearable humiliation. They thought that women were better qualified for the ballot than the majority of black people, who were illiterate because of the years of slavery and thereby easy prey for unscrupulous politicians. They argued that if there had to be a limitation on suffrage, it should be on the basis of literacy, not of sex. 31

Throughout the entire country, people were thinking about the Constitution as they had not since the Bill of Rights. Several amendments were up for discussion, rebel states were being readmitted into the Union with entirely new constitutions, and Northern constitutions were being revised. According to Anthony this was the perfect time to proclaim equal rights for all: this was to be the woman's hour.32

But with the introduction of the Fourteenth Amendment came a great disappointment. Anthony found that the House of Representatives had written the word ‘male’ into the new resolution as one of the qualifications of voters. Anthony and Stanton agreed in their discussion that they needed to create an overwhelming demand for woman suffrage in this crucial time. The women set to work to gather as many of the activists as possible, which was a challenge because they had scattered over the years. Several agreed with Anthony that Congress had to be petitioned immediately to enfranchise women either before or at the same time that blacks were granted the right to vote. Anthony quickly found out that by pressing for woman suffrage, she was estranging many abolitionists. But Anthony and Stanton were determined that a petition for women’s suffrage go to Congress, so they went ahead undeterred. 33

Anthony came to realise that two powerful Republicans, Senator Sumner of Massachusetts and Thaddeus Stevens, were going to devote themselves completely to black suffrage, even though both were friendly to women's suffrage. The rest of their party would follow them, which meant the women's movement could not expect help from any lesser party members. The only alternative was therefore to appeal to the Democrats, and perhaps an occasional recalcitrant Republican, and Anthony would let nothing stand in her way. She found several supporters within the Democratic Party who were willing to present her petitions, for varying reasons: some saw justice in the demands of the women's movement, others thought white women should have precedence over blacks, and some saw support of women's suffrage as a way to spite the Republicans. During 1866, petitions for woman suffrage with several thousand signatures were presented by Democrats and some irregular Republicans. This collaboration could be seen as a success for the women's movement, but it still did not end in significant progress. 34

Both women followed Theodore Tilton (a popular newspaper editor at the height of his fame) in the idea of merging the American Antislavery Society and the women's rights group into an American Equal Rights Association that would fight for both woman and black suffrage. He suggested it be led by the well-known abolitionist (and early ally of Anthony) Wendell Phillips. Anthony trusted that both men would handle the process, never suspecting that they would oppose her ideas. But Anthony and Stanton did not have to wait long for their wake-up call. During a meeting with Wendell Phillips and Theodore Tilton about a plan for their campaign, Phillips declared that 'the time was ripe for striking the word "white" out of the constitution, but not the word "male"'. He went on to say that the question of striking out the word 'male' was present in the association as an intellectual theory, but was not seen as a practical thing to be accomplished by this convention. Anthony was outraged, as she was completely unprepared for this attitude on Phillips's part. She stated that she would rather 'cut off my right hand than ask for the ballot for the black man and not for woman' and swept out of the office. Stanton stayed to try to heal the split, but to no avail. When Anthony returned to the Stanton home, they both vowed then and there that they would devote themselves with all their might and main to woman suffrage and to that alone. 35

Consequences

Suzanne Marilley argues that to succeed the suffragists had to 'adapt goals for social change to the reform options available in the American political system' and 'put the reforms in appealing packages.' This was, according to Marilley, only possible if the movement agreed to disagree about all issues besides suffrage; they had to make the vote their single issue. This single-issue approach allowed 'the formation of a coalition that included prohibitionists, racists, anti-child labour reformers, Republicans, and Democrats but left the suffragists in control'.36 Without such a coalition, which gained the support of states like Texas, Tennessee and Arkansas, the Nineteenth Amendment (granting voting rights to women) would probably never have been ratified. 37

Buechler does not appear to agree with this idea. According to him, this form of essentialist feminism focused on the complementary partnership between white men and white women rather than emphasizing the conflict over black suffrage. Buechler says this strategy was undeniably useful to the women who pursued it, but in the long run it may have undermined the struggles for gender equality. Unfortunately this point is not elaborated further in the article, but it is suggested that Buechler thought this new strategy hindered further progress rather than helped it, as Marilley suggested. Cohen seems mostly to agree with Marilley (although perhaps not to the same extent); this can be concluded from the fact that he sees the shift in ideology as a strategic move to gain white followers.

Among other conclusions, Cohen concludes that by using difference-based feminism the suffrage leaders allowed the movement to make a more credible nationalist claim. This made feminism an acceptable part of a national movement and ideology, and helped convince male politicians and voters that white women's votes would serve the nation by complementing rather than challenging men's role. Cohen calls this 'gender alliance building'. He emphasizes that although this alliance advanced white women's suffrage, it also contributed to the oppression of non-white women and men, who were excluded from the alliance, and it reinforced women's separate and subordinate role in political rights by emphasizing the separate spheres. Cohen concludes that by acting in their own interests (and working against non-white women), white women may benefit from alliances with white men, but women of other groups will remain victims of the white women's dominance if they continue to accept the claim that white women serve the good of all women (which evidently is not the case).38

W.E.B. Du Bois thought that woman suffrage would not have any real benefits for African Americans as a race or for black women, but he still supported it. Du Bois mostly discusses the direct consequences of the conflict within the women's rights movement, which resulted in the alienation of the black men and the white women who had previously worked together.

This change in ideology also had several direct consequences: the direct alienation of African Americans from the women's movement, the nationalistic edge the suffrage movement acquired, the greater exclusivity of the women's movement and, of course, the acceptance and increase of racism within the movement. Racism also increased within the movement because the new ideology attracted a new, more racist type of woman, which in turn led to a further increase of racist ideas. Though one might say that this change of belief created a stronger unity within the white community in the South.

There are also debatable consequences. It is hard to say for certain, but it is highly likely that this split in the movement and the new ideology had consequences for racism in the United States. It is possible that because of this change in ideology racism lasted longer than it would have had the woman suffragists not turned their backs on black suffrage.

Conclusion

The thesis originally adopted was that the women's suffrage movement changed its stance on black suffrage mostly out of envy and spite. The research showed that there definitely was an undertone of anger present within the movement. This can be found in the recollections of Susan B. Anthony and Elizabeth Cady Stanton's experiences. It was obvious that both women were extremely angry and disappointed when neither the Fourteenth nor the Fifteenth Amendment enfranchised women. So in that light the thesis was correct. But there is also another part of the story.

Cohen showed that this was not just a reaction but a choice. Both Stanton and Anthony were prepared to do whatever was necessary to get woman's suffrage. The women's movement shifted from the idea of the enfranchisement of all as equals to 'essentialism'. This philosophy stressed the differences between men and women and was, according to the suffragists, essential to counteract the excesses of masculinity. This form of essentialism also sacrificed any principle to attain voting rights for white women and was meant to form an alliance with the white male population instead of the black population. It emphasized nationalism and white supremacy.

Giddings says the shift was also due to practicality. There was a growing trend towards restricting the rights of black people, and the women's movement went along with it to get on the good side of the white male population. The shift could also be explained (as Du Bois says) by the fact that the Fifteenth Amendment made suffrage the responsibility of the nation instead of the state; the women's movement responded by making itself more nationalistic. This is an argument in favour of the conscious choice to change the women's movement's ideology. Buechler also agrees that the reinforcement of the dominant gender rules and the retreat from black suffrage was a strategic move to benefit the white women's suffrage movement.

Du Bois informed us that the primary causes of the alienation between blacks and white suffragists were the conflict surrounding the 'Negro's Hour' and the 'Southern Strategy'. He gave a different perspective on the matter and repeated that the 'Southern Strategy' of ignoring race issues was mostly due to frustration with the exclusion of women from the Fourteenth and Fifteenth Amendments, which is mostly an argument in favour of the feelings of resentment that the suffragists felt.

In the chapter that looks into Susan B. Anthony and Elizabeth Cady Stanton's point of view, it becomes clear that both women were certainly angry and disappointed. So much so that they agreed to do whatever was necessary to attain women's suffrage. It can therefore be concluded that feelings of hate and resentment certainly had a place in the split of the women's movement and the change in its ideology. But as previously stated, and this argument is supported by most of the material that was used, the shift in ideology was mostly a conscious effort to get the white male voters on their side; it was a practical solution. As they said: 'by any means necessary'.

As a final note I have to add that my research is, of course, limited. I did not have access to all the material I wanted to use, because it unfortunately was not available to me. Further research would be welcome to dive deeper into the subject, give a wider view of the events that took place and the reasons behind them, and examine the consequences of the change in ideology for the black population.

Literature

Banner, L.W., Elizabeth Cady Stanton: A Radical for Woman's Rights (Boston, 1980)
Buechler, S.M., The Transformation of the Woman Suffrage Movement: The Case of Illinois, 1850-1920 (New Brunswick, 1986)
Buhle, M.J., and P. Buhle, The Concise History of Woman Suffrage: Selections from the Classic Work of Stanton, Anthony, Gage, and Harper (Urbana, 1978)
Cohen, P.N., 'Nationalism and Suffrage: Gender Struggle in Nation-Building America', Signs, Vol. 21 (1996), p. 707-727
(Speech before the Judiciary Committee of the New York Senate, May 1867, Stanton Papers, Manuscripts Division, Library of Congress; Cohen, 'Nationalism and Suffrage')
Cott, N.F., The Grounding of Modern Feminism (New Haven, 1987)
DuBois, E.C., Feminism and Suffrage: The Emergence of an Independent Women's Movement in America, 1848-1869 (Ithaca, 1978)
Feimster, C.N., Southern Horrors (Cambridge, 2009)
Kraditor, A.S., The Ideas of the Woman Suffrage Movement, 1890-1920 (New York, 1965)
Lutz, A., Susan B. Anthony: Rebel, Crusader, Humanitarian (Boston, 1959)
Marilley, S., 'Towards a New Strategy for the ERA: Some Lessons from the American Woman Suffrage Movement', Women and Politics, 9(4), p. 23-42
Norton, M.B., and R.M. Alexander, eds, Major Problems in American Women's History (Lexington, MA, 1996)
Pauley, G.E., 'W.E.B. Du Bois on Woman Suffrage: A Critical Analysis of His Writings', Journal of Black Studies, Vol. 30, No. 3, p. 383-419


Gender Differences in Using Language, Dialect Variation and Children's Genders in Language Acquisition

I. INTRODUCTION

Does gender difference influence language? To answer the question, language must first be defined. People seldom consider how language works or how it affects our relationships; that is, they do not notice the power of language. It is an indispensable means of communication, used for mutual exchange between people. "Language is defined as an advanced system of voices that allows each community to be transferred to others by the help of common rules shaped by their own characteristics in terms of emotion, thought and desire: sound, form and meaning" (Korkmaz, p. 2). It is also a living thing that develops for various reasons arising from its own internal structure.

In sociolinguistics, gender differences have an important place, because men and women are two concepts that are cited together with sociolinguistics. As male and female language use differs, their language formations will also show differences. The difference in the use of language by men and women is sometimes attributed to physical differences. However, some researchers have found the opposite. "It has been seen that there is no connection between the physical or other characteristics of the person or objects to whom this name refers in a language examination to a name" (Konig, 1992, p. 25). Lyons gives many grounds for thinking that gender classification is based on a 'natural' rather than a sexual meaning, and Wardhaugh says that "those who discriminate are those who use language, and that there is no such thing as sexuality on the ground" (Konig, 1992, p. 25). The purpose of this study is to question whether gender differences can affect the language used and whether both physical and social factors can change dialect variation.

"Men usually have to undertake more pressure than women in life and the differences in job skills may be explained in great part through differences in the ways by which they are raised" (Xia, 2013, p. 1485). Women are more effective than men at controlling communication in mixed conversation, and sometimes women can influence communication not only through language but also through body language. "Within the social sciences, an increasing consensus of findings suggests that men, relative to women, tend to use language more for the instrumental purpose of conveying information; women are more likely to use verbal interaction for social purposes with verbal communication serving as an end in itself" (Newman, Groom, Handelman, & Pennebaker, 2008, p. 212). The importance of this study is to increase language learners' awareness of gender differences in using language, dialect variation and children's genders in language acquisition.

II. LITERATURE REVIEW

a. Gender Differences in Using Language

All societies have diversity in language use. It is a fact that most people, both women and men, vary in how they use language. The variability in even one person's language use between daily life and formal settings reduces the likelihood of there being no diversity between women's and men's conversations. Socialization patterns, the environments in which people live, and even differences in political opinion can affect women's and men's speaking styles (Louazani, 2015). In this study, the language use of men and women will be examined alongside several studies.

"At a discourse level, men are more likely to use familiar forms of address even where the real status of speakers suggests that a formal, impersonal tone is more appropriate, and while women are more likely to initiate conversations, they succeed less often because males are less willing to co-operate. Women tend to use tag questions more frequently; men are more likely to use commands where women would phrase them as interrogatives" (Louazani, 2015, p. 25).

Some stereotypes of male and female speech are listed below.

Stereotypes of male speech:

Use deeper voices / lower pitch.
Swear and use taboo language.
More assertive in group interaction (interruptions, few tag questions).
Topics are "traditional" male topics like business, politics, economics.
Use non-standard speech, even among the middle class.
Use explicit commands ("gimme the pliers").

Stereotypes of female speech:

Minimal responses: mhm, yeah, mmmmm.
Talk more than men.
Use more tag questions.
Use more interrogatives.
Use more hedges (sort of, kind of).
Use more super-polite speech: "would you please".

(Louazani, 2015, p. 26).

Although women's roles are traditionally seen as those of housewives, women are better informed than men in many areas and can be considered to use language more precisely because of their interrogative personality (Newman, Groom, Handelman, & Pennebaker, 2008). It is also stated that, as the majority of primary school teachers are women, they play a leading role in standardizing language norms in society. The differences in people's daily speech have been investigated, and the picture of gender differences in language has been further clarified. "Mirroring phrase-level findings of tentativeness in female language, women have been found to use more intensive adverbs, more conjunctions such as 'but', and more modal auxiliary verbs such as could that place question marks of some kind over a statement; unlike women, men have been found to swear more, use longer words, use more articles, and use more references to location" (Newman, Groom, Handelman, & Pennebaker, 2008).

Differences in Intonation

Intonation can signal people's communicative goals; independent of the words used, it helps make clear what the speaker intends, and a message can be conveyed more effectively through appropriate toning. "Women often like to speak in a high-pitch voice because of physiological reason, but scientists point out that this also associates with women's 'timidity' and 'emotional instability'" (Xia, 2013, p. 1485). Men, unlike women, tend not to use such marked or inquisitive tones.

"Example: Husband: When will dinner be ready?

Wife: Around six o'clock.

The wife is the only one who knows the answer, but she answers her husband with a high rise tone, which has the meaning 'will that do'. This kind of intonation suggests women's gentility and docility. The husband will surely feel his wife's respect." (Xia, 2013, p. 1485). In Table 1, the data from this research are presented to the reader.

Table 1. The pitch data of stressed syllable and nuclear pitch accent of female and male samples (Jiang, 2011, p. 975).

Sentence type   Stressed syllable (F)   Pitch accent (F)   Stressed syllable (M)   Pitch accent (M)
1-Decl.              7.433                  11.147                3.275                 2.218
2-Dec-q.             9.539                  14.509                5.767                 6.310
3-Yes/no-q.         10.544                  13.224                8.539                 4.381
4-Wh-q.             10.415                  13.650                7.025                 4.396
5-Excl.              9.638                  12.718                7.336                 4.160

F = Female; M = Male

Some researchers have also pointed out that when women do not trust what they are saying, or when they feel more emotional, their tone of voice can fall, while men may deliberately keep their tone lower than it naturally is. The difference between men and women in voice pitch is evident in Table 1.

Differences in Manners

Women are more emotional in their speech, so women use more polite words with other speakers. Because of their kindness in conversation, words such as 'please' and 'sorry' are used more often by women (Xia, 2013). "Besides this, women also show that they are reserved when they talk and the following table is based on the research of Zimmerman and West on the interruptions men and women made in a conversation" (Xia, 2013, p. 1487). It is often assumed that women talk more than men, but the data on interruptions suggest a different picture of who dominates a conversation.

Table 2. Interruptions during the conversation

                Male   Female   Total
Interruptions    46      2       48

Contrary to expectations, this study shows that men are more impatient than women because of their desire to speak, and that men interrupt far more often than women in mixed conversation. In addition, it shows that women do not disturb others during the conversation and that women encourage other speakers to talk.

Differences in Vocabulary

It has been noticed that women choose different words from men in order to show their feelings. "We can notice that men and women tend to choose different words to show their feelings, for example, when a woman is frightened, she usually shouts out, 'I am frightened to death'! If you hear a man says this, you'll think he is a coward and womanish" (Xia, 2013, p. 1486). These differences were examined in the selection of color words and the use of adverbs, adjectives, diminutives and pronouns.

Color Words

“There is special feminine vocabulary in English that men may not, dare not or will not use and women are good at using color words that were borrowed from French to describe things, such as mauve, lavender aquamarine, azure and magenta, etc., but most men do not use them” (Xia, 2013, p. 1486).

Women often use language more elegantly than men, and these differences are revealed in their vocabulary choices.

Adjectives

“In everyday life, people can notice that women like to use many adjective, such as adorable, charming, lovely, fantastic, heavenly, but men seldom use them. When a woman leaves a restaurant, she will say “It’s a gorgeous meal.” If a man wants to express the same idea, he may only say, “It’s a good meal.” Using more adjectives to describe things and their feelings can show that women are more sensitive to the environment and more likely to express their emotions with words, which makes women’s language more interesting than men’s sometimes” (Xia, 2013, p. 1486).

Men do not tire themselves with indirect description; instead of using elaborate adjectives like women, they use language more simply, clearly and fluently.

Adverbs

"There are also differences in the use of adverbs between men and women. Women tend to use such adverbs as awfully, pretty, terribly, vastly, quite, so; men like to use very, utterly, really. In 1922, Jespersen found that women use more so than men do; for example, 'It was so interesting' is often uttered by a woman" (Xia, 2013, p. 1486).

In women's language use, small details can be exaggerated through such adverbs when describing events.

Diminutives

“Women like to use words that have the meaning of ‘small’, such as bookie, hanky. They also like to use words that show affections, such as dearie, sweetie. If a man often uses these words, people will think that he may have psychological problem or he is not manly. Furthermore, women like to use words that show politeness, such as please, thanks, and they use more euphemism, but “slang” is considered to be men’s preference. From the study people can see that men and women have their own vocabulary choices in achieving emphatic effects. Though in the area of vocabulary, many of the studies have focused on English, we cannot deny that sex differences in word choice exist in various other languages. People need to learn to make these distinctions in their childhood” (Xia, 2013, p. 1486).

While women instinctively use affectionate words such as 'my dear' and 'sweetheart', men are expected to be strict and plain in their word choice (Xia, 2013).

b. Dialect Variation

“All known societies classify people at birth as “male” or “female” according to the anatomical distinctions indicating their potential reproductive role, but this is in practice a social classification, relating biological sex to a wider set of social practices, norms, and relations” (Dunn, 2013, p. 2).

Even though talking and dressing are thought of as individual actions, the individual is in fact shaped by the effects of society. Even a small group in a workplace, or the people living in the same country, can direct a person's way of speaking, which is called dialect. "The term 'dialect' is used in the variationalist tradition to refer to systematic linguistic variation statistically associated with a sociolinguistic parameter, and as such can be difficult to delimit" (Dunn, 2013, p. 2). It cannot be said that dialect is fixed, because it can be influenced even by people's socio-cultural considerations.

“It is well known that gender interacts with other social variables that affect phonology, including regional and ethnic dialects and gender-correlated differences in the production of prestige forms and innovative forms of speech have been reported frequently in the sociolinguistics literature” (Clopper, Corney, & Pisoni, 2005).

The use of different language by women and men can also be seen as a reason for the formation of various kinds of dialects. "It is useful to emphasize the importance of social gender rather than biological sex in the use of different languages and also it is social gender that the vocabulary that both genders can use is intrinsic to female or male speech" (Demir, 2010, p. 102). Twenty years ago, in some villages of Alanya in Turkey where dialect research was done, women addressed their husbands, and even their younger brothers, as "big brother", and brides were hesitant to pronounce their father's name directly.

According to ancient customs and traditions, women regard men as more important than themselves, showing respect by not even saying their names. That is why gender differences exist in this area. When women come together they talk differently, and small communities create divergence in the use of language. Men are more relaxed in their use of language than women (Demir, 2010).

“The importance of biological sex in communication systems extends beyond humans too, for example, in many bird species the songs of males and females are distinct. Furthermore, it is not uncommon for birdsong to be transmitted through social learning, leading to vocal repertoires which are differentiated by geographical region – referred to as regional dialects” (Dunn, 2013, p. 2).

That is to say, it should not be regarded as simply a difference in language acquisition or language use. Language can be affected by everything.

“In an extensive review, Labov summarized the observed production differences with the three principles shown below.

Women use more prestige forms than men, and, conversely, men use more nonstandard forms than women.

Women favor incoming prestige forms in changes from above, which are defined as involving forms associated with a high level of social consciousness.

Women also tend to lead in changes from below, which involve variation that has not become stereotyped or associated with particular social groups. However, in a minority of cases such as diphthong centralization on Martha’s Vineyard, men can lead in changes from below” (Clopper, Corney, & Pisoni, 2005).

Dialects are determined more by women, because children's first language formation begins when they are with family members. While the family is the first teacher, the first tutor in the family is none other than the mother. School, which children enter after their first steps in life at home, is the first place where they meet wider language communities.

” … boys’ acquisition of the men’s dialect accompanies social and ritual recognition of their entering the men’s world. This probably contributes to the historical instability of gender dialects, as the interruption of traditional social practices may also interrupt men’s dialect acquisition. A number of descriptions of gender dialects explicitly mention that in quoted speech the gender dialect of the person quoted may be used, even where this is otherwise not the gender dialect used by the speaker” (Dunn, 2013, p. 4).

The use of language by men and women can be completely different, depending on the situation the women and men are in. The actual formation of the dialect can also be affected by gender differences.

Gender dialects can also involve lexical changes. In other words, it is possible for men and women to pronounce the same word differently; the meaning of the word is unchanged, and the change affects only its form. "For example, certain nouns in Awetí which are vowel initial in the women's dialect are pronounced with initial n- in the men's dialect. There are also cases where men's and women's lexemes have no obvious etymological relationship" (Dunn, 2013, p. 6). In short, the language of men and women reflects different uses, even in the lexicon, with different affixes.

One study examined the Yanyuwa language, in which gender dialect differences are very apparent. The data from this research are presented below.

“The Yanyuwa language (Pama-Nyungan) of northern Australia has a complex and well-described gender dialect distinction. The main difference between the dialects is in syntactic categories and their morphological marking: The female dialect distinguishes two noun classes, ‘male’ and ‘masculine’, where the male dialect only has one. In the female dialect ‘male’ and ‘masculine’ noun classes are indicated by different prefixes see Table 3” (Dunn, 2013, p. 10).

Table 3. Noun class prefixes in the female dialect of Yanyuwa

Noun class   Nominative   Non-nominative
Male         nya-         nyu-
Masculine    ∅            ji-

"In the male dialect these correspond to a single noun class, marked by different prefixes in non-nominative cases and by zero in the nominative (see Table 4), like the women's masculine class. The women's dialect also makes more distinctions in third-person pronouns than the men's dialect; these distinctions are highlighted in Table 5" (Dunn, 2013, p. 10).

Table 4. Noun class prefixes in the male dialect of Yanyuwa

Noun class       Nominative   Non-nominative
Male/masculine   ∅            ki-

Table 5. Third-person pronouns in male and female Yanyuwa dialects

       Women's dialect   Men's dialect
he     yiwa              yiwa
she    anda              anda
it     alhi              anda

“The Yanyuwa language was no longer being transmitted at the time that the gender dialects were documented, so we only have speakers’ reminiscences of how language acculturation happened rather than direct observations, … all children acquire the women’s dialect first from their caretakers. In Yanyuwa society, boys underwent formal initiation at the age of ten, after which they were expected to speak men’s dialect, and rebuked if they spoke the women’s dialect by mistake and older speakers could use the inappropriate gender dialect for various kinds of humorous or rhetorical effect” (Dunn, 2013, p. 11).

c. Children Genders in Language Acquisition

It is difficult to tell from newborn babies themselves whether they are girls or boys. In other words, when babies are born they are introduced to the concept of gender not by themselves but through the consciousness of their parents, traditionally marked by colour, for example blue for boys and pink for girls. Different objects and ornaments are used in particular cultures to make the gender of a baby clear. "In addition to the visual, color-coding sign, another early attribution of gender is the linguistic event of naming the baby. Moreover, from early childhood girls and boys are interpreted differently, and interacted with differently and people usually behave more gently with baby-girls and more playfully with baby-boys" (Savickienė & Kalėdaitė, 2007, p. 285). That is, the characteristics of masculinity and femininity are given to infants through names, toys or clothes.

Girls are more often treated with polite attitudes and given toys such as dolls, with which they can play quietly on their own, while boys, who like to play with balls, cars and even toy guns, are warned more often than girls because these are games through which they consume their energy.

“As for linguistic aspects, there is enough evidence to claim that girls are usually more advanced in language development than boys (it is obvious, though, that individual differences exist). Girls begin to talk earlier; they articulate better and acquire a more extensive vocabulary than boys of the same age. Studies of verbal ability have shown that girls and women surpass boys and men in verbal fluency, correct language usage, sentence complexity, grammatical structure, spelling, and articulation” (Savickienė & Kalėdaitė, 2007, p. 286).

The capacity of each child is different, like that of every individual, and the ages at which children in the same family begin to talk can differ. Girls can learn to talk faster than boys; this may be inherited, but there is no certainty about heritability.

“A language may have two or more such classes or genders. For a noun to belong to a particular declension class often implies that it also belongs to a particular gender. The classification very often corresponds to a real world distinction of sex. Correlations of this sort are, however, never perfect; that is, membership in a particular gender is most often a matter of arbitrary stipulation” (Savickienė & Kalėdaitė, 2007, p. 286).

The gendered meanings of names can vary, but this is a change that depends on people's enjoyment of assigning gender characteristics.

"Research shows that children are capable of distinguishing differences in biological sex at around the age of 2;6. The category of gender becomes an issue in the process of language acquisition when a child finds out that sex is an inherent property and does not change even if clothes are changed" (Savickienė & Kalėdaitė, 2007, p. 287).

In this period, children can recognize which gender group they belong to and identify themselves as members of that group.

"In a study involving Lithuanian children, the researcher suggested that the gender of the child has little effect on language acquisition. Most researchers claim that during the early stages of language acquisition it is problematic for a child to distinguish between genders because the category of gender is a problematic issue in itself. In the data, words which have distinct formal gender markers already appear in early recordings, at 1;7. The frequency of nouns marked masculine or feminine is displayed in Table 6 and Table 7" (Savickienė & Kalėdaitė, 2007).

Table 6. The distribution of masculine and feminine nouns in Rūta's speech (1;7–2;5) (Savickienė & Kalėdaitė, 2007, p. 288).

Age:    1;7   1;8   1;9   1;10   1;11   2;0   2;1   2;2   2;3   2;4   2;5   Total   %
FEM      18   131   426    387    310   396   359   369   337   422   374    3529   40%
MASC     18   167   469    698    454   479   528   622   622   643   487    5187   60%

Note 4: Rūta is a first-born and only child of a middle-class family living in Vilnius. Her speech was recorded in natural everyday situations by her mother, a philologist. Recordings were made three or four times per week; they lasted about fifteen minutes each. For the present study the authors chose to analyse Rūta's speech covering the period from 1;7 to 2;6. The corpus consists of 35 hours of recordings. The recorded speech was transcribed by the girl's mother according to the requirements of CHILDES, or Child Language Data Exchange System.

Note 5: Monika is also a first-born and only child of a middle-class family living in Kaunas. The corpus consists of diary remarks and almost 45 hours of recordings (transcribed and only partly coded according to CHILDES; therefore, the authors were not able to provide the statistical data) (Savickienė & Kalėdaitė, 2007, p. 288).

Table 7. The distribution of masculine and feminine nouns in Mother's speech (1;7–2;5) (Savickienė & Kalėdaitė, 2007, p. 289).

Age:    1;7   1;8   1;9   1;10   1;11   2;0   2;1   2;2   2;3   2;4   2;5   Total   %
FEM      68   282   789    707    308   553   408   410   436   387   477    4825   45%
MASC    100   391   827   1096    458   583   430   529   561   500   452    5927   55%

"The data show that masculine and feminine nouns in Rūta's speech appear in equal numbers only during the 1;7 period (see Table 6). Starting with 1;8 and up to the period of 2;6, masculine nouns are more frequent. The same tendency is noticed in Mother's speech: during the entire period of observation masculine nouns are more common than feminine nouns. The 1;10 period is exceptional in this respect: masculine nouns are especially dominant, and the same tendency is observed in Rūta's speech" (Savickienė & Kalėdaitė, 2007, p. 289).

This can be explained by a change in Rūta's language, since Rūta shows more masculine language features in this study.

“The correct usage of feminine nouns during the early period of language acquisition in Rūta’s case could be explained within the framework of a hypothesis which relates the early and unproblematic acquisition of certain grammatical categories (e.g. of gender or case) to the child’s gender” (Savickienė & Kalėdaitė, 2007, p. 289).

It is important to note that girls are more likely than boys to use standard, formal forms of the language.
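As a quick arithmetic check, the totals and percentages reported in Tables 6 and 7 can be reproduced from the monthly counts. The following minimal Python sketch (with the counts transcribed from the tables above) confirms the 40%/60% split for Rūta and the 45%/55% split for her mother:

```python
# Monthly noun counts transcribed from Tables 6 and 7 (ages 1;7 to 2;5).
ruta = {"FEM": [18, 131, 426, 387, 310, 396, 359, 369, 337, 422, 374],
        "MASC": [18, 167, 469, 698, 454, 479, 528, 622, 622, 643, 487]}
mother = {"FEM": [68, 282, 789, 707, 308, 553, 408, 410, 436, 387, 477],
          "MASC": [100, 391, 827, 1096, 458, 583, 430, 529, 561, 500, 452]}

for speaker, counts in (("Ruta", ruta), ("Mother", mother)):
    grand_total = sum(sum(values) for values in counts.values())
    for gender, values in counts.items():
        share = 100 * sum(values) / grand_total
        print(f"{speaker} {gender}: {sum(values)} nouns ({share:.0f}%)")
# Output: Ruta FEM 3529 (40%), MASC 5187 (60%);
#         Mother FEM 4825 (45%), MASC 5927 (55%)
```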

III. CONCLUSION

In conclusion, the effects of language and gender on each other have been researched, since language and gender are both intriguing and open to research. Language and gender are subjects of sociolinguistics. The meaning of the word 'language' has been given, and it has been emphasized that language is transmitted from culture to culture. In the research on the effect of gender difference on language, both the view that gender difference influences language and the view that it has no effect have been presented to the reader. It has been stated that changes in language use are influenced by social factors, because women and men use language in different domains. In other words, it is emphasized that the person who uses the language, and even their political views, influence its usage, regardless of gender.

In one study, stereotyped examples were given for the physiology and formation of male and female language use. It was shown that women's voice pitch is higher than men's for physiological reasons, and that men's pitch relates to confidence, a low tone being taken as a marker of self-confidence. In a test of the stereotypical belief that women constantly interrupt men, men were in fact shown to interrupt far more often than women. In the use of color words, adjectives and pronouns, it was found that women use more ornate language while men prefer simple, clear and understandable language.

Moreover, William Labov summarized the matter in relation to accent. In all the cases studied, women show different language use from men, and this gender effect appears as different dialect usage.

In the Yanyuwa language (Pama-Nyungan) of northern Australia, research has shown that the dialect difference is clearly recognizable: the female dialect makes more distinctions than the male dialect. The Yanyuwa language is no longer being transmitted, so the dialect difference has been described from speakers' recollections rather than direct observation (Dunn, 2013).

Additionally, research suggests that the influence of a child's sex on language acquisition is very small. In the Lithuanian data, words with distinct gender markers are acquired by girls and boys alike, with little difference in learning or even pronunciation.

Finally, this study has examined the concepts of gender and language, and research and observation have shown the effects of gender on language use. It has also been reported that gender difference causes different words to be produced even within dialects, and that dialectal differences exist in this respect. Other research shows that, in children who learn a language other than their mother tongue, the concept of gender is not as influential as in adults, and the study concludes that the concept of gender may be more influential in younger children.

For further studies it is suggested that such experiments be repeated with more current data, and that gender differences in the human brain be reported in the field of language use. Extensive fieldwork is advisable so that more information about language can be gathered and the effects of factors other than gender on language can be investigated.

IV. REFERENCES

Clopper, C.G., Corney, B., & Pisoni, D.B. (2005). Effects of talker gender on dialect categorization. Journal of Language and Social Psychology, 24(2), 182-206.
Demir, N. (2010). Türkçede Varyasyon Üzerine. Ankara Üniversitesi Dil ve Tarih-Coğrafya Fakültesi Türkoloji Dergisi, 17(2), 102.
Dunn, M. (2013). Gender determined dialect variation. Max Planck Institute for Psycholinguistics, 1-27.
Jiang, H. (2011). Gender difference in English intonation (pp. 974-977). China: Sichuan University.
Konig, G.Ç. (1992). Dil ve Cins: Kadın ve Erkeklerin Dil Kullanımı. Dilbilim Araştırmaları, 25-36.
Korkmaz, Z. (n.d.). Türk Diline Gönül Verenler. Sosyal Bilimler Dergisi, 1-11.
Louazani, K. (2015). Gender Linguistic Behavior. Tlemcen: Prof. Smail Benmoussat.
Newman, M.L., Groom, C.J., Handelman, L., & Pennebaker, J.W. (2008). Gender Differences in Language Use. Discourse Processes, 45, 211-236.
Savickienė, I., & Kalėdaitė, V. (2007). The Role of a Child's Gender in Language Acquisition. Eesti Rakenduslingvistika Ühingu Aastaraamat, 3, 285-297.
Xia, X. (2013). Gender Differences in Using Language. Theory and Practice in Language Studies, 3(8), 1485-1489.


Emergence of Augmented Reality (AR) and Virtual Reality (VR)

Abstract

Augmented and virtual reality have been used widely in science, technology, engineering and mathematics. These technologies promise to change our lives like no other. While VR replaces your vision, AR adds to it. From learning new skills to meeting people virtually in an immersive experience built from 3D graphics, AR/VR has let humans witness what once seemed impossible. AR and VR have numerous uses and huge benefits in the real world. Moreover, with the onset of these technologies, the wear on real-world objects in training has been reduced considerably. This paper presents detailed information about AR/VR systems, the requirements to build such systems, and their applications. It then presents the tools and software used for recreating realistic environments and a comparison between different types of AR and VR systems. After that, we outline a road map for selecting an appropriate system according to the field of application. Moreover, we discuss how AR and VR affect the human brain and produce simulations. Finally, the conclusion covers the future and scope of these technologies, including their increasing use in the medical industry.

Keywords – Virtual Reality, Augmented Reality, Immersion, Virtual Training, Simulation, AR/VR systems.

I. INTRODUCTION

Until recently, augmented reality (AR) and virtual reality (VR) technologies served primarily as an inspiration for science fiction writers and special effects teams. After a long drought, these technologies took off in the 1990s and have now become bigger than ever. Computer-generated 3D environments allow users to enter and interact with alternate realities. Virtual reality (VR) and augmented reality (AR) are popular names for an absorbing, interactive experience in which a person perceives a synthetic environment by means of a special human-computer interface. Users are able to immerse themselves to varying degrees in the computer's artificial world.

AR/VR have become two of the main technologies discussed for their applications, usage, and the different system types that can achieve huge benefits in the real world. VR can be considered a fully synthesized visual environment built with appropriate computer technologies, whereas AR adds computer-generated content to the real view. In most learning environments, VR makes it possible for many learners or trainees to simulate the real world. The benefits of this technology often start with computer graphics and extend far beyond them.

Nowadays, augmented and virtual reality are widely used in the tourism industry in collaboration with the photography and videography industries. This enables users to experience magical destinations before actually visiting them. Such immersion technologies add sensations of movement (for example a roller coaster simulator), feeling (for example when the user is sprayed with water) and even smell. Further in this paper we discuss in depth the numerous benefits of AR and VR.

II. EMERGENCE OF VR AND AR

II a. VIRTUAL REALITY

The story of virtual reality starts in 1965, when Ivan Sutherland described the concept in a research paper titled 'The Ultimate Display'. The term 'virtual reality' itself was later popularized by Jaron Lanier, the founder of VPL Research, and the technology became better known to the public through devices comprising a helmet, gloves, etc. The emergence of VR can be highlighted by the following:

A. The Sensorama (1962, Morton Heilig)

The Sensorama consisted of multiple sensory channels, allowing a previously recorded color film to be augmented with sound, smell, wind and other sensations. Sensorama let people experience interactive cinema. Fig. 1 shows the Sensorama machine.

B. The Ultimate Display (1965, Ivan Sutherland)

The Ultimate Display was an attempt to combine interactive graphics with sound, smell and force feedback to imitate the real world. It suggested using a head-mounted display as a window into VR. The Ultimate Display creates an illusion that you are in a room where a computer can control the existence of matter. Fig. 2 shows the Ultimate Display.

C. The Sword of Damocles (1968, Ivan Sutherland)

The Sword of Damocles was not merely a concept but the first piece of VR hardware: the first head-mounted display (HMD), constructed by Sutherland. It produced stereo sound that was updated according to the position and orientation of the user, and it can be seen as the implementation of the Ultimate Display.

Fig. 2: The Ultimate Display

D. GROPE (1971, University of North Carolina)

GROPE is a "prototype of a force-feedback system". It allows users to feel a simulated computer force. GROPE consists of a simple glove with a specific structure that gives sensible feedback through "mechanically complex exoskeletal hand masters". It aimed at combining a haptic and a visual display and was used by chemists for a drug-enzyme docking procedure. Fig. 3 shows the GROPE system.

Fig. 1: The Sensorama machine

Fig. 3: GROPE force feedback display

E. VIDEOPLACE (1971, Myron Krueger)

VIDEOPLACE is a virtual or conceptual environment with no real existence. It controls the relationship between the images of users and the objects in a graphics scene. The silhouette of each user is captured by a camera and projected on a screen. Such systems enable users to interact with other participants and objects.

A VIDEOPLACE consists of two adjacent rooms; the camera captures the gestures, and each participant can be seen by the other in both rooms. Fig. 4 shows the working of VIDEOPLACE.

F. VIVED (created in 1984)

VIVED is an abbreviation of "Virtual Visual Environment Display", created at NASA Ames. It consists of a stereoscopic monochrome HMD. It was created to enable people to describe their own digital world for other people to see as a 3D space. Fig. 5 shows an example of VIVED.

Fig. 4: The VIDEOPLACE

Fig. 5: VIVED

G. VPL (Data Glove created in 1985 and the Eyephone HMD created in 1988)

VPL is the company that created the Data Glove and the Eyephone HMD, the first commercially available VR hardware for the public. The Data Glove was used as an input device. The Eyephone is a head-mounted display used to provide a feeling of immersion.

H. BOOM (created in 1989, Fake Space Labs)

The Binocular Omni-Orientation Monitor (BOOM) is "a small box containing two CRT monitors that can be viewed through the eye holes." The user holds the box up to the eyes and moves through the virtual environment while the box's position and orientation are tracked. Fig. 6 shows the BOOM machine.

I. Virtual Wind Tunnel (created in 1990)

The Virtual Wind Tunnel was developed at NASA Ames to allow the monitoring and investigation of flow fields using a BOOM and a Data Glove [3]. This type of VR helps scientists use a Data Glove to input and manipulate "the streams of virtual smoke in the airflow around a digital model of an airplane or space shuttle. Moving around (using a BOOM display technology) they can watch and analyze the dynamic behavior of air flow and easily find the areas of instability".

Fig. 6: Binocular Omni-Orientation Monitor (BOOM)

J. CAVE (1992)

CAVE is a scientific visualization system. The user wears LCD shutter glasses in order to experience the simulation. It consists of three walls and one door as the fourth wall. Projectors illuminate all the flat walls, which gives the user a better sense of full immersion.

II b. AUGMENTED REALITY

The term augmented reality was coined by a Boeing researcher named Tom Caudell in 1990. He was asked to create a replacement for Boeing's system of large plywood boards carrying wiring instructions for each aircraft being built. Caudell and his co-worker David Mizell proposed a head-mounted display for construction workers that superimposed the position of cables through the eyewear and projected them onto multipurpose, reusable boards. Instead of having to use different boards for each aircraft, the custom wiring instructions could instead be worn by the workers themselves. The emergence of augmented reality can be highlighted as follows:

A. Virtual Fixtures (created in 1992, USAF Armstrong Labs)

Because 3D graphics were too slow in the early 1990s to present a photorealistic and spatially-registered augmented reality, Virtual Fixtures used two real physical robots, controlled by a full upper-body exoskeleton worn by the user. To create the immersive experience for the user, a unique optics configuration was employed that involved a pair of binocular magnifiers aligned so that the user's view of the robot arms was brought forward so as to appear registered in the exact location of the user's real physical arms. Fig. 7 shows the virtual fixture.

B. Hybrid Synthetic vision system (created in 1998)

In 1998, NASA created a hybrid synthetic vision system for their X-38 spacecraft. The system leveraged AR technology to provide better navigation during flight training. Fig. 8 shows a mock-up of the map data displayed on a pilot's screen.

Fig. 7: Virtual Fixtures

Fig. 8: AR map data displayed on the pilot's screen

C. The AR Game (created in 2000)

AR Quake was launched as the first AR game. As well as a head-mounted display, players had to wear a backpack containing a computer and gyroscopes.

D. AR Tennis (created in 2005)

The early 2000s saw the debut of augmented reality apps for smartphones. One of the first was AR Tennis, a two-player AR game developed for Nokia phones and the first example of a face-to-face collaborative AR application for mobile phones. In this application two players sit across a table from each other with a piece of paper between them on which a set of AR Tool Kit markers is drawn. Computer vision techniques are used to track the phone position relative to the tracking markers, as sketched in the example below. When a player points the phone camera at the markers, they see a virtual tennis court overlaid on live video of the real world. Fig. 9 shows the AR Tennis court.

Fig. 9: AR Tennis court
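To make the marker-tracking step concrete, here is a minimal sketch of camera pose estimation from a printed square marker, in the spirit of the AR Tool Kit pipeline described above. It uses OpenCV's ArUco module rather than AR Tool Kit itself, assumes the classic aruco API from opencv-contrib-python (pre-4.7), and the camera intrinsics and marker size are illustrative placeholders, not values from the original application:

```python
import cv2
import numpy as np

# Illustrative camera intrinsics -- a real application would calibrate these.
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)   # assume negligible lens distortion for the sketch
marker_side = 0.05          # marker edge length in metres (assumption)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def camera_pose_from_frame(frame):
    """Return (rvec, tvec) of the camera relative to the first detected marker."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is None:
        return None  # no marker in view, nothing to overlay
    # 3D coordinates of the marker corners in the marker's own frame.
    half = marker_side / 2.0
    object_pts = np.array([[-half, half, 0], [half, half, 0],
                           [half, -half, 0], [-half, -half, 0]], dtype=np.float32)
    image_pts = corners[0].reshape(4, 2).astype(np.float32)
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None
```

The returned rotation and translation are exactly what a renderer needs in order to draw the virtual tennis court so that it appears anchored to the paper on the table.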

E. Pokemon Go (created in 2016, Nintendo and Niantic)

Niantic and Nintendo launched Pokemon Go, the hugely popular location-based AR game that put AR on the mainstream map. In Pokemon Go, players traverse the physical world following a digital map, searching for cartoon creatures that surface at random. People look through their smartphone cameras to find Pokemon; when an animated creature appears, they toss Pokeballs at it until it is subdued. A minimal sketch of the location-based encounter check appears after the figure below.

Fig. 10: Augmented reality in Pokemon Go
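As an illustration of the location-based mechanic, the following minimal Python sketch checks whether a player is close enough to a spawned creature for it to become visible. The 40 m radius and the function names are illustrative assumptions, not Niantic's actual implementation:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    R = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def creature_visible(player, creature, radius_m=40.0):
    """Hypothetical encounter rule: visible once the player is within radius_m."""
    return haversine_m(*player, *creature) <= radius_m

# Example: a creature roughly 30 m east of the player is within the radius.
print(creature_visible((52.0, 4.0), (52.0, 4.00044)))  # -> True
```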


Hospitality and Medical Marijuana

Currently 28 states and Washington, DC have legal medical marijuana available: more than half of the United States. Even so, employment law and the hospitality industry do not have many guidelines to pave the way in legal terms. With marijuana illegal federally and legal at the state level, there is a great deal of confusion. Much of the reason is that there is very little case law to reference on either the employment or the business side of the industry. Industry professionals have many questions. Here are a few: Are employers legally able to terminate employees based on positive test results? Do employees with disabilities have different protections? How are employees under federal contract governed? What protections do guests and customers have? Does the ADA cover patients as guests?

Whether an employer may legally fire you for failing an employee drug test because you used medical marijuana depends on whether your state has passed a medical marijuana law with employment discrimination protections. In Delaware, like other states with similar legislation, the Medical Marijuana Act decriminalizes the use of medical marijuana in an attempt "to protect patients with debilitating medical conditions, as well as their physicians and providers, from arrest and prosecution, criminal and other penalties, and property forfeiture if such patients engage in the medical use of marijuana." Like only a few statutes, the Act includes provisions that provide additional protections to employees. The Act prevents employers from discriminating against an employee "in hiring, termination, or any term or condition of employment, or otherwise penalizing a person" for his "status as a cardholder" or because of a "positive drug test for marijuana components or metabolites." In the majority of states, an employer can impose discipline, including termination, for a positive marijuana drug test.

While the Americans with Disabilities Act of 1990 protects most employees with serious medical conditions from discrimination, it does not protect their use of medical marijuana. Again, depending on your state's statutes, very few jurisdictions offer explicit protections for patients. The minority that do prohibit employers from discriminating against an employee who has tested positive for marijuana and is a registered medical marijuana patient, provided he or she does not have a "safety sensitive" job, such as heavy-machinery operator or airline pilot.

One of the few cases involving medical marijuana is James v. City of Costa Mesa, in which the Ninth Circuit held that the ADA does not protect individuals who claim discrimination because of medical marijuana use. The court reasoned that the ADA excludes from coverage disabilities based on illegal drug use. Because marijuana is illegal under federal law, medical marijuana use is not covered under the ADA, even if the state has legalized the medical use of marijuana. While it is not unlawful to discriminate against an applicant or employee based on their marijuana use, it is still unlawful to discriminate against an applicant or employee for an underlying disability. Employers should use caution in handling these situations, to minimize the risk that any adverse employment decision appears to be based on knowledge of an underlying disability rather than on knowledge of illegal marijuana use.

Employees under federal government contract include employees of federal agencies and employees of private businesses that hold federal licenses and are federally regulated. For casinos in Nevada, for example, the Gaming Control Board issued an industry notice telling gaming license holders, and even prospective license applicants, to stay far away from medical marijuana. Under federal law, distribution, possession and sale of the drug is a crime, and the Control Board based its ruling on federal law. For all the aspiring and current spies, diplomats and F.B.I. agents living in states that have liberalized marijuana laws, the federal government has a stern warning: federal laws outlawing its use, and rules that make it a firing offense for government workers, have remained rigid. Recruiters for federal agencies arrive on university campuses with the sobering message that marijuana use will not be tolerated.

In addition to employee use of marijuana, businesses are also concerned about use by guests. The simplest approach for innkeepers is to treat marijuana users like tobacco users: if tobacco is prohibited in guest rooms and public areas, so is marijuana. If a guest's actions, such as smoking marijuana, disrupt other guests, the situation should be treated as any other disturbance: ask the patron to cease the disruptive behavior and, if they do not comply, contact the authorities and evict them. With new legislation, each business should familiarize itself with the statutes of its state; there are no specifics until more case law presents itself. In Colorado, it is up to the discretion of the hotel whether it allows marijuana to be consumed in its smoking rooms, and Denver city laws prohibit marijuana consumption on hotel balconies visible from any public place.

There is no consensus on how to handle medical marijuana and the ADA at hotels, because there is no known case law on this issue; it has not presented itself to hoteliers on a wide scale. To avoid potential lawsuits, try to place marijuana users in smoking rooms when available. If you are a completely non-smoking hotel, the marijuana user should be treated as a cigarette smoker: they must leave the building and be accommodated in a designated area, no differently from a tobacco user. If they do smoke in the room and you have the proper notification and signage, you may charge them the normal hotel smoking fee. When convenient, adding medical marijuana to this signage is recommended.

State laws currently provide the greatest protections to marijuana users, although courts that have considered such issues have generally found in favor of an employer's right to act against an employee who tests positive for marijuana. The majority of employees, even in states with the most protective medical marijuana laws, will have to choose between using medical marijuana and keeping their jobs. Also, remember that every hotel has the right to ask a guest who is smoking marijuana to stop unless the guest has a legitimate prescription from a licensed physician; if the guest cannot provide the paperwork, you can prevent them from smoking, and if they refuse you may involve the authorities. Hopefully in the near future there will be more case law on medical marijuana and its role in employment, business and the ADA throughout the hospitality industry.


Human rights in Great Britain

The first assessment for Care in Contemporary Society is to carry out research into human rights in Great Britain. The research includes providing a timeline of important progress in human rights legislation, stating the critical principles in the operation of the human rights approach, and explaining the usefulness of supporting a human rights approach to care. Human rights are the basic rights to which everyone is entitled, no matter where they are from, what religion they believe in or who they are. Human rights apply to all, to ensure everyone is treated equally. Human rights have been built up over many years: many laws have been put in place to ensure everyone is treated equally, yet not everyone follows them. This has had to be changed by fighting for human rights, which is still an ongoing process.

A timeline of important process in Human Rights Legislation:

The Representation of the People Act 1918: This was the beginning of female suffrage in Great Britain. However, it did not give all women the right to vote: only women aged 30 and over who owned property could vote, whereas men aged 21 and over could. Women therefore remained at a disadvantage in their voting rights. Ten years later, in 1928, women were given voting rights equal to men's.

Human Rights Commission - The Universal Declaration of Human Rights 1948: This was the first worldwide human rights agreement, encompassing legal systems and cultural backgrounds from all over the world. The Universal Declaration is described as "the foremost statement of the rights and freedoms of all human beings". It was adopted by the General Assembly of the United Nations in 1948, after the horrendous events of the Second World War, to ensure everyone was entitled to basic human rights.

[https://www.equalityhumanrights.com/en/what-are-human-rights/what-universal-declaration-human-rights)]

The Human Rights Act 1998: This act states that

"Human rights are based on important principles like dignity, fairness, respect and equality. They protect you in your everyday life regardless of who you are, where you live and how you choose to live your life."

[https://www.citizensadvice.org.uk/law-and-rights/civil-rights/human-rights/what-are-human-rights/]

This means that the human rights act is put in place to ensure all humans are treated equally no matter what. Examples of human rights include:

The right to freedom of religion and belief
The right to respect for private and family life
Your right to a fair trial

Equality and Human Rights Commission 2007 – This commission fights against discrimination in Scotland. It states that everyone should be “treated fairly and with dignity” [https://www.equalityhumanrights.com/en/commission-scotland/about-commission-scotland]. Discrimination still manages to take place; however, having this commission in place allows those who are being discriminated against to fight for their Human Rights.

Stating the critical principles in the operation of the Human Rights approach:

The Human Rights approach is about ensuring people know and can claim their rights. Everyone must be aware that a person is held responsible for ensuring the rights of an individual are provided and met. A Human Rights approach states that not only the people who use a service should have their rights upheld, but also the other individuals around them. For example, when a family comes into a hotel, the family should know and be able to claim their rights, but the staff in the hotel also have rights of their own.

The PANEL principles are the critical principles required in the operation of the Human Rights approach:

Participation – Everyone is able to participate in any decisions that could influence their Human Rights.
Accountability – To ensure effective monitoring, and to ensure that breaking or failing to observe human rights obligations leads to redress.
Non-discrimination and equality – All forms of discrimination should be abolished.
Empowerment – Everyone should be aware of their rights.
Legality – Everyone should be aware that rights are legally enforceable.

The usefulness of supporting a Human Rights approach to care:

The usefulness of supporting a Human Rights approach to care is important, as it may be harder for people in care to always be up to date and know their rights. For example, people with dementia have the exact same human rights as every other person in this world, yet due to the illness they face many obstacles in realising every right they are entitled to. A group of Scottish parliamentary organisations representing the interests of people with dementia has been brought together to work towards supporting people with dementia and ensuring their rights are recognised and respected. Over time, dementia reduces the individual’s capacity to make choices in their everyday life. For example, someone with dementia may need someone to support them with their banking; however, this has to be someone the individual trusts, so that the individual’s rights are not breached. Therefore, it is important to adopt a Human Rights approach to care to ensure safety and the right to a personal and private life.

Conclusion:

Overall, Human Rights are important for everyone, to ensure each and every individual is treated equally and fairly. The fight for Human Rights has been going on for an extremely long time, and it continues wherever rights are not being upheld for every individual. Therefore, in years to come, everyone should have their Human Rights set in place.

Reference sheet:

United Nations – Universal Declaration of Human Rights [online] Available at: http://www.un.org/en/universal-declaration-human-rights/ [Accessed on 17th October 2016]
About the commission in Scotland [online] Available at: https://www.equalityhumanrights.com/en/commission-scotland/about-commission-scotland [Accessed on 17th October 2016]
Votes for Victorian Women [online] Available at: http://www.bbc.co.uk/programmes/b01r9c9r [Accessed on 17th October 2016]
Human Rights [online] Available at: https://www.citizensadvice.org.uk/law-and-rights/civil-rights/human-rights/ [Accessed on 17th October 2016]
Scottish Human Rights Commission [online] Available at: http://www.scottishhumanrights.com/application/resources/documents/SHRC_HRBA_MHS_leaflet.pdf [Accessed on 20th October 2016]
Care about Rights – What is a Human Rights Approach? [online] Available at: http://www.scottishhumanrights.com/careaboutrights/whatisahumanrightsbasedapproach [Accessed on 20th October 2016]
Developing the over-arching principles and NCS. What is meant by a “Human Rights Based Approach?” [online] Available at: http://www.newcarestandards.scot/wp-content/uploads/2015/10/Human-Rights-Based-Approach.pdf [Accessed on 20th October 2016]
Charter of Rights for People with Dementia and their Carers in Scotland [online] Available at: http://www.scottishhumanrights.com/application/resources/documents/FINALCharterofRights.pdf [Accessed on 21st October 2016]

TG/Travel UK organisational structure

“The term organizational structure refers to the formal configuration between individuals and groups regarding the allocation of tasks, responsibilities, and authority within the organization” (Galbraith, 1987; Greenberg, 2011).

TG has a divisional organisational structure that is further split into functional structures. This is based on the assumption that Travel UK is representative of the remaining parts of TG in terms of structure. Travel UK has divisions (Airline, Commercial and Customer Operations) which have individual functional departments assigned to them. Divisional structures are organised by products or locations rather than functions (sales, finance, etc.). Divisional structures are decentralised, giving authority to the managers of the individual divisions to allow well-informed decisions to be made by the specialised manager overseeing the division (Fouraker and Stopford, 1968). Large organisations face complex issues due to global markets, multiple interdependent business activities and cooperation with other organisations. These factors require complex decisions to be taken quickly (Mihm, Loch, Wilkinson and Huberman, 2010). This, combined with the rules and processes in place at Travel UK to guide managers in making such decisions objectively, forms a relatively secure foundation for the volatile environments in which TG operates. The divisional structure allows quick adjustment to factors impacting the operation. “Many corporations have developed an organisational structure consisting of relatively autonomous business units to achieve clear focus of skills and effort towards different markets, plus clear accountability of managers” (Hewitt, 2003), providing further evidence that the divisional structure is the most appropriate for a globally operating TG. Another advantage is that the divisional structure’s autonomous nature tests and trains the division heads’ capabilities, which enables the development of general managers (Fouraker and Stopford, 1968). A disadvantage of this structure is that knowledge can be contained within each unit, limiting the sharing of expertise across the organisation (Steiger, Hammou and Galib, 2014). Each operational division having its own sphere of competence makes it likely that this is the case within TG.

Each division within TG/Travel UK has its own business support units divided by function (sales, HR, etc.). Functional structures give each local business unit direct access to areas of expertise, but this type of structure can foster a ‘silo mentality’ in which all the departments work for themselves and do not communicate with each other (Connor, McFadden and McLean, 2012). TG has made the decision to merge some functions where duplication existed. According to Mintzberg, many other organisations have made the decision to accept functional duplication to make the divisions less dependent on one another (Steiger, Hammou and Galib, 2014), which gives greater market robustness compared to the shared services model (merged functional departments).

TG’s structure is also impacted by employee relations (ER) aspects. “ER is the process of managing both the individual and the group in terms of contracts, regulations and collective behaviour…” (Purcell, 2012). This includes the differences in conditions of employment among and within TG’s business units due to mergers with other organisations. One of the main reasons for unsuccessful mergers is poor integration. One of the motivations to harmonise conditions of employment following mergers (vertical integration) is that work of the same value needs to be compensated in a similar way, in accordance with the Equality Act 2010. Not harmonising terms and conditions of employment may increase the risk of legal claims under equal pay, discrimination and employment protection laws (Suff and Reilly, 2007). Towers and Perrin (2003) explain that “disparate benefits and compensation policies need to be integrated to align the company’s employees with the senior management team’s business objectives”. The delay in aligning terms and conditions of employment may result in increased staff turnover (Suff and Reilly, 2007). Mergers provide an opportunity to revise practices and policies; not revising such policies and practices can have a negative impact on organisational capability (PWC, 2016). If, following a merger, ways of working are different, it can create frustration and anxiety, leading to additional turnover (Stafford and Miles, 2013). Further complexity is added by the fact that different conditions of employment and working practices exist not only in job descriptions but also in agreements that the trade unions have negotiated with TG. Such differences are described as “monumentally difficult problems”, as they are covered under the UK’s Transfer of Undertakings (Protection of Employment) Regulations 2006 (TUPE) and “involve some forceful negotiating from any unions” (Levinson, 2014). TUPE is designed to protect employment rights for workers who are being taken over by a different organisation. Not following the process can incur compensation and legal fees, as in the case of the Ministry of Defence, which paid £5,000,000 in an out-of-court settlement to 1,600 Unite (union) members (Stevens, 2014).

Changing such practices would therefore be complicated due to the seemingly inadequate cooperation between TG and the trade unions. Whilst the organisation holds monthly joint consultative meetings with the unions, the relationship between them appears to be strained. This is evident in Travel UK’s fear of strikes when making changes to cabin crew hours and working practices. An example of the impact that a general lack of trust between unions and organisations can have is that of British Airways and the union Unite in 2010/2011. In 18 months the cabin crew went on strike four times over issues such as working conditions, redundancies and benefits, resulting in revenue loss and customer complaints (BBC, 2011).

TG’s current divisional structure is only appropriate for the future if TG eliminates all factors that could hinder it from reaching its strategic goals. Some of them are:

Customer satisfaction: The strained relationship with the trade unions can result in recurring industrial action, which can delay or stop services, causing customer complaints.
Profit: The underlying fear of strikes and differences in terms and conditions of employment create an environment in which TG cannot react quickly to changes needed to remain profitable.
Staff engagement: The fractured conditions of employment and working practices among its business units can have a negative impact on engagement levels.
Sustainability: Different work practices among the organisation make it difficult to reach a common goal.

If these barriers are not removed, the structure of TG is not appropriate for the future. This statement is supported by the CIPD’s 18 key points for high performance and high commitment in workplaces. Two of them are outlined as commitment to single status for all employees and holiday harmonisation (Tamkin, 2004), both of which are currently not met within TG.

Importance of Human Rights approach to care

The Lunacy Commission was set up following the 1845 Lunacy Act. This government-appointed group of lawyers and doctors oversaw the conditions of the asylums in England and Wales. Appointed commissioners would visit the hospitals twice a year; their main objective was to ensure that the hospitals ran safely and efficiently, particularly in regard to treatments. The reports raised concerns, particularly about patients being certified insane, suicide prevention and excessive force when restraining or subduing those in the asylum. The reports also highlighted that a good amount of furniture had been provided for the use of inmates.

The government took over the building of asylums, eliminating private enterprise.

The government passed legislation to regulate activities in 1845 and 1853, and hospitals were also registered for the first time. The Lunacy Act of 1890 was passed in response to public concerns that some patients were being wrongfully detained, particularly women, who had very few rights. Women who were wealthy could fall victim to financial abuse through private arrangements: a husband could have his wife certified insane and locked away, clearing the way for himself to inherit any financial benefits. Although great strides were made in protecting the rights of inmates, privacy was minimal; there were usually 50 inmates to any one ward, and wings were closed to patients, with each ward placed under lock and key.

1961 is considered the year in which attitudes towards institutionalized mental healthcare changed. Enoch Powell had been appointed health secretary in 1960 and was given the task of reforming the nation’s crumbling health services, including the mental hospitals. During the Conservative Party conference in March 1961, Powell criticized the asylums.

He spoke of the transition to community-based care and of the horror that asylums inflicted on patients. Powell proposed a radical vision of community health care, not only reducing costs to the state (he envisioned 15,000 fewer psychiatrists) but also reducing psychiatric beds by 50%. His next point concerned those he described as the ‘sub-normal’, and the requirement to assess their needs and develop a more concise understanding of the issues faced in managing and caring for those individuals. Powell also advocated community services and better co-operation between local authorities and medical staff. He believed services should be more flexible in what they offered, and more person-centered, fitting services to suit the individual’s needs and rights.

Under the 1990 National Health Service and Community Care Act, any adult aged 18 or over who is eligible for and requires services from their local authority has the right to have their needs assessed and to be fully involved in the arrangement of services. Advocates can also play a vital role in ensuring their opinion is respected and that any plan is implemented to enable individuals to live as independently as possible. The Community Care Act of 1990 emphasized the importance of enabling individuals and of tailoring support and services around their needs. Assessments are reviewed every year, unless there is a change of circumstances or the individual or local authority feels another review would be beneficial.

Importance of Human Rights approach to care

The human rights approach within care allows support workers to “realize the potential” of service users. It highlights the importance of holistic care: taking a holistic approach means considering the emotional, physical and spiritual needs of an individual. Although a support worker may care for the physical needs of a service user with a physical disability, such as assisting with washing or preparing meals, a human rights approach advocates taking into consideration dietary requirements, such as the individual being a vegetarian, or religious traditions, such as a Muslim or Jewish service user not consuming pork. The service provider would also not discriminate against any service user because of their religion, sexual orientation or any criminal convictions. This allows service users to feel comfortable with the services available and to raise complaints or seek support without fear of reprisals or intimidation, which promotes not only the individual’s dignity but also their human rights.

Underlying principles of Human Rights

The underlying principles of human rights, particularly in relation to care, can be broken down into the acronym PANEL.

Participation: Service users should, as far as possible, participate in the review of their care, especially when their needs are being assessed and services are being allocated, so that the appropriate support the individual requires can be offered.
Accountability: Services are held accountable by government-appointed agencies such as the Care Inspectorate in Scotland. Set up by the Scottish Government, and accountable to ministers, it works to assure and protect everyone who uses these services. Using the person-centered approach to care and having a holistic view of care allows service users to understand what is expected when they receive support and services.
Non-Discrimination: The service provider would not discriminate against any service user because of their religion, sexual orientation or any criminal convictions.
Empowerment: Services should empower the individual to make their own choices, such as how they dress and planning activities that suit their individual tastes, emphasizing individuality and choice.
Legality: Service providers and employees follow the law and strictly enforce recommendations ensuring the safety of service users and employees, including reporting any cases of assault, abuse or other offences.

My Practice’s Human Rights approach

Enablement: I assist service users to set and achieve their goals, but I do not do it for them; I encourage them to fulfil their goals themselves with the support needed. It is important that the care worker works in partnership with a range of integrated services, such as occupational therapists, to assist in meeting the service user’s needs. I try to uphold the service user’s independence and encourage them to achieve their goals, ensuring that I encourage the service user whilst supporting them rather than doing things for them.

Non-Discrimination: Service users have the right to live in an environment free from harassment and discrimination, so it is important that the care worker considers all factors so that their spiritual, cultural and religious needs are met. Service users have the right to complain, without being discriminated against, if they have not received the care to which they are entitled or if they have been discriminated against. I aim to treat service users with equality and to respect diversity. I ensure that I do not discriminate against any of the service users, and I embrace the diversity of the service users’ disabilities, sexual orientations and religious beliefs, focusing on their individual needs rather than their lifestyle.

How did Watergate deepen the mistrust in the office of the President?

On August 9, 1974, Richard Nixon, the 37th President of the United States of America, resigned from his executive post. Nixon was, and still is, the only US President ever to resign. The Watergate scandal had brought Nixon’s second term to an abrupt end and destroyed any hope of retaining some form of respectability and honor among not only Americans but citizens of the world. Yet questions about Watergate still remain. One in particular is whether the scandal exposed a systemic problem requiring structural resolutions, or whether it was the unfortunate combination of a poor president and his unethical advisors. Essentially, how did Watergate deepen the mistrust in the office of the President, and in what ways did this affect America?

Statistically, Americans are profoundly unhappy with their government. While the majority of Americans feel proud to be American, in the 1990s never more than 40% of Americans said that they trusted their government most of the time or just about always (McKay, Houghton, & Wroe, 2002, p.20). An evident majority think that politicians do not act in the best interest of the people, and believe that government is controlled by corporate money. During the Watergate scandal, Americans were shocked by the crimes of the Nixon presidency. Investigations by the press and Congress had exposed previously unimaginable levels of corruption and conspiracy in the executive branch. Following Watergate, the public’s faith in government was shaken; since the assassination of President John F. Kennedy, the trust placed in government had been in decline. The assassination had stolen the remainder of President Kennedy’s life and deprived him of an impartial, balanced historical judgement. Watergate did the same to Nixon, taking away the same opportunity for a fair assessment, although Nixon himself had brought it about. In order to fully assess how Watergate damaged the trust placed in President Nixon, his whole presidency needs to be evaluated: domestic policy, foreign policy, and whether Watergate was really to blame for this mistrust, or whether the mistrust was already there and Watergate had merely agitated it.

In Monica Crowley’s 1996 book Nixon off the Record, President Nixon brings up some points for consideration which not only challenge Watergate but question the actual impact of the scandal: ‘As President, until Watergate, my approval polls were never really below 50%. Neither were Eisenhower’s’ (Crowley, 1996, p.115). The significance of this is that Nixon compares himself to Eisenhower, one of the most highly regarded Presidents in modern history. Nixon’s domestic policy involves not only his own policies but also those of who came before him; President Lyndon B. Johnson’s Great Society was a war on poverty, racial injustice and gender inequality. Some of the policies were carried on by Johnson as part of President Kennedy’s New Frontier legacy. The Civil Rights Bill that JFK promised to sign was passed into law: the Civil Rights Act banned discrimination based on race and gender in employment and ended segregation in all public facilities. Yet African Americans all over the country were still denied protection from law enforcement, access to public facilities, and fair financial prospects. Nixon saw this as an unjust abuse of the system, calling it both unfair to African Americans and a waste of human resources which would benefit America’s development. Johnson also signed the Economic Opportunity Act of 1964, the law that created the Office of Economic Opportunity, aimed at attacking the roots of American poverty, although this was dismantled under Nixon and Ford, who reallocated poverty programs to other government departments. Johnson’s popularity had dropped due to Vietnam; members of his own party were seeking the nomination for President, and in March 1968 he announced to the people of the United States that he would not seek a second term. Despite criticism, under LBJ the Great Society did impact many of the poorer Americans the program was aimed at. The share of Americans living in poverty fell from 26 percent (1967) to 16 percent (2012); ‘Government action is literally the only reason we have less poverty in 2012 than we did in 1967’ (Matthews, 2016). The Great Society was, however, deemed ahead of its time; the combination of this and the Vietnam War created massive budget deficits and thus, as Howard Zinn neatly puts it, Johnson’s War on Poverty in the 60s became a victim of the war in Vietnam (Zinn, 2005, p.601). Nixon’s major economic objective was to decrease inflation, and to do so he had to effectively end the Vietnam War. This he did not do; in fact, he expanded it, despite announcing on December 8th 1969 that the war was soon to reach ‘a conclusion as a result of the plan that we have instituted’ (History.com, 2009). While ending the war was not something Nixon could do instantly, the U.S. economy continued to fluctuate helplessly during 1970, which in turn resulted in a very poor performance by the Republican Party in the midterm elections: the Democrats held major seats and were heavily in control throughout Nixon’s presidency.

His presidency was not completely overshadowed by Watergate, although it has stained his legacy. Looking beneath the surface of Nixon’s administration, his domestic policy clearly impacted America’s poorest: total domestic spending by the federal government rose from 10.3% of the gross national product to 13.7% in the six years he was President. Granted, a portion of the increased domestic spending under Nixon was due to the delay in starting Great Society initiatives, but much of it was due to Nixon’s own plans. The New Federalism agenda essentially pointed out that all others before Nixon had failed to impact, let alone solve, both social and growing urban problems. His New Federalism has been credited as a highlight of his presidency: “Nixon’s New Federalism provided incentives for the poor to work” (Nathan, 1996). Despite his efforts, Nixon could not take away the American people’s feeling that the American Dream was failing following the assassinations of the major civil rights figures: John F. Kennedy, Martin Luther King, Malcolm X and Robert F. Kennedy, all within the space of five years. On top of this, the process of desegregation was also taking place in many southern states, which created an immense amount of tension between minority groups and whites. Although Nixon was for desegregation, many traditional right-wing Republicans in the southern states would have felt very differently about this matter and were thus alienated by the Nixon administration.

Some hold the opinion that it was not so much Nixon who created or perpetuated this aura of mistrust in his office as the government agencies that served him. It is believed that a number of federal services contributed to this mistrust. The CIA was secretive and faceless in a sense, but the FBI took on a more public role, taking credit for its actions and influencing the press on numerous occasions. FBI Director J. Edgar Hoover morphed the FBI into what Richard Gid Powers called ‘one of the greatest publicity generating machines the country has ever seen’ (Powers, 1983, p.95). The share of Americans having a favourable opinion of the FBI fell from 84% (1965) to 52% (1973), and fell again to 37% in 1975. On top of this, the FBI’s credibility was also damaged by Watergate: L. Patrick Gray, Nixon’s nominee after Hoover died, destroyed critical Watergate evidence. The Watergate investigation had revealed that all too often Nixon had used the FBI for political purposes. Kathryn S. Olmsted narrates how federal agencies abused their privilege: Watergate did what the Bay of Pigs had not; ‘it had undermined the consensus of trust in Washington which was a truer source of the agency’s strength than its legal charter’ (Olmsted, 1996, p.15). It showed that ‘national security’ claims could and would cover up activities which were nothing but illegal. In brief, Nixon’s New Federalism was not new; throughout his political career he opposed big government programmes and had fought to restore more power to state and local level establishments. President Nixon did achieve a number of things; the restoration of power to lower-level government, away from federal jurisdiction, is one example. A number of critics argue that although his domestic policy benefited minorities, the poor and women, his New Federalism failed to outlast his administration as he fought a losing battle to preserve his presidency following Watergate.

Foreign policy is where Nixon’s presidency becomes more plausible as a cause of mistrust in his office. During his time in office, he and certain federal agencies covered up a number of major mistakes made by the government. The Tonkin incident is essentially where it began: on 2 August 1964, the United States claimed that North Vietnamese forces had twice attacked American destroyers in the Gulf of Tonkin. Known today as the Gulf of Tonkin incident, this led to open war between North Vietnam and the United States, and it furthermore foreshadowed the major escalation of the Vietnam War in South Vietnam. The incident brought congressional support for the Gulf of Tonkin Resolution, passed unanimously in the House and with only two opposing votes in the Senate. This gave Johnson the power to take military action as he saw fit in Southeast Asia. By 1968, there were more than 500,000 American troops in South Vietnam (Zinn, 2005, p.477). This resolution still applied when Nixon was sworn into office. Nixon soon introduced U.S. troop withdrawals but also authorized invasions of Laos and Cambodia, announcing the ground invasion to the American public on April 30, 1970. He expanded the Vietnam War at a time that called for its end; this led to widespread protests across America, and his popularity among younger Americans plummeted. Not only did this cause disturbances, it was considered a military failure, and Congress resolved that Nixon could not, and should not, use American troops to extend the war without congressional approval. Historian Harry Howe Ransom states that ‘[Nothing in public hearings] suggests that Congress intended to create, or knew it was creating, an agency for paramilitary operations’ when accepting the Gulf of Tonkin Resolution (Howe Ransom, 1975, p.155-156), suggesting that it was Nixon’s own doing that created this mistrust where the Vietnam War was concerned. Nixon, though, was not to blame for the entry into the Vietnam War: LBJ took advantage of a compliant Congress to quietly increase American involvement in Vietnam without telling the people what he was doing. LBJ’s time in office, then, saw the emergence of ‘presidential imperialism’.

Nixon also introduced new trends in diplomatic international relations for America. Nixon argued that the communist world had two rival powers, the Soviet Union and China, and he and close advisor Henry Kissinger exploited the relationship between the two to benefit America. During the 1970s, Soviet Premier Leonid Brezhnev agreed to import American wheat into the Soviet Union, creating trade and improving the economy. Nixon surprised the nation when he announced that he would travel to communist China in February 1972 and meet with Mao Zedong. Following this visit, the United States dropped its opposition to Chinese entry into the United Nations, and the groundwork was laid for diplomatic relations. Just as anticipated, this caused concern in the Soviet Union. Nixon hoped to establish détente, and in May 1972 he made an equally significant visit to Moscow to support a nuclear arms agreement; the United States and the Soviet Union pledged to constrain the number of intercontinental ballistic missiles each would manufacture. It does seem that Nixon and Kissinger were playing with fire, simultaneously establishing relationships with both China and the USSR, but ultimately it was a tactical move by the duo. From a foreign policy perspective, it was wise to establish foundations for a diplomatic relationship. In terms of domestic policy, however, the American people were mortified: Nixon had built his reputation as an anti-communist, and this could easily be seen as nothing more than horrible irony; it was believed that Nixon was inspiring left-wing enthusiasts to form and act on these international relations. Furthermore, President Nixon is held responsible for the My Lai massacre cover-up. On March 16th 1968, a squad of US soldiers mercilessly killed between 200 and 500 unarmed civilians at My Lai, a small village near the north coast of southern Vietnam. My Lai was successfully covered up by US commanding officials in Vietnam for well over a year. Nixon, even prior to Watergate, was thus the main culprit in yet another crime, in this case a crime against humanity, one that could have led to his impeachment. In hindsight it is now apparent that the President initiated the corrupt actions against the trials of those found guilty at My Lai, so that no US soldier would be convicted of war crimes (History.com, 2010).

Finally, we reach the defining event of Nixon’s presidency: Watergate. Just as Clinton is associated with Lewinsky, Kennedy with Oswald, Lincoln with slavery and Obama with Bin Laden, so Nixon is associated with Watergate. In his second term Nixon became ruthless with his domestic opponents: he withheld grants and funding appropriated by Congress and often sought to withhold information from Congress; he was denied an injunction to prevent the publication of the Pentagon Papers and later, during the Watergate crisis, was forced to release tapes of recordings from the White House (Mervin, 1992, p.99). On top of this, he allowed secret missions to spy on his political opponents, including tapping phones and harassing the liberal Brookings Institution. This is how the Watergate scandal occurred: it was initiated by a break-in at the Democratic Party’s headquarters and followed by a presidential cover-up, eventually forcing Nixon to resign in 1974, before he could be prosecuted. The severity of Watergate has been played down in the aftermath; Nixon himself justified it in the worst possible way, arguing that no one in government made financial profit from Watergate (Crowley, 1996, p.215). In this vein, Nixon compares his behaviour to that of previous presidents such as JFK and even presidents after him, like Clinton. He is very critical of both executives, as he feels Kennedy was just as corrupt during the Bay of Pigs affair; principally, JFK had not been in office long enough for anything to take place. The Cuban Missile Crisis was corrupted by Kennedy’s administration, and the released transcripts were sanitized and passages removed, very similar to what Nixon had done with the Watergate tapes. Clinton was also a sore topic for Nixon, as he had been able to get away with Whitewater. In later years, Nixon felt that he was unfairly penalised for Watergate while Clinton was able to evade the repercussions of Whitewater: ‘Watergate was wrong; Whitewater is wrong. I paid the price; Clinton should pay the price. Our people shouldn’t let this issue go down. They shouldn’t let it sink.’ (Crowley, 1996, p.219). This was a reference to those who would not let Nixon forget Watergate and what he had caused. Nixon’s final comments on Clinton were that Whitewater should be pursued and Clinton held responsible to whatever extent necessary; it is easy to see how Nixon resented Clinton for his indiscretions, many of whose consequences he was able to evade. Watergate had shattered the liberal consensus: Americans had learned of the covert operations and dirty tricks that their secret warriors had carried out at the height of the Cold War. Following this, the American people learned about the murderous plots, drug testing and harassment of dissidents that had been carried out in their name; they had been taught a very diluted version of the world. The intelligence investigations forced Americans to face difficult questions regarding the competence of their intelligence agencies, the executive office of government, and the tensions between secrecy and democracy. The many inquiries asked them to doubt the decency of Americans they believed to be heroes, such as J. Edgar Hoover and John F. Kennedy, and whether their nation truly adhered to its professed ideals. It can ultimately be determined that the failures of the American political system, real or perceived, have undermined the trust of the American people.

In conclusion, while it is likely that events in every single presidency have added to the suspicion of that office, Watergate had a significant impact on American trust in government. Most Americans are likely to cite factors like Vietnam and Watergate when considering Nixon, as both fit well into the decline of trust and the increasingly negative perceptions of American political leaders. However, it would be unfair to put too much emphasis on the incompetence and dishonesty of various presidents and members of Congress. Many believed that Ford would restore faith in the office of President and trust in the government. Ford was everything Nixon was not, honest and open, and he received a 71% approval rating shortly after he was sworn into office. In his inaugural address, incoming President Gerald R. Ford declared, “Our long national nightmare is over.” A month later, however, he granted Richard Nixon a full pardon; by doing this Ford damaged American optimism and showed that he had more loyalty to Nixon and his party than to the American people. This reinforced the growing trend of cynicism about the office of the President even after Nixon.

Sexual harassment of women at the workplace

Introduction

Sexual harassment of women at the workplace is a type of violence against women on the basis of their gender. It violates a woman’s self-esteem, self-respect and dignity and takes away her basic human as well as constitutional rights. Sexual harassment is not a new phenomenon, and rapidly changing workplace equations have brought this hidden reality to the surface. Sexual harassment at the workplace has become ubiquitous in every part of the world, and India is no exception.

Like any other sex-based crime, sexual harassment of women is about power relationships, domination and control. It is not limited to what most people commonly think of, such as verbal comments, inappropriate touching or sexual assault; it takes myriad ways and forms. Moreover, new forms and variants are being introduced every other day in this dynamic technological era. It may include derogatory looks, gestures, indecent proposals, writings or the display of sexually graphic pictures, SMS or MMS messages, comments about one’s dress, body or behaviour, and any other unwelcome or objectionable remark or inappropriate conduct.

The ultimate aim of the makers of the Constitution of India was to have a welfare state and an egalitarian society projecting the aims and aspirations of the people of India. The Preamble, Articles 14, 15, 16, 19 and 21, the Directive Principles of State Policy and many other articles have secured social justice for women, thus working to prevent sexual crimes.

Before the Sexual Harassment of Women at Workplace (Prevention, Prohibition and Redressal) Act, 2013 came into force, legislation such as the Indian Penal Code, 1860, the Code of Criminal Procedure, 1973 and the Indian Evidence Act, 1872 provided protection to women. Various international conventions to which India is a signatory, and which it has ratified, also filled the gap until 2013.

The recent Sexual Harassment Act has its roots in the ghastly rape of a community worker, Bhanwari Devi, in rural Rajasthan. This incident and the humiliation that followed made the apathy of the system evident. Several women’s groups filed a Public Interest Litigation in the Supreme Court, on the basis of which the Vishakha Guidelines were formulated to prohibit the sexual harassment of women at the workplace. Various other judicial pronouncements also paved the way for the formulation of the legislation of 2013.

Sexual harassment is often about the claimed inferiority of women. The victim is often confused, embarrassed or scared. She may be clueless about whom to share the experience with and whom to confide in. Sexual harassment at the workplace may have serious consequences for the physical and mental well-being of women. It may also degenerate into its gravest form, that is, rape.

There should be a proper grievance mechanism at workplaces to deal with this issue. The accused should be punished without any regard to their status or position at the workplace. There should be committees comprising especially of women members, to make the victims feel comfortable. Reporting of incidents should be encouraged, and those who dare to speak up must be protected from the wrath of the employers. Employed women cannot be left at the whim and fancy of their male employers. Incidents of sexual harassment at the workplace are a stigma on our Constitution; if they are not prevented, our constitutional ideals of gender equality and social justice will never be realized.

Rationale

Sexual harassment can have a number of serious consequences for both the victim and his or her co-workers. The effects of sexual harassment vary from person to person and are often dependent on the severity and duration of the harassment. For many victims of sexual harassment, the aftermath may be more damaging than the original harassment. Effects can vary from external effects, such as retaliation, backlash or victim blaming, to internal effects, such as depression, anxiety, or feelings of shame and/or betrayal. Depending on the victim’s experience, these effects can vary from mild to severe.

The rationale behind taking this topic for the dissertation is to throw light on the various aspects of the law relating to sexual harassment, thereby helping women to better achieve their rights. Another reason for choosing this topic is to make employers aware of their liability. Lastly, our Constitution has granted us certain fundamental rights, including gender equality and social justice. There is a strong relationship between these fundamental rights and the prohibition of sexual harassment at the workplace, as sexual harassment is a form of power relationship which treats women as inferior.

Scope

Sexual harassment at the workplace results from the misuse of power, not from sexual attraction. Legal scholars and jurists have emphasized that such conduct is objectionable, as it not only interferes with the personal life of the victim but also casts a pall over the victim’s abilities.

The victims of sexual harassment may be men as well as women. This study particularly aims at the sexual harassment of women at the workplace.

The scope of this study is to pave the way for the prevention of sexual harassment at the workplace and to make women aware of their rights and complaint mechanisms. Many women are ignorant of the laws which protect them from this kind of harassment, and many employers shrug off their responsibility to help fight sexual harassment at the workplace. This study aims to discuss the constitutional provisions, the legislation and the employer’s liability in eradicating sexual harassment at the workplace.

Background

The law to check sexual harassment at the workplace, which prescribes strict punishment for the guilty, including termination of service, and similar penalties in the case of a frivolous complaint, came into effect on Monday.

The Women and Child Development (WCD) Ministry had come under attack for the delay in implementing the Sexual Harassment of Women at Workplace (Prevention, Prohibition and Redressal) Act, 2013, which was brought in after the outrage over the December 16 gang rape case, despite the fact that it had received presidential assent on April 22, 2013. Before this Act came into force, there was no special law on sexual harassment at the workplace; some legislation, such as the Indian Penal Code and the Criminal Procedure Code, dealt with this problem.

After the horrifying gang rape of Bhanwari Devi, the Vishakha Guidelines were formulated to fill the gap, and in 2013 the Act came into force. This study will thus help to trace how the laws have evolved.

Hypothesis

Like all other historical manifestations of violence, sexual harassment is embedded in the socio-economic and political context of power relations. It is produced within class, caste and social relations in which male power dominates. It is a form of sex-based discrimination which is dangerous to the well-being of women and imposes less favourable conditions upon them. This research tries to establish the hypothesis that every incident of sexual harassment of women at the workplace results in a violation of the fundamental rights of women, and that it is the employer’s liability to protect these fundamental rights.

Research Methodology

Research is a systematic attempt to push back the bounds of comprehension and to seek, beyond the horizons of our knowledge, some truth or reality. Since the scope of the study is to establish a link between fundamental rights and the right against sexual harassment at the workplace, the research methodology chosen will be doctrinal and will seek to elaborate all aspects and gain deep knowledge.

The study material will be collected through library visits, various books, periodicals, published articles, etc. Technology such as computer CDs will also be used to obtain and maintain information, and reliable Internet resources will be used to a limited extent.

Survey of existing literature

The researcher will analyze and survey the various books available on sexual harassment at the workplace, on the Indian Constitution, and on the rights of women and their protection.

Aims and Objectives

The main aims and objectives of undertaking this research can be listed as follows:

● To outline the relationship between fundamental rights and the right against sexual harassment at the workplace.

● To make women aware that the right against sexual harassment at the workplace is their fundamental and constitutional right.

● To make women aware of the laws and the policies for sexual harassment at the workplace.

● To highlight the liability of the employer to keep sexual harassment at the workplace in check, as the protection of women against sexual harassment is a constitutional and fundamental right.

● To seek solutions for the persisting problem of sexual harassment at the workplace.

● To understand the evolution of laws against sexual harassment at the workplace.

● To study the legal facets in protection of the rights of the women.

● To study the theme of the legislation and laws which have been enacted to prevent sexual harassment at the workplace.

Scope

Sexual harassment at the workplace is a serious and ever-increasing problem in India. India already has one of the lowest ratios of working women in the world. It would be disastrous if companies, unclear about sexual harassment, took the easy way out by simply rejecting women in favour of men.

It is the liability of the employer to make use of the constitutional articles and the new legislation to protect women against sexual harassment at the workplace. Sexual harassment in the workplace is one of the most complicated areas of employment law, and also one of the areas that has recently received the most press. Sexual harassment in the workplace often goes hand in hand with other illegal acts, like gender discrimination.

CHAPTERISATION

The research project is divided into the following thirteen chapters for better understanding. The chapters are further divided into sub-points so that the material collected and the study done can be compartmentalized into chapters and sub-chapters. This chapterisation will give a better idea of, and a better insight into, the project. The chapters are systematically numbered and placed one after the other.

Chapter – I – Introduction

Though the constitutional commitments of the nation to women have been translated through various planning processes, legislation, policies and programs over the last six decades, a situational analysis of the social and economic status of women does not reflect satisfactory achievements in any of the important human development indicators. This chapter will highlight the vulnerable group, and how sexual harassment at the workplace speaks more to power relationships and victimization than to sex itself. It will also show how sexual harassment is a form of sexual discrimination and subordination.

Chapter – II – Extent and Types of Sexual Harassment

This chapter will enumerate the extent of sexual harassment at the workplace, especially in India. It will also describe the types of sexual harassment at the workplace, which include: 1) quid pro quo, i.e. “this for that”, where the employer or a superior at work conditions tangible job-related consequences, such as promises of promotion or higher pay, upon obtaining sexual favours from an employee; and 2) hostile work environment, which means an abusive working environment.

Chapter – III – Analysis of Statistical Data

In this chapter statistical data will be collected from reliable sources. The data will be analyzed and proper conclusions will be arrived at. This chapter will show the numbers and may show the gravity of the problem.

Chapter – IV – Vishakha Guidelines

Until the Vishakha Guidelines, there were no civil or penal laws in India to protect women from sexual harassment at the workplace. The brutal gang rape of Bhanwari Devi gave rise to the Vishakha Guidelines, which filled the vacuum. This chapter will cover the historical background behind the Vishakha Guidelines and their important features. The Vishakha Guidelines began a new era in legislation protecting women at the workplace.

Chapter – V – Judicial pronouncements

The issue of sexual harassment at the workplace is so complex that gaining even a simple understanding of it is a tedious and tardy process. The best way to understand it is therefore to see the trends in the history of the precedents of the courts. The famous cases of Vishakha, Rupan Deol Bajaj, Shehnaz Mudhbalkhal and Medha Kotwal Lele will be covered in this chapter. The recent cases of Tarun Tejpal and Justice A. K. Ganguly will also be studied in detail. This chapter will trace the judicial inclination of the decisions.

Chapter – VI – Legal Framework in India – The Constitution

The Constitution of India gives equal protection to men and women; gender equality is one of the ideals enshrined in our Constitution, which has even positively discriminated in favour of women. In this chapter, various articles of the Constitution will be discussed, including Articles 14, 15 and 21 and many other articles which ensure protection for women. The Constitution is the mother of all laws, and hence all other legislation emanates from it. This chapter will therefore be important, as it will cover how the constitutional ideals and fundamental rights enshrined in the Constitution have given rise to various laws protecting women.

Chapter – VII – Legal Framework in India – Criminal, Labour and Other Laws

The Sexual Harassment of Women at Workplace (Prevention, Prohibition and Redressal) Act, 2013 came into force in 2013. Before this, various other criminal and other laws protected women from sexual harassment at the workplace.

In this chapter, all the other laws which have been the source of the Act of 2013 will be discussed in detail, along with how the Constitution and these other laws aided the formation of the Act.

Chapter – VIII – The Liability of the Employer

This is one of the important chapters, as it will discuss how the employer must take care to prevent incidents of sexual harassment at the workplace in his institution. It will also depict how the employer can make use of the laws and the Act of 2013 to ensure that incidents do not occur, and, if they do occur, how to tackle them, legally and otherwise.

The Employer should create a healthy environment at the workplace and the accused should be made subject to the laws irrespective of their positions in the institutions.

Chapter – IX – An Analysis of the Act

In this chapter, the Sexual Harassment of Women at the Workplace (Prevention, Prohibition and Redressal) Act, 2013 will be discussed in detail. Its features, such as its objectives, complaint procedures, inquiry, compensation and punishments, will be enumerated. The Act is the most important tool for battling sexual harassment at the workplace, and this chapter will show how it can be best used to the benefit of women at the workplace.

This Act is the result of a long struggle and wait. It should be used in such a manner that it both prevents and eradicates sexual harassment at the workplace.

Chapter – X – Sexual Harassment at the Workplace – International Scenario

India is a signatory to, and has ratified, many international conventions which give special rights and protection to women. It is obligatory on India’s part to ensure that women are protected equally. In this chapter, various international conventions, such as CEDAW, will be discussed in detail.

This chapter will also discuss how these international enactments have acted as a source for legislation in India.

Chapter – XI – Sexual Harassment at the Workplace – Prevention Policies

This Chapter will enumerate how the prevention policies must be formulated and how the policies must be best used to prevent sexual harassment at the workplace.

Chapter – XII – Conclusion

This chapter will contain conclusions drawn from the findings of the research. The conclusion is the most important part of the research, as it sums up the whole of the research and gives a good insight into it.

The protection of the rights of women in India has always been upheld by the Indian Constitution and lawmakers. Women are given a place of dignity in all the legislation. Since women in India have been suppressed since ancient times, legislators took special care to involve and protect women in the mainstream world.

The conclusion at this stage can be drawn that women in India are protected by constitutional and fundamental rights. All other legislation has its source in the Constitution; thus it is a constitutional mandate to protect women from sexual harassment at the workplace.

It is the duty and responsibility of the employer to uphold the rights of women in his or her institution. The recent Act of 2013 must also be implemented properly.

Chapter – XIII – Suggestions

This chapter will contain suggestions for victims as well as for employers. It will also contain suggestions for prevention policies and the duties of the employer. The suggestions will include how one can strike a balance between constitutional rights and the rights of women.

Structure and issues of race within the international system/relations

“The problem of the twentieth century is the problem of the color line.” (Du Bois, n.d.)

Race has been at the epicenter of everything, and it has propagated through the centuries in several forms: economics, geography, education, health and socio-politics. This essay discusses and explains the structure and issues of race within the international system and international relations: its evolution and development, how it impacts nations and their populations, and the elements of race and colonialism in structures of power, followed by the formation of the successful and long-lasting Eurocentric modern capitalism, which is still present in society and acts as a pattern of global hegemony (LeMelle, 1972).

Whether defined as a grouping of humans according to their physical or ethnic characteristics (En.wikipedia.org, 2018) or as a contribution to and a product of stratification (LeMelle, 1972), race has conditioned and influenced many people across the globe, including their governance and leadership. From a socially constructed conception (En.wikipedia.org, 2018) to a major and predominant constraint on the global order and politics, the denotation of ‘race’ itself evolved, building a hierarchy between peoples and nations across the world: white versus Black, Asian and other ethnic groups (BAME). This division led to a different way in which human beings perceived themselves, reinforcing aspects such as levels of development, civilizational values, history, religion, culture and traditions, physical features, garments and, mainly, color (Jacques, 2003).

Race played and still plays a significant role in the world order. With its hierarchical status more than solidified, positioning whites as the dominant class and non-whites as subordinates, it easily breeds racism, discrimination, inequality and conflict, perpetuating the ideology of a ‘White Man’s World’ (LeMelle, 1972).

This expression was implemented and widely spread by Europeans with the intent of classifying and dividing populations according to their ethnicity and backgrounds. With voyages of discovery, colonialism, slavery and imperialism perceived as great sources of income and prosperity, it became easier for Europeans to act on their sense of white supremacy and go beyond their borders to pursue these hateful and money-driven causes (LeMelle, 1972). Even though colonialism and slavery are over per se, the disparity among people in modern society is overwhelmingly large (Nkrumah, 1965); race is a structure that conditions and influences the power and actions of actors in the international realm and, in fact, remains beneficial for the transatlantic couple, the EU and the US. It is then obvious that South American, Asian and African civilizations were reduced solely to their post-colonial and inferior identities, and that the conception of a modern and civilized Europe still lingers, powerful and wealthy, alienating others from participation in the historical, cultural and financial contributions to the international system (Shilliam, 2011).

So how can this term have such a credible structure of correlation and such an effect on the international system as we know it? The three antecedents mentioned previously generated a meaningful advantage for the Transatlantic couple in regard to the rest of the globe: big decision-makers in IGOs such as the UN, WTO, IMF, NATO or the World Bank; leaders of renowned institutions such as banks, universities and hospitals; predominant winners of warfare in events like WWI, WWII and the Cold War; huge influencers in culture, law, democracy, science, technology, engineering, religion and immigration; and responsible for the rise of capitalism and its role in globalization too. The Transatlantic couple is then easily seen as the global hegemon and the face of the international system, without any accountability for the fact that their strength and mighty development was built on the back and discreditation of the BAME population.

What also widely perpetuates the state of devaluation that non-white nations currently find themselves in is the lack of international opportunity and mobility, poverty and debt crises, uneven development rates, rapid population growth paired with poor wealth distribution, disproportionate citizenship status, high migration flows (LeMelle, 1972), neo-colonialism and dependency on Western states, an imbalance of life chances and success by race and by state/region, and a white-privileged global society.


Effects of the murder of Stephen Lawrence on policing procedures

This essay will analyse the effects that the murder of Stephen Lawrence, which led to the Macpherson report, had on changes in police procedures and policy, especially concerning ethnic communities. The report itself includes seventy recommendations to tackle racism in the police force, with the race relations legislation being an important policy for improving procedures, as well as an investigation into the Metropolitan police force for institutional racism and the procedural failures surrounding the Stephen Lawrence case (The Guardian, 1999).

The murder of eighteen-year-old Stephen Lawrence occurred on 22nd April 1993, when the young man was stabbed, resulting in his death; however, it was not until January 2012 that two individuals were found guilty of his murder (BBC, 2018). The Macpherson report that followed the murder outlined changes in practice, including the “abolishment of the double jeopardy rule”; before this rule was abolished, an individual could not be tried again for a crime they had previously been found not guilty of. This was a vital change in policy, as it led to the conviction of the individuals found guilty of Stephen Lawrence’s murder (The Guardian, 2013). This suggests a positive effect on policing procedures, as cases succeeding the Stephen Lawrence case may not have achieved a conviction had the double jeopardy rule not been abolished. This is evident in the ‘Babes in the Wood’ murder, where this change in policy had a positive effect on policing procedures: the police were able to use new forensic evidence thirty-two years later to convict the murderer of the two young victims (BBC, 2018).

Bowling and Phillips (2002, cited in Newburn, 2017, p.854) suggested that the recommendations outlined in the Macpherson report led “to the most extensive programme of reform in the history of the relationship between the police and ethnic minority communities.” This suggests a positive effect of the Macpherson report: some of the changes in policy and police procedures actioned by the recommendations have meant that the police have begun to regain the trust of ethnic minority communities, which will support police practice in the future.

The 2009 Home Affairs committee report, written ten years after the Macpherson publication, highlights whether and how the seventy improvement proposals outlined in the report had been met at the time of publication. The Home Affairs report highlighted that Duwayne Brooks suggested an important area for progression was the “introduction of appropriately trained family liaison officers in critical incident” (Parliament, 2009). The report highlights that this key improvement in police procedure, surrounding appropriate training for family liaison officers to deal with critical incidents, has improved family liaison officers’ ability to ‘maintain relationships with families’ whilst obtaining necessary evidence, and has improved confidence in the police within the black community (Parliament, 2009). Thus, this change in policing procedure and policy, due to the Macpherson report, has had a positive effect, especially within ethnic communities. The report also highlights that this change has positively affected homicide detection rates, which the report indicated at 90%, “the highest of any large city in the world” (Parliament, 2009).

However, there are still issues surrounding police procedures, especially within ethnic minority communities, where the Macpherson report improvements may not have been positively actioned. This can be seen in the stop and search rates: policing statistics published by the government for the period 2016/2017 suggest that the rate for white individuals stopped and searched was 4 per 1,000, whereas the rate for black individuals was 29 per 1,000 (Gov, 2018), over seven times higher. Thus, police are stopping proportionally more black individuals than white, which may still suggest an element of institutional racism in the way police conduct this procedure.

The Prison Reform Trust also highlights an over-representation of black and minority ethnic (BME) groups in prisons. With the supporting evidence of the Lammy review, the trust suggests that there is a clear correlation between the ethnicity of an individual and custodial sentences being issued (Prison Reform Trust, 2019), suggesting discrimination in police procedures and the court system. Therefore, this may suggest that the Macpherson report’s improvements have not been positively actioned in some elements of the criminal justice system.

In conclusion, where the recommendations have been put into place and are actively being worked upon, the Macpherson report has provided positive effects on police procedures and policy. However, evidence such as the stop and search statistics shows that there are still issues in policing procedures and policy that need to be addressed.


Boohoo marketing and communication (PESTEL, SWOT)

This report will focus on a three-year Marketing Strategy Plan for Boohoo and a one-year Communication Plan, exploring what improvements Boohoo can make across their shopping experience, their site and their social media to drive sales in the UK market.

Methodology

Primary research

For my primary research I created a questionnaire on Survey Monkey to find out about customer experience with the Boohoo brand. The sample of people used in the primary research ranged in age from 16 to 25, the main consumers Boohoo targets. Secondary research was carried out through websites such as WGSN.

Brand History

Boohoo is a UK online fashion retailer, founded in 2006 by Mahmud Kamani and Carol Kane. The brand specialises in its own-brand fashion clothing, selling over 36,000 products including accessories, clothing, footwear and health and beauty. Boohoo also runs BoohooMAN, NastyGal and PrettyLittleThing, all of which are targeted at 16-24 year olds.

Mission statement

Here at boohoo we are very proud of our brand and what we have achieved. Day to day we live by four key values that help us to continue to succeed and are at the heart of everything we do. This is our PACT, the values that seal the deal for boohoo.

The key issue Boohoo faces is that there are many very similar retailers out there, including Missguided and Pretty Little Thing.

Macro/micro trends

Macro Trends

Political factors

Wage legislation – minimum wage and overtime
Work week regulations in retail
Product labelling
Taxation – tax rates and incentives
Mandatory employee benefits

Economic factors

Exchange rates
Labour costs
Economic growth rate
Unemployment rate
Interest rates
Inflation rates
Education level in the economy

Social factors

Class structure/hierarchy
Power structure in the society
Leisure interests
Attitudes (health, environmental consciousness)
Demographics and skill level of the population

Technological factors

Recent technological developments by Boohoo’s competitors
Impact on cost structure in Retail industry
Rate of technological diffusion
Technology’s impact on product offering

Environmental factors

Climate change
Weather
Recycling
Air and water pollution regulation in Retail Industry
Waste management in consumer services sector

Legal factors

Copyright
Data protection
Employment law
Health and safety law
Discrimination law

SWOT

Strengths

Strong distribution network – Boohoo has built a trustworthy distribution network that is able to reach most of its potential market
New markets – Boohoo has been entering new markets and making a success of them, such as BoohooMAN and PrettyLittleThing. This development has helped Boohoo build a new revenue stream.
Good returns on capital expenditure – Boohoo has made good returns on capital expenditure by creating new revenue streams.
Reliable suppliers – Boohoo has strong, reliable suppliers of raw materials, enabling the company to overcome supply chain holdups

Weaknesses

Investments in new technologies – given the scale of expansion and the new geographies Boohoo is planning to enter, the company will need to put more money into technology, as its current investment in technology is not in balance with its vision.
The profitability ratio and net contribution % of Boohoo are below the industry average.
Global competition such as Missguided, Topshop, Asos and H&M
No flagship stores

Opportunities

Opening up of new markets because of government agreements – the approval of new technology standards and government free trade agreements has provided Boohoo with an opportunity to enter new emerging markets
Lower inflation rate – a low inflation rate brings more stability to the market and enables credit at lower interest rates for Boohoo customers
New technology gives Boohoo an opportunity to maintain its loyal customers with great service and to lure new customers through other value-positioned plans.
Continue using celebrity endorsements
Creating an online chat on their website that allows customers to receive 24-hour help.

Threats

Poor quality products compared to Boohoo’s competitors
Increased competition within the industry
Technological developments by competitors – new technological developments by competitors pose a threat to Boohoo, as customers attracted to new technology can be lost to competitors, which would decrease Boohoo’s overall market share.

Competitor analysis

The retail industry faces strong competition, and Boohoo has many competitors such as ASOS, Missguided, New Look and H&M. One way many consumers experiment with different trends across brands is to choose cheaper retailers, which in this case would include Boohoo.


Why do employees leave organisations? / Can a business force an employee to retire?

Some of the reasons employees leave organisations are poor culture, poor work-life balance, a need for greater flexibility, a poor manager-employee relationship, lack of communication, poor pay, no room for growth and poor working conditions. However, employees will stay with an organisation if there is a good work-life balance, a sense of reward, good benefits packages, competitive salaries, a fun environment to work in, recognition and a financial need.

Peter Cheese, chief executive of the Chartered Institute of Personnel and Development, said: “It definitely takes time to get a new employee up to speed. It depends on the nature of the job; on one end of the spectrum, somewhere like McDonald’s can get new employees up to speed very quickly. On the other hand, there is a business development person in a professional development organisation where you’ve got to spend quite some time understanding the network and building connections to the client base and so forth, then three to six months is probably fairly typical” (Replacing Staff Cost)

It’s important to understand the reasons why employees wish to leave an organisation, as there are costs associated with dysfunctional employee turnover; these costs may not only be financial but can also be intrinsic and reputational. Intrinsic knowledge loss is difficult to measure but would be a loss either way: if an employee brought clients to the business or built fantastic relationships with clients whilst employed, that employee leaving the business would be detrimental. Reputational damage can cost the business immensely; if an organisation doesn’t treat its employees or ex-employees well and this becomes known, it can be hard for the organisation to attract good talent and clients. According to an article in The Telegraph (Financial), replacing staff can cost up to £4 billion a year, an average of £30k per person.

One method for retaining talent in an organisation is to ensure there is an open and inclusive culture which promotes communication. One way of doing this could be to ensure the language used by the senior team, inclusive of HR, is, as Lucy Adams says, “human” (Human). Adams surmises that when we use jargon, the company can end up creating a distance between itself and the employee; whereas if we converse in a human approach using everyday language, we have the opportunity to create a more cohesive working team, where employees can feel involved in dialogue, which could lead to greater engagement by encouraging a more human approach.

Of course, “jargon” has come about partly due to a cultural need for some departments, such as HR, to retain a professional and non-committal distance. For example, if HR as an advisory agent apologised directly for offence caused to an employee, there could be ramifications later through allegations of admittance in sensitive situations.

Therefore, it’s vital that when encouraging engagement through treating employees as humans and not numbers, due consideration is given to the wider ramifications of a change in language, and that the method of communication, such as platform, surveys and daily briefings, is also considered.

Another method employed by businesses is the approach of ensuring employees receive a greater work life balance. This can be done through several mediums:

Flexible Working (statutory and company culture)
Seeing the individual as a whole
Introduction of TOIL – for additional hours worked
Implementing training for time management; according to HR Review (Poor TM), “One of the biggest causes of stress in the workplace is poor time management”
Increased leave benefits (holiday, paternity, maternity); for example, according to Glassdoor (Women), Accenture pays nine months’ maternity leave at full pay

Such provisions can clearly be open to abuse, and this can be a possible downside.

—-

Under the Equality Act 2010 [Equ Act2010], The Advisory, Conciliation and Arbitration Service (Acas) stated that when managing retirement [Retirement], older workers can voluntarily retire at a time they choose and draw any occupational pension they are entitled to. However, employers cannot force employees to retire or set a retirement age unless it can be objectively justified as what the law terms ‘a proportionate means of achieving a legitimate aim’ – [Please see appendix 8]. Acas have said that a direct question such as ‘when are you retiring?’ should be avoided; instead, open-ended questions, such as where employees see themselves in a few years and their contribution to the organisation, could be asked, for example during a performance development review. An employee can change their mind at any time about retiring until they have handed in their formal notice.

An employer cannot compulsorily retire an employee, as this would leave the employer open to a complaint of unfair dismissal. When managing a dismissal, Acas states [Dismissal 2019] it is always best to try and resolve any issues informally first.

According to the Employment Rights Act 1996 [ERA 1996], employees have the right not to be unfairly dismissed; companies need to set out clear rules and procedures, act consistently when handling disciplinary procedures and ensure employees and managers understand the procedures and rules.

Provided a fair procedure is followed, an employee can be fairly dismissed for one of the following reasons: capability (including the inability to perform competently), redundancy, conduct or behaviour, breach of a statutory restriction (such as employing someone illegally) or some other substantial reason (such as a restructure that is not a redundancy).

Before holding a disciplinary hearing, an investigation should be carried out and the employee given any evidence in time to prepare for the meeting. The employee should also be given the opportunity to bring a trade union rep or a colleague; although the companion cannot answer questions on the employee’s behalf, they can ask them.

The employee should be given opportunity to share their side of the situation and challenge evidence.

If the disciplinary is based on performance, the employee should be given support, training and an opportunity to improve. Companies should not sack employees for a first offence unless it is gross misconduct, and any penalty should reflect the seriousness of the act; staff can usually appeal against verbal, first written and final written warnings.

If an employee has been with the company for less than two years, they do not have unfair dismissal rights, with exceptions around discrimination and equality.

CIPD tells us that [redundancy CIPD] redundancy is a special form of dismissal which happens when an employer needs to reduce the size of its workforce. An employee is dismissed for redundancy if the following conditions are satisfied:

the employer has ceased, or intends to cease, continuing the business, or
the requirements for employees to perform work of a specific type, or to conduct it at the location in which they are employed, have ceased or diminished, or are expected to do so.

If there is a genuine redundancy, employers that follow the correct procedure will be liable for:

a redundancy payment, and
notice period payment.

Employers that don’t follow the correct procedure may be liable for unfair dismissal claims or protective awards. Redundancy legislation is complex and is covered by statute and case law, with both determining employers’ obligations and employees’ rights.


Should we fight against tort reform?

The controversy around tort reform has turned into a two-sided debate between citizens and corporations. Examination of various cases in recent years makes clear that the effects of tort reform have proven negative for both sides. This issue continues today, as public relations campaigns and legislatures show a clear difference in opinion. In the event that tort reform occurs, victims and plaintiffs will be prevented from being fully compensated for the harm they suffered, making this process of the civil justice system unfair.

In the justice system, there are two forms of law: criminal law and civil law. The most well-known form is probably criminal law, where the government (the prosecutor) fights a defendant regarding a crime that may or may not have been committed. In contrast, civil law has a plaintiff and a defendant who fight over a tort. As stated in the dictionary, a tort is “a wrongful act or an infringement of a right (other than under contract) leading to civil legal liability”. In essence, a tort in a civil case corresponds to a crime in a criminal case.

Tort reform refers to the passing of legislation, or a court ruling, that limits in some way the rights of an injured person to seek compensation from the person who caused the accident (“The Problems…Reform”). Tort reform also includes subtopics such as public relations campaigns, caps on damages, judicial elections, and mandatory arbitration. Lawmakers across the United States have been heavily involved with tort reform since the 1950s, and it has only grown in popularity since then. Ex-president George W. Bush urged Congress to pursue reform in 2005 and brought tort reform to the table like no other president.

The damages often referred to in civil lawsuits are economic damages and non-economic damages. An economic damage is any monetary cost that results from the defendant’s actions, for example medical bills or repair costs. Non-economic damages refer to emotional stress, post-traumatic stress disorder, and other impacts not related to money. A cap on damages “limits the amount of non-economic damage compensation that can be awarded to a plaintiff” (US Legal Inc).

Caps on damages are the most common practice of tort reform. In New Mexico, Susan Seibert says that she was hospitalized for more than nine months because a doctor made an error during her gynecological procedure. After suing, she was supposed to receive $2.6 million in damages, which was then reduced to $600,000 because of a cap on damages. Seibert still suffers from excessive debt as a result of not being given the amount of money she deserved. Caps on damages highly impact the plaintiffs in a case. As previously mentioned, plaintiffs sue because they need money in order to fully recover from the hardship they endured as a result of the defendant’s actions.

A type of tort reform that is not as well known is specialized medical courts. Currently, all medical malpractice courts have juries with little to no medical background. This has worked well because it means an unbiased verdict is reached. However, the organization Common Good is trying to pass the creation of special medical courts, in which the judge and jury would be trained medical professionals who deeply evaluate the case. Advocates for this court feel that people would be better compensated for what they really deserve. However, the majority of opinions are against the idea. The most common view among those who oppose this new system is that it would put patients at a disadvantage: trained medical judges and juries would be more likely to side with the doctor/surgeon/defendant than with the plaintiff. They believe the fairest and most efficient way to judge medical malpractice cases is to use the existing civil justice system. One of the most famous medical malpractice cases, involving Dana Carvey, ended in a settlement, but could have gone much worse for Carvey had the judge and jury been medical professionals. Carvey was receiving a double bypass and had a surgeon who operated on the wrong artery. Had this case gone to a medical court, it is easily predictable that the verdict would have been that the doctor made a “just” mistake: the jury would have said that this mistake was not easily preventable and could have been assumed as a risk going into the surgery. However, this case did not go to court; rather, it ended in a $7.5 million settlement.

Another form of tort reform is mandatory arbitration. Mandatory arbitration, as described in the article “Mandatory Arbitration Agreements in Employment Contracts”, is “a contract clause that prevents a conflict from going to a judicial court”. This has affected many employees who have experienced sexual harassment, stealing of wages, racial discrimination, and more. Often, “employees signed so-called mandatory arbitration agreements that are the new normal in American workplaces” (Campbell). These agreements are buried under stacks of papers that have to be signed throughout the hiring process, and the manager will press the new employee to sign them. Most of the time, these documents will not be called a “Mandatory Arbitration Agreement”; rather, they carry legalese names like “Alternative Dispute Resolution Agreement” (Campbell). “Between employee and employer, this means that any conflict must be solved through arbitration” (“Mandatory Arbitration Agreements in Employment Contracts”). When a conflict is solved through arbitration, “neutral arbiters” go through the evidence that the company and client present, and those arbiters decide what they think the just outcome should be, whether that is money, loss of a job, or something else. This decision is known as the arbitration award.

A place where the effects of mandatory arbitration can be seen is the #MeToo movement. With the rise of this movement, more and more women have been coming out about their experiences with sexual harassment in the workplace. These women are then encouraged to fight against their harasser. Ultimately, many of these women find out that they are not allowed to sue because of the mandatory arbitration agreements they signed during the hiring process. In fact, Debra S. Katz wrote an article for The Washington Post called “30 million women can’t sue their employer over harassment”, showing how widespread the issue is. Evidently, this form of tort reform affects the lives of over 30 million people. These women could be suffering from post-traumatic stress disorder, trauma, and more from their experiences with sexual harassment. If this form of tort reform is not abolished, more and more women will suffer from mandatory arbitration.

By limiting the amount of money and reparations that a defendant has to pay a plaintiff, tort reforms benefit major corporations. On the opposite side, the plaintiff suffers greatly from these limitations. In many cases, a plaintiff sues because they need the money to recover fully from the event that took place. For example, the documentary “Hot Coffee” discusses many tort cases in which the plaintiff suffered under the current rules regarding caps, mandatory arbitration, and more. Tort reform would further exacerbate the negatives of modern-day civil court cases.

Groups such as the American Tort Reform Association (ATRA) and Citizens Against Lawsuit Abuse (CALA) have also been active in fighting for tort reform. Alongside these campaigns, other issues with tort reform, such as the fairness of caps on damages, have exposed inequity in the civil justice system. Supporters of tort reform have been rallying for a common goal: to limit citizens’ ability to use the litigation process, in order to protect businesses and companies.

In the event that tort reform occurs, victims and plaintiffs will be prevented from receiving the reparations they deserve for the hardship and suffering caused by the defendant’s actions. Caps on damages, special medical malpractice courts, and mandatory arbitration are just a few of the negative impacts that tort reform would allow. Victims and plaintiffs sue the defendant to receive the full compensation they deserve. It is hard enough as it is to fight against major corporations, and tort reform would make it harder. Americans have the right to a fair trial, and the implementation of tort reform would take away that constitutionally given right. It is essential that Americans continue to fight against tort reform, as you never know if you may become the next victim.


Chinese suppression of Hong Kong

Would you fight for democracy? Its core principles are the beating heart of our society: providing us with representation, civil rights and freedom — empowering our nation to be just and egalitarian. However, whilst we cherish our flourishing democracy, we have blatantly ignored one of the most portentous democratic crises of our time. The protests in Hong Kong. Sparked by a proposed bill allowing extradition to mainland China, the protests have ignited the city’s desire for freedom, democracy and autonomy; and they have blazed into a broad pro-democracy movement, opposing Beijing’s callous and covert campaign to suppress legal rights in Hong Kong. But the spontaneity fueling these protests is fizzling out, as minor concessions fracture the leaderless movement. Without external assistance, this revolutionary campaign could come to nothing. Now, we, the West, must support protesters to fulfill our legal and moral obligations, and to safeguard other societies from the oppression Hong Kongers are suffering. The Chinese suppression of Hong Kong must be stopped.

Of all China’s crimes, its flagrant disregard for Hong Kong’s constitution is the most alarming. When Hong Kong was returned to China in 1997, the British and Chinese governments signed the Sino-British Joint Declaration, allowing Hong Kong “a high degree of autonomy, except in foreign and defence affairs” until 2047. This is allegedly achieved through the “one country, two systems” model, currently implemented in Hong Kong. Nevertheless, the Chinese government — especially since Xi Jinping seized power in 2013 — is relentlessly continuing to erode legal rights in our former colony. For instance, in 2016, four pro-democracy lawmakers — despite being democratically elected — were disqualified from office. Amid the controversy surrounding the ruling lurked Beijing, using its invisible hand to crush the opposition posed by the lawmakers. However, it is China’s perversion of Hong Kong’s constitution, the Basic Law, that has the most pronounced and crippling effect upon the city. The Basic Law requires Hong Kong’s leader to be chosen “by universal suffrage upon nomination by a broadly representative nominating committee”; but this is strikingly disparate from reality. Less than seven percent of the electoral register are allowed to vote for representatives in the Election Committee — who actually choose Hong Kong’s leader — and no elections are held for vast swathes of seats, which are thus dominated by pro-Beijing officials. Is this really “universal suffrage”? Or a “broadly representative” committee? This “pseudo-democracy” is unquestionably a blatant violation of our agreement with China. If we continue to ignore the subversion of the fundamental constitution holding Hong Kong together, China’s grasp over a supposedly “autonomous” city will only strengthen. It is our legal duty to hold Beijing to account for these heinous contraventions of both Hong Kong’s constitution and the Joint Declaration — which China purports to uphold. Such despicable and brazen actions, whatever the pretence, cannot be allowed to continue.

The encroachment of their fundamental human rights is yet another travesty. Over the past few years, the Chinese government has been furtively extending its control over Hong Kong. Once, Hong Kongers enjoyed numerous freedoms and rights; now, they silently suffer. Beijing has an increasingly pervasive presence in Hong Kong, and, emboldened by a lack of opposition, it is beginning to repress anti-Chinese views. For example, five booksellers, associated with one Hong Kong publishing house, disappeared in late 2015. The reason? The publishing house was printing a book — which is legal in Hong Kong — regarding the love-life of the Chinese president Xi Jinping. None of the five men were guilty; all five men later appeared in custody in mainland China. One man even confessed on state television, obviously under duress, to an obscure crime he “committed” over a decade ago. This has cast a climate of paranoia over the city, which is already forcing artists to self-censor for fear of Chinese retaliation; if left unchecked, this erosion of free speech and expression will only worsen. Hong Kongers now live with uncertainty as to whether their views are “right” or “wrong”; is this morally acceptable to us? Such obvious infringements of rights to free speech are clear contraventions of the core human rights of people in Hong Kong. Furthermore, this crisis has escalated with the protests, entangling violence in the political confrontations. Police have indiscriminately used force to suppress both peaceful and violent protesters, with Amnesty International reporting “Hongkongers’ human rights situation has violations on almost every front”. The Chinese government is certainly behind the police’s ruthless response to protesters, manipulating its pawns in Hong Kong to quell dissent. This use of force cannot be tolerated; it is a barefaced oppression of a people who simply desire freedom, rights and democracy, and it contradicts every principle that our society is founded upon. If we continue abdicating responsibility for holding Beijing to account, who knows how far this crisis will deteriorate? Beijing’s oppression of Hong Kongers’ human rights will not disappear. Britain — as a UN member, former sovereign of Hong Kong and advocate for human rights — must make a stand with the protesters, who embody the principles of our country in its former colony.

Moreover, if we do not respond to these atrocities, tyrants elsewhere will only be emboldened to further strengthen their regimes. Oligarchs, autocrats and dictators are prevalent in our world today, with millions of people oppressed by totalitarian states. For instance, in India, the Hindu nationalist government, headed by Narendra Modi, unequivocally tyrannizes the people of Kashmir: severing connections to the internet, unlawfully detaining thousands of people and reportedly torturing dissidents. The sheer depravity of these atrocities is abhorrent. And the West’s reaction to these barbarities? We have lauded and extolled Modi as, in the words of then-president Barack Obama, “India’s reformer in chief”, apathetic to the outrages enacted by his government. This exemplifies our seeming lack of concern for other authoritarian regimes around the world: from our passivity towards the Saudi Arabian royal family’s oppressive oligarchy to our unconcern about the devilish dictatorship of President Erdoğan in Turkey. Our hypocrisy is irrefutable; this needs to change. The struggle in Hong Kong is a critical turning point in our battle against such totalitarian states. If we remain complacent, China will thwart the pro-democracy movement and Beijing will continue to subjugate Hong Kong unabashed. Consequently, tyrants worldwide will be emboldened to tighten their iron fists, furthering the repression of their peoples. But, if we support the protesters, we can institute a true democracy in Hong Kong. Thus, we will set a precedent for future democracies facing such turbulent struggles in totalitarian states, establishing an enduring stance for Western democracies to defend. But to achieve this, we must act decisively and immediately to politically pressure Beijing to make concessions, in order to create a truly autonomous Hong Kong.

Of course, the Chinese government is trying to excuse its actions. It claims to be merely maintaining order in a city of its country, while Western powers fuel protests in Hong Kong. Such fabrications from Chinese spin-doctors are obviously propaganda. There is absolutely no evidence to corroborate the claim of “foreign agents” sparking violence in Hong Kong. And, whilst some protesters are employing aggressive tactics, their actions are justified: peaceful protests in the past, such as the Umbrella Movement of 2014, yielded no meaningful change. Protesters are being driven to violence by Beijing, which stubbornly refuses to propose any meaningful reforms.

Now, we face a decision, one which will have profound and far-reaching repercussions for all of humanity. Do we ignore the egregious crimes of the Chinese government, and in our complacency embolden tyrants worldwide? Or do we fight? Hong Kongers are enduring restricted freedoms, persecution and a perversion of their constitution; we must oppose this oppression resolutely. Is it our duty to support the protesters? Or, is democracy not worth fighting for?


Occurrence and prevalence of zoonoses in urban wildlife

A zoonosis is a disease that can be transmitted from animals to humans. Zoonoses in companion animals are known and described extensively, and a lot of research has already been done: Rijks et al (2015), for example, list the 15 diseases of prime public health relevance, economic importance or both (Rijks(1)), and Sterneberg-van der Maaten et al (2015) composed a list of the 15 priority zoonotic pathogens, which includes the rabies virus, Echinococcus granulosus, Toxocara canis/cati and Bartonella henselae (Sterneberg-van der Maaten(2)).

Although the research is extensive, the knowledge about zoonoses and hygiene among owners, health professionals and related professions, such as pet shop employees, is low. According to Van Dam et al (2016)(3), 77% of pet shop employees do not know what a zoonosis is, and just 40% of pet shops have a protocol for hygiene and disease prevention; 27% of pet shops and shelters give instruction to their clients about zoonoses. It may therefore be assumed that the majority of the public is unaware of the health risks involving companion animals like cats and dogs. Veterinarians give information about responsible pet ownership and the risks when the pet owner visits the clinic (Van Dam(3), Overgaauw(4)). In other words, knowledge obtained from research has not been disseminated effectively.

However, urban areas are not only populated with domestic animals. There is also a variety of non-domesticated animals living in close vicinity to domesticated animals and the human population: the so-called urban wildlife. Urban wildlife is defined as any animal that has not been domesticated or tamed and lives or thrives in an urban environment (freedictionary(5)). Just like companion animals, urban wildlife carries pathogens that are zoonotic, for example Echinococcus multilocularis, a parasite that can be transmitted from foxes to humans. Another example is the rabies virus, which is transmitted by hedgehogs and bats. Some zoonotic diseases can be transmitted to humans from different animals; Q-fever occurs in mice, foxes, rabbits and sometimes even in companion animals.

There is little knowledge about the risk factors that influence the transmission of zoonoses in urban areas (Mackenstedt(6)). This is mostly due to the lack of active surveillance of carrier animals. Such surveillance requires fieldwork, which is expensive and time-consuming, and often yields no immediate result for public-health authorities; this is why surveillance is often initiated only during or after an epidemic (Heyman(7)). Meredith et al (2015) mention that, due to the unavailability of a reliable serological test, for many species it is not yet known what their contribution is to transmission to humans (Meredith(8)).

The general public living in urban areas is largely unaware of the diseases transmitted by the urban wildlife present in their living area (Himsworth(9)), (Heyman(7)), (Dobay(10)), (Meredith(8)). Since all these diseases can also pose a risk to public health, the public may need to be informed of these risks.

The aim of this study is to determine the occurrence and prevalence of zoonoses in urban wildlife. To do this, the ecological structure of a European city will be investigated first, to determine the wildlife living in urban areas. Secondly, an overview of the most common and important zoonoses in companion animals will be discussed, followed by zoonoses in urban wildlife.

2. Literature review

2.1 Ecological structure of the city

Humans and animals live closely together in cities; both companion animals and urban wildlife share the environment with humans. Companion animals are important to human society: they perform working roles (e.g. assistance dogs for hearing- or visually-impaired people) and they play a role in human health and childhood development (Day(11)).

A distinction can be made between animals that live in the inner city and animals that live in the outskirts of the city. The animals that live in the majority of European inner cities are brown rats, house mice, bats, rabbits and different species of birds. Those living outside the built-up inner city are other species of mice, hedgehogs, foxes and moles (Auke Brouwer(12)). In order to create safe passage for this latter group of animals, ecological structures are created. These structures also include wet passageways for amphibians and snakes and dry passageways such as underground tunnels, special bridges and cattle grids (Spier M(13)).

A disadvantage of humans and animals living in close vicinity to each other is the possibility of transmitting diseases (Auke Brouwer(12)). Diseases can be transmitted from animals to humans in different ways, for example through eating infected food, inhalation of aerosols, via vectors or via fecal-oral contact (WUR(14)). The most relevant routes of transmission for this review are: indirect physical contact (e.g. contact with a contaminated surface), direct physical contact (touching an infected person or animal), through skin lesions, fecal-oral transmission and airborne transmission (aerosols). In the following section an overview of significant zoonoses of companion animals will be given. This information will enable a comparison with urban wildlife zoonoses later in this review.

2.2 Zoonoses of cats and dogs

There are many animals living in European cities, both companion animals and urban wildlife. 55-59% of Dutch households have one or more companion animals (van Dam(3)), including approximately 2 million dogs and 3 million cats (RIVM(15)). Across Europe there are approximately 61 million dogs and 66 million cats. Owning a pet has many advantages, but companion animals are also able to transmit diseases to humans (Day(11)). In the following section, significant zoonoses of companion animals will be described.

A. Bartonellosis (cat scratch disease)

Bartonellosis is an infection by Bartonella henselae or B. clarridgeiae. Most infections in cats are thought to be subclinical. If disease does occur, the symptoms are mild and self-limiting, characterized by lethargy, fever, gingivitis, uveitis and nonspecific neurological signs (Weese JS(16)). The seroprevalence in cats is 81% (Barmettler(17)).

Humans get infected by scratches or bites and sometimes by infected fleas and ticks. In the vast majority of cases, the infection is also mild and self-limiting. The clinical signs in humans include development of a papule at the site of inoculation, followed by regional lymphadenopathy and mild fever, generalized myalgia and malaise. This usually resolves spontaneously over a period of weeks to months (Weese JS(16)).

Few cases of human bartonellosis occur in The Netherlands. Based on laboratory diagnoses done by the RIVM, the bacterium causes 2 cases per 100.000 humans each year. However, this could be ten times higher, since the disease is mild and self-limiting most of the time, so most people do not visit a health care professional (RIVM(18)).

B. Leptospirosis

This disease is caused by the bacterium Leptospira interrogans. According to Weese et al (2002), leptospirosis is the most widespread zoonotic disease in the world. The bacterium can infect a wide range of animals (Weese(16)).

In dogs and cats, leptospirosis is a relatively minor zoonosis. It is not known exactly how many dogs are infected subclinically or asymptomatically each year, but according to Houwers et al (2009), around 10 cases occur annually in The Netherlands (Houwers(19)). RIVM states that 0,2 cases per 100.000 humans occur each year (RIVM(20)).

Infection in dogs is called Weil’s disease. Clinical signs can be peracute, acute, subacute or chronic. A peracute infection usually results in sudden death with few clinical signs. Dogs with an acute infection are icteric, have diarrhea, vomit and may experience peripheral vascular collapse. The subacute form is generally manifested as fever, vomiting, anorexia, polydipsia and dehydration, and in some cases severe renal disease can develop. Symptoms of a chronic infection are fever of unknown origin, unexplained renal failure, hepatic disease and anterior uveitis. The majority of infections in dogs are subclinical or chronic. In cats, clinical disease is infrequent (Weese(16)).

According to Barmettler et al (2011), the risk of transmission of Leptospira from dogs to humans is just theoretical. All tested humans were exposed to infected dogs, but all were seronegative to the bacteria (Barmettler(17)).

The same bacterium that causes leptospirosis in dogs, Leptospira interrogans, is responsible for the disease in rats. This bacterium is considered the most widespread zoonotic pathogen in the world, and rats are the most common source of human infection, especially in urban areas (Himsworth(21)). According to the author, the bacterium asymptomatically colonizes the rat kidney and the rats shed it via the urine (Himsworth(9)). The bacteria can survive outside the rat for some time, especially in a warm and humid environment (RIVM(20)).

People become infected through contact with urine, or through contact with contaminated soil or water (Himsworth(21)). The Leptospira bacteria can enter the body via mucous membranes or open wounds (Oomen(22)). The symptoms and severity of disease can be highly variable, ranging from asymptomatic to sepsis and death. Common complaints are headache, nausea, myalgia and vomiting. Moreover, neurologic, cardiac, respiratory, ocular and gastrointestinal manifestations can occur (Weese JS(16)).

The prevalence in rats differs between cities and even between locations in the same city. Himsworth (2013) states that in Vancouver 11% of the tested rats were positive for Leptospira (Himsworth(9)). Another study by Easterbrook (2007) found 65,3% of all tested rats in Baltimore to be positive for the bacteria (Easterbrook(23)). Krojgaard (2009) found a prevalence between 48% and 89% at different locations in Copenhagen (Krojgaard(24)).

C. Dermatophytosis (ringworm)

Dermatophytosis is a fungal dermatologic disease, caused by Microsporum spp. or Trichophyton spp. It causes disease in a variety of animals (Weese(16)). According to Kraemer (2012), the dermatophytes that occur in rabbits are Trichophyton mentagrophytes and Microsporum canis, although the former is more common (Kraemer(25)).

Dermatophytes live in the keratin layers of the skin and cause ringworm. They depend on human or animal infection for survival. Infection occurs through direct contact between dermatophyte arthrospores and keratinocytes/hairs. Transmission through indirect contact also occurs, for example through toiletries, furniture or clothes (Donnelly(26), RIVM(18)). Animals (especially cats) can transmit M. canis infection while remaining asymptomatic (Weese JS(16)).

The symptoms in both animals and humans can vary from mild or subclinical to severe lesions similar to pemphigus foliaceus (itching, alopecia and blistering). The skin lesions develop 1-3 weeks after infection (Weese JS(16)). Healthy, intact skin cannot be infected, but only mild damage is required to make the skin susceptible to infection. No living tissue is invaded; only the keratinized stratum corneum is colonized. However, the fungus does induce an allergic and inflammatory eczematous response in the host (Donnelly(26), RIVM(18)).

Dermatophytosis does not occur commonly in humans: RIVM states that each year 3000 per 100.000 humans get infected. Children between the ages of 4 and 7 are the most susceptible to the fungal infection. In cats and dogs, the prevalence of M. canis is much higher: 23,3% according to Seebacher(27). The prevalence in rabbits is 3,3% (d’Ovidio(28)).

D. Echinococcosis

Echinococcus granulosus can be transmitted from dogs to humans. Dogs are the definitive hosts, while herbivores or humans are the intermediate hosts. Dogs can become infected by eating infected organs, for example from sheep, pigs and cattle (RIVM(29)). The intermediate hosts develop a hydatid cyst with protoscoleces after ingesting eggs produced and excreted by definitive hosts. The protoscoleces evaginate in the small intestine and attach there (MacPherson(30)).

In most parts of Europe, Echinococcus granulosus occurs only occasionally. However, in Spain, Italy, Greece, Romania and Bulgaria the parasite is highly endemic.

Animals, either as definitive or as intermediate hosts, rarely show symptoms.

Humans, on the other hand, can show symptoms, depending on the size and site of the cyst and its growth rate. The disease can become life-threatening if a cyst in the lungs or liver bursts; in that case a possible complication is anaphylactic shock (RIVM(29)).

In the Netherlands, echinococcosis rarely occurs in humans. Between 1978 and 1991, 191 patients were diagnosed, but it is not known how many of these were new cases. The risk of infection is higher in the case of bad hygiene and living closely together with dogs (RIVM(29)). In a study done by Fotiou et al (2012), the prevalence of Echinococcus granulosus in humans is 1,1% (Fotiou(31)). The prevalence in dogs is much higher: 10,6% according to Barmettler et al(17).

E. Toxocariasis

Toxocariasis is caused by Toxocara canis or Toxocara cati. Toxocara is present in the intestine of 32% of all tested dogs, 39% of tested cats and 16%-26% of tested red foxes (Luty(32), Letková(33)). In dogs younger than 6 weeks the prevalence can be up to 80% (Kantere(34)), and in kittens of 4-6 months old it can be 64% (Luty(32)). The host becomes infected by swallowing the parasite’s embryonated eggs (Kantere(34)).

Dogs and red foxes are the definitive hosts of T. canis, cats of T. cati (Luty(32)). Humans are paratenic hosts. After ingestion, the larvae hatch in the intestine and migrate all over the body via blood vessels (visceral larva migrans). In young animals the migration occurs via the lungs and trachea. After being swallowed, the larvae mature in the intestinal tract.

In paratenic hosts and adult dogs that have some degree of acquired immunity, the larvae undergo somatic migration. There they remain as somatic larvae in the tissues. If dogs eat a Toxocara-infected paratenic host, larvae will be released and develop to adult worms in the intestinal tract (MacPherson(30)).

Humans can be infected by oral ingestion of infective eggs from contaminated soil, from unwashed hands or consumption of raw vegetables (MacPherson(30)).

The clinical symptoms in animals depend on the age of the animal and on the number, location and developmental stage of the worms. After birth, puppies can suffer from pneumonia because of tracheal migration and die within 2-3 days. At 2-3 weeks after birth, puppies can show emaciation and digestive disturbance because of mature worms in the intestine and stomach. Clinical signs are diarrhea, constipation, coughing, nasal discharge and vomiting.

Clinical symptoms in adult dogs are rare (MacPherson(30)).

In most human cases following infection by small numbers of larvae, the disease occurs without symptoms. It is mostly children who get infected; VLM is mainly diagnosed in children of 1-7 years old. The symptoms can be general malaise, fever, abdominal complaints, wheezing or coughing. Severe clinical symptoms are mainly found in children of 1-3 years old.

Most of the larvae seem to be distributed to the brain and can cause neurological disease. Larvae do not migrate continuously. They rest periodically, and during such periods they induce an immunologically mediated inflammatory response (MacPherson(30)).

The prevalence in children is much lower than in adults: 7% and 20% respectively. The risk of infection with Toxocara spp. increases with bad hygiene (Overgaauw(36)). In the external environment the eggs survive for months, and consequently toxocariasis represents a significant public health risk (Kantere(34)). High rates of soil contamination with Toxocara eggs have been demonstrated in parks, playgrounds, sandpits and other public places. Direct contact with infected dogs is not considered a potential risk for human infection, because embryonation to the stage of infectivity requires a minimum of 3 weeks (MacPherson(30)).

F. Toxoplasmosis

Toxoplasmosis is caused by the protozoan Toxoplasma gondii. Cats are the definitive hosts, and other animals and humans act as intermediate hosts. Infected cats excrete oocysts in the feces. These oocysts end up in the environment, where they are ingested by intermediate hosts (directly, or indirectly via food or water). In the intermediate host the protozoan migrates until it lodges in tissue, where it becomes encapsulated and remains. If cats eat infected intermediate hosts, they become infected.

Animals rarely show symptoms, although some young cats get diarrhea, encephalitis, hepatitis and pneumonia.

In most humans, infection is asymptomatic. Pregnant women can transmit the protozoan through the placenta and infect the unborn child. The symptoms in the child depend on the stage of pregnancy: an infection in the early stages leads to severe abnormalities and in many cases to abortion, while if the infection occurs at a later stage, premature birth is seen, along with symptoms of an infectious disease (fever, rash, icterus, anemia and an enlarged spleen or liver). In most cases, however, the symptoms start after birth, and most damage is done in the eyes (RIVM(37)).

Based on data from the RIVM and Overgaauw (1996), the disease most commonly transmitted to humans is toxoplasmosis. The prevalence was 40,5% in 1996; this number has declined over the last few decades, and Jones (2009) states that in 2009 the prevalence was 24,6% (Jones(38)). The prevalence rises with age, being 17,5% in humans younger than 20 years and 70% in humans of 65 years and older. There is no increased risk of infection for humans who have a cat as a pet (RIVM(37)). Birgisdottir et al (2006) studied the prevalence in cats in Sweden, Estonia and Iceland, finding a prevalence of 54,9%, 23% and 9,8% in Estonia, Sweden and Iceland, respectively (Birgisdottir(39)).

G. Q-fever

The aetiological agent of Q-fever is the bacterium Coxiella burnetii. The bacterium has a very wide host range, including ruminants, birds and mammals such as small rodents, dogs, cats and horses. Accordingly, there is a complex reservoir system (Meredith(8)).

The extracellular form of the bacterium is very resistant and can therefore persist in the environment for several weeks. It can also be spread by the wind, so direct contact with animals is not required for infection. Coxiella burnetii is found in both humans and animals in the blood, lungs, spleen and liver, and during pregnancy in large quantities in the placenta and mammary glands. It is shed in urine and feces, and during pregnancy in the milk (Meredith(8)).

Humans that live close to animals (as in the city) have a higher risk of infection, since the mode of transmission is aerogenic or by direct contact. The bacterium is excreted through the urine, feces, placenta or amniotic fluid; after drying, it is spread aerogenically (RIVM(40)). Acute infection is characterized by atypical pneumonia and hepatitis and in some cases transient bacteraemia. The bacterium then spreads haematogenously, which results in infection of the liver, spleen, bone marrow, reproductive tract and other organs. This is followed by the formation of granulomatous lesions in the liver and bone marrow and the development of an endocarditis involving the aortic and mitral valves (Woldehiwet(41)).

On the other hand, there is little information about the clinical signs of Q-fever in animals, but variable degrees of granulomatous hepatitis, pneumonia or bronchopneumonia have been reported in mice (Woldehiwet(41)). In pregnant animals, abortion or low foetal birth weight can occur (Meredith(8), Woldehiwet(41)).

The prevalence in the overall human population in Europe is not high (2,7%), but in risk groups such as veterinarians the prevalence can be as high as 83% (RIVM(40)).

Meredith et al have developed a modified indirect ELISA kit adapted for use in multiple species. They tested the prevalence of C. burnetii in wild rodents (bank vole, field vole and wood mouse), red foxes and domestic cats in the United Kingdom. The prevalence in the rodents was 17,3% overall; in cats it was 61,5% and in foxes 41,2% (Meredith(8)). In rabbits, the prevalence was 32,3% (González-Barrio(42)).

H. Pasteurellosis

Pasteurellosis is caused by Pasteurella multocida, a coccobacillus found in the oral, nasal and respiratory cavities of many animal species (dogs, cats, rabbits, etc.). It is one of the most prevalent commensal and opportunistic pathogens in domestic and wild animals (Wilson(43), Giordano(44)). Human infections are associated with animal exposure, usually after animal bites or scratches (Giordano(44)). Kissing, or licking of skin abrasions or mucosal surfaces by animals, can also lead to infection. Transmission between animals is through direct contact with nasal secretions (Wilson(43)).

In both animals and humans, Pasteurella multocida causes chronic or acute infections that can lead to significant morbidity, with symptoms of pneumonia, atrophic rhinitis, cellulitis, abscesses, dermonecrosis, meningitis and/or hemorrhagic septicaemia. In animals the mortality is significant, but not in humans; this is probably due to the immediate prophylactic treatment of animal bite wounds with antibiotics (Wilson(43)).

Disease in animals appears as a chronic infection of the nasal cavity, paranasal sinuses, middle ears, lacrimal and thoracic ducts of the lymph system, and lungs. Primary infections with respiratory viruses or Mycoplasma species predispose to a Pasteurella infection (Wilson(43)).

The incidence in humans is 0.19 cases per 100,000 (Nseir(45)). The prevalence in dogs and cats is 25-42% (Mohan(46)). The only known prevalence in rabbits is 29.8%, in laboratory animal facilities (Kawamoto(47)).

2.3 Zoonoses of urban wildlife

The majority of the human population lives in cities. As a result, in some countries the urban landscape encompasses more than half of the land surface, which leaves little space for wildlife in the countryside. Some species are nowadays found more in urban areas than in their native environment; they have adapted to the urban ecosystem. This is a positive aspect for biodiversity in cities. On the other hand, just like companion animals, this urban wildlife can transmit disease to humans (Dearborn(49)). In the following section, significant zoonoses of urban wildlife will be described.

A. Zoonoses of rats

The following zoonoses occur in urban rats: leptospirosis (see 2.2B) and rat bite fever.

Rat bite fever

Rat bite fever is caused by Streptobacillus moniliformis or Spirillum minus (Chafe(50)). These bacteria are part of the normal oropharyngeal flora of the rat and are thought to be present in rat populations worldwide.

Since the bacteria are part of their normal flora, rats themselves are not susceptible. In people, on the other hand, the bacteria can cause rat bite fever. Transmission occurs through the bite of an infected rat or through ingestion of contaminated food; the latter causes Haverhill fever.

The clinical symptoms are fever, chills, headache, vomiting, polyarthritis and skin rash. In Haverhill fever, pharyngitis and vomiting may be more pronounced. If not treated, S. moniliformis infection can progress to septicemia, with a mortality rate of 7-13% (Himsworth(21)).

The prevalence of Streptobacillus spp. in rats is 25% (Gaastra(51)). According to Trucksis et al (2016), rat bite fever is very rare in humans, with only a few cases occurring each year (Trucksis(52)).

B. Zoonoses of mice

The zoonotic diseases that occur in mice are: hantaviruses, lymphocytic choriomeningitis, tularemia and Q-fever (see 2.2 G).

Hantaviruses

There are different types of hantaviruses, each carried by a specific rodent host species. In Europe, three types occur: Puumala virus (PUUV), carried by the bank vole; Dobrava virus (DOBV), carried by the yellow-necked mouse; and Saaremaa virus (SAAV), carried by the striped field mouse (Heyman(7)). SAAV has been found in Estonia, Russia, South-Eastern Finland, Germany, Denmark, Slovenia and Slovakia. PUUV is very common in Finland, Northern Sweden, Estonia, the Ardennes Forest Region, parts of Germany, Slovenia and parts of European Russia. DOBV has been found in the Balkans, Russia, Germany, Estonia and Slovakia (Heyman(7)).

Hantaviruses are transmitted via direct and indirect contact. Infective particles are secreted in feces, urine and saliva (Kallio(53)).

The disease is asymptomatic in mice (Himsworth(21)); humans, on the other hand, do get symptoms. All types of hantavirus cause hemorrhagic fever with renal syndrome (HFRS), but they differ in severity. HFRS is characterized by acute onset, fever, headache, abdominal pains, backache, temporary renal insufficiency and thrombocytopenia. In DOBV infection, the extent of hemorrhages, the requirement for dialysis treatment, hypotension and case-fatality rates are much higher than in PUUV or SAAV infection. Overall mortality is very low (approximately 0.1%) (Heyman(7)).

Hantaviruses are an endemic zoonosis in Europe; tens of thousands of people get infected each year (Heyman(7)). The prevalence in mice is 9.5% (Sadkowska(54)).

Lymphocytic choriomeningitis

Lymphocytic choriomeningitis is a viral disease caused by an arenavirus (Chafe(50)). The natural reservoirs of arenaviruses are rodent species, which are asymptomatically infected (Oldstone(55)).

In humans the disease is characterized by varying signs, from inapparent infection to acute, fatal meningoencephalitis. Transmission is through mouse bites and through material contaminated with excretions and secretions of infected mice (Chafe(50)).

The virus causes little or no toxicity to the infected cells. The disease, and the associated cell and tissue injury, are caused mostly by the activity of the host's immune system: the antiviral response produces factors that act against the infected cells and damage them. Another factor is the displacement, by viral proteins, of cellular molecules that are normally attached to cellular receptors. This could result in conformational changes that make the cell membrane fragile and interfere with normal signalling events (Oldstone(55)).

The prevalence of lymphocytic choriomeningitis in humans is 1.1% (Lledó(56)). In mice, the prevalence is 2.4% (Forbes(57)).

Tularemia

Tularemia is caused by the bacterium Francisella tularensis. Only a few animal outbreaks have been reported, and so far only one outbreak in wildlife has been closely monitored (Dobay(10)). The bacterium can infect a large number of animal species. Outbreaks among mammals and humans are rare; however, they can occur when the source of infection is widely spread and/or many people or animals are exposed. Outbreaks are difficult to monitor and trace, because mostly wild rodents and lagomorphs are affected (Dobay(10)).

People get infected in five ways: ingestion, direct contact with a contaminated source, inhalation, arthropod intermediates and animal bites. In animals the route of transmission is not yet known. The research of Dobay et al (2015) suggests that tularemia can cause severe outbreaks in small rodents such as house mice. Such an outbreak exhausts itself in approximately three months, so no treatment is needed (Dobay(10)).

Tularemia is a potentially lethal disease. There are different clinical manifestations, depending on the route of infection. The ulceroglandular form is the most common and occurs after handling contaminated sources. The oropharyngeal form can be caused by ingestion of contaminated food or water. The pulmonary, typhoidal, glandular and ocular forms occur less frequently (Dobay(10), Anda(58)).

In humans, the symptoms of the glandular and ulceroglandular forms are cervical, occipital, axillary or inguinal lymphadenopathy. The symptoms of pneumonic tularemia are fever, cough and shortness of breath (Weber(59)). Clinical manifestations of the oropharyngeal form include adenopathies at the elbow, the armpit or both, cutaneous lesions, fever, malaise, chills and shivering, a painful sore throat with swollen tonsils, and enlarged cervical lymph nodes (Sahn(60), Anda(58)).

The clinical features in animals are unspecific and the pathological effects vary substantially between different animal species and geographical locations. The disease can be very acute (for example in highly susceptible species like mice), with development of sepsis, liver and spleen enlargement and pinpoint white foci in the affected organs. The subacute form can be found in moderately susceptible species like hares. The symptoms are granulomatous lesions in lungs, pericardium and kidneys.

Infected animals are usually easy to catch, moribund or even dead (Maurin(61)).

Rossow et al (2015) state that the prevalence in humans is 2% (Rossow(62)). The highest prevalence found in small mammals during an outbreak in Central Europe is 3.9% (Gurycová(63)).

C. Zoonoses of foxes

The zoonoses that can be transmitted from foxes to humans are Q-fever (see 2.2G), toxocariasis (see 2.2E) and Echinococcus multilocularis infection.

Echinococcus multilocularis

This is considered one of the most serious parasitic zoonoses in Europe. The red fox is the main definitive host. The natural intermediate hosts are voles, but many animals can act as accidental hosts, for example monkeys, humans, pigs and dogs. The larval stage of Echinococcus multilocularis causes alveolar echinococcosis (AE). The infection is widely distributed in foxes, with a prevalence of 70% in some areas; the RIVM states that the prevalence in The Netherlands is 10-13%. The prevalence in humans differs throughout Europe and follows the prevalence in foxes: where the prevalence in foxes is high, the prevalence in humans increases. However, no prevalence higher than 0.81 per 100,000 inhabitants has been reported (RIVM(29)). Foxes living in urban areas pose a threat to public health, and there is concern that this risk may rise due to the suspected geographical spread of the parasite (Conraths(64)).

In foxes the helminth colonizes the intestines but does not cause disease. In intermediate and accidental hosts, cysts are formed after oral intake of eggs excreted by foxes, which causes AE. The size, site and growth rate of the larval stage determine the symptoms. Most of the time, infection starts in the liver, causing local abnormalities; the larvae then grow invasively into other organs and blood vessels. It can take five to fifteen years before clear symptoms show (RIVM(29)). In humans AE is a very rare disease, but its incidence has increased in recent years.

D. Zoonoses of rabbits

The zoonoses that can be transmitted from rabbits to humans are: pasteurellosis (see 2.2H), tularemia (see 2.3B), Q fever (see 2.2G), dermatophytosis (see 2.2C) and cryptosporidiosis.

Cryptosporidiosis

Cryptosporidium is a protozoan and is considered the most important zoonotic pathogen causing diarrhea in humans and animals. In rabbits, Cryptosporidium cuniculus (the rabbit genotype) is the most common genotype (Zhang(65)). Two large studies in rabbits showed a prevalence between 0.0% and 0.9% (Robinson(66)).

The risks to public health of cryptosporidiosis from wildlife are poorly understood. No studies of the host range and biological features of the Cryptosporidium rabbit genotype were identified. However, human-infectious Cryptosporidium species (including Cryptosporidium parvum) have caused experimental infections in rabbits, and there is some evidence that this occurs naturally (Robinson(66)).

In humans and neonatal animals, the pathogen causes gastroenteritis, with chronic or even severe diarrhea (Zhang(65), Robinson(66)). In >98% of these cases, the disease is caused by C. hominis or C. parvum, but recently the rabbit genotype has emerged as a human pathogen. Little is known yet about this genotype, because only a few human cases have been reported (Robinson(66)). Since few isolates have been found in humans and little is known about human infection with the Cryptosporidium rabbit genotype, Robinson et al (2008) regarded this genotype as insignificant to public health, though further investigation is needed (Robinson(67)).

E. Zoonoses of hedgehogs

Hedgehogs pose a risk for a number of potential zoonotic diseases, for example microbial infections like Salmonella spp., Yersinia pseudotuberculosis and Mycobacterium marinum, and dermatophytosis.

Salmonellosis

Salmonellosis is the most important zoonotic disease in hedgehogs. The prevalence of Salmonella in hedgehogs is 18.9%. The infection can be either asymptomatic or symptomatic; hedgehogs that do show symptoms can display anorexia, diarrhea and weight loss. Humans get infected through ingestion of the bacteria, after handling a hedgehog or contact with feces (Riley(68)).

The Salmonella serotypes that are associated with hedgehogs are S. tilene and S. typhimurium (Woodward(69), Riley(68)).

Clinical manifestations in humans (mainly adults) of both serotypes involve self-limiting gastroenteritis (including headache, malaise, nausea, fever, vomiting, abdominal pain and diarrhea (Woodward(69))), but bacteremia and localized and endovascular infections may also occur (Crum Cianflone(70)). Infection with S. typhimurium and S. tilene is rare in humans, approximately 0.057 per 100,000 inhabitants (CDC(71)).

Yersinia pseudotuberculosis

No clinical symptoms of Yersinia pseudotuberculosis infection in hedgehogs are described in the literature. In humans, however, this bacterium causes gastroenteritis, characterized by a self-limiting mesenteric lymphadenitis that mimics appendicitis. Complications can occur, including erythema nodosum and reactive arthritis (Riley(68)). Since only Riley et al (2005) have reported a case concerning Y. pseudotuberculosis, no information is available yet about the prevalence in hedgehogs or humans, or about the route of transmission, although Riley et al (2005) claim that the zoonosis occurs commonly (Riley(68)).

Mycobacterium marinum

Mycobacterium marinum infection is not common in hedgehogs. The bacterium causes systemic mycobacteriosis. Its porte d'entrée is a wound or abrasion in the skin, from which it spreads systemically through the lymphatic system. This is also the way in which hedgehogs transmit the bacterium to humans: the spines of the hedgehog can cause wounds through which the bacterium enters. Symptoms in humans consist of clusters of papules or superficial nodules and can be painful (Riley(68)). No information is reported regarding the prevalence of the bacterium in hedgehogs or humans.

Dermatophytosis

Dermatophytosis has been seen in hedgehogs. The most commonly isolated dermatophyte is Trichophyton mentagrophytes var. erinacei; Microsporum spp. have also been reported. Lesions in the hedgehog are similar to those in other species: nonpruritic, dry, scaly skin with bald patches and spine loss. Hedgehogs can also be asymptomatic carriers, which is a risk for potential zoonotic transmission (Riley(68)).

In humans, Trichophyton mentagrophytes var. erinacei causes a local rash with pustules at the edges and an intensely irritating, thickened area in the centre of the lesion. This usually resolves spontaneously after 2-3 weeks (Riley(68)).

Few cases of Trichophyton mentagrophytes var. erinacei have been reported (Pierard-Franchimont(72), Schauder(73), Keymer(74)), but no prevalence is known for humans and hedgehogs.

F. Zoonoses of bats

According to Calisher et al (2009), bat viruses that are proven to cause highly pathogenic disease in humans are the rabies virus and related lyssaviruses, Nipah and Hendra viruses, and the SARS-CoV-like virus (Calisher(75)). Only the first is relevant for this review, since Nipah and Hendra do not occur in Europe (Munir(76)) and SARS is not directly transmitted to humans (Hu(77)).

Rabies virus and related lyssaviruses

The rabies virus is present in the saliva of infected animals. Accordingly, the virus is transmitted from mammals to humans through a bite (Calisher(75)).

Symptoms are similar in animals and humans. The disease starts with a prodromal stage, with non-specific symptoms consisting of fever, itching and pain near the site of the bite wound.

The furious stage follows. Clinical features are hydrophobia (violent inspiratory muscle spasms, hyperextension and anxiety after attempts to drink), hallucinations, fear, aggression, cardiac tachyarrhythmias, paralysis and coma.

The final stage is the paralytic stage. It is characterized by ascending paralysis and loss of tendon reflexes, sphincter dysfunction, bulbar/respiratory paralysis, sensory symptoms, fever, sweating, gooseflesh and fasciculation.

Untreated, the disease is fatal approximately five days after the first symptoms appear (Warrell(78)).

Lyssaviruses from bats are related to the rabies virus. There are seven lyssavirus genotypes; some of these cause disease in humans similar to rabies, while others do not cause disease. Although it is still unclear, transmission is thought to be through bites (Calisher(75)).

Since 1977, four cases of human rabies resulting from a bat bite have been reported in The Netherlands. In bats living there, the prevalence is 7% (RIVM).


Sickle-cell conditions

NORMAL HEMOGLOBIN STRUCTURE:

Hemoglobin is present in erythrocytes and is important for normal oxygen delivery to tissues. Hemoglobinopathies are disorders affecting the structure, function or production of hemoglobin.

Different hemoglobins are produced during embryonic, fetal and adult life. Each consists of a tetramer of globin polypeptide chains: a pair of α-like chains 141 amino acids long and a pair of β-like chains 146 amino acids long. The major adult hemoglobin, HbA, has the structure α2β2. HbF (α2γ2) predominates during most of gestation, and HbA2 (α2δ2) is the minor adult hemoglobin.

Each globin chain surrounds a single heme moiety, consisting of a protoporphyrin IX ring complexed with a single iron atom in the ferrous state (Fe2+). Each heme moiety can bind a single oxygen molecule; a molecule of hemoglobin can transport up to four oxygen molecules as each hemoglobin contains four heme moieties.

The amino acid sequences of various globins are highly homologous to one another and each has a highly helical secondary structure. Their globular tertiary structures cause the exterior surfaces to be rich in polar (hydrophilic) amino acids that enhance solubility, and the interior to be lined with nonpolar groups, forming a hydrophobic pocket into which heme is inserted. Numerous tight interactions (i.e., α1β1 contacts) hold the α and β chains together. The complete tetramer is held together by interfaces (i.e., α1β2 contacts) between the α-like chain of one dimer and the non-α chain of the other dimer. The hemoglobin tetramer is highly soluble, but individual globin chains are insoluble. (Unpaired globin precipitates, forming inclusions that damage the cell and can trigger apoptosis. Normal globin chain synthesis is balanced so that each newly synthesized α or non-α globin chain will have an available partner with which to pair.)

FUNCTION OF HEMOGLOBIN:

Solubility and reversible oxygen binding are the two important functions that are deranged in hemoglobinopathies. Both depend mostly on the hydrophilic surface amino acids, the hydrophobic amino acids lining the heme pocket, a key histidine in the F helix, and the amino acids forming the α1β1 and α1β2 contact points. Mutations in these strategic regions alter oxygen affinity or solubility.

The principal function of Hb is to transport oxygen and deliver it to tissue, which is represented most appropriately by the oxygen dissociation curve (ODC).

Fig: The well-known sigmoid shape of the oxygen dissociation curve (ODC), which reflects the allosteric properties of haemoglobin.

Hemoglobin binds O2 efficiently at the partial pressure of oxygen (Po2) of the alveolus, retains it in the circulation and releases it to tissues at the Po2 of tissue capillary beds. The shape of the curve is due to co-operativity between the four haem groups: when one takes up oxygen, the affinity for oxygen of the remaining haems of the tetramer increases dramatically. This is because haemoglobin can exist in two configurations, deoxy (T) and oxy (R); the T form has a lower affinity than the R form for ligands such as oxygen.

Oxygen affinity is controlled by several factors. The Bohr effect (e.g. oxygen affinity decreases with increasing CO2 tension) is the ability of hemoglobin to deliver more oxygen to tissues at low pH. The major small molecule that alters oxygen affinity in humans is 2,3-bisphosphoglycerate (2,3-BPG; formerly 2,3-DPG), which lowers oxygen affinity when bound to hemoglobin. HbA has a reasonably high affinity for 2,3-BPG, whereas HbF does not bind it, so HbF tends to have a higher oxygen affinity in vivo. Increased levels of DPG, with an associated rise in P50 (the partial pressure at which haemoglobin is 50 per cent saturated), occur in anaemia, alkalosis, hyperphosphataemia, hypoxic states and in association with a number of red cell enzyme deficiencies.
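
The sigmoid curve and the effect of a P50 shift can be made concrete with the Hill equation, a standard approximation of the ODC. The sketch below is illustrative only; the Hill coefficient (n of about 2.7) and the adult P50 of about 26.6 mmHg are typical textbook values rather than figures taken from this text.

```python
# Hill-equation sketch of the oxygen dissociation curve (illustrative only).
# Assumed textbook values: Hill coefficient n ~ 2.7 and adult P50 ~ 26.6 mmHg.

def o2_saturation(po2_mmhg, p50=26.6, n=2.7):
    """Fractional hemoglobin saturation at a given PO2 (Hill approximation)."""
    return po2_mmhg ** n / (p50 ** n + po2_mmhg ** n)

ALVEOLAR, TISSUE = 100.0, 40.0  # typical alveolar and tissue-capillary PO2, mmHg
for p50, label in [(26.6, "normal"), (31.0, "right-shifted (raised 2,3-BPG)")]:
    loading = o2_saturation(ALVEOLAR, p50)
    released = loading - o2_saturation(TISSUE, p50)
    print(f"{label}: ~{loading:.0%} saturated in the lung, ~{released:.0%} released to tissue")
```

The right-shifted curve loads slightly less oxygen in the lung but releases noticeably more to tissue, which is the physiological point of the Bohr effect and of raised 2,3-BPG.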

Thus proper oxygen transport depends on the tetrameric structure of the proteins, the proper arrangement of hydrophilic and hydrophobic amino acids and interaction with protons or 2,3-BPG.

GENETICS OF HEMOGLOBIN:

The human hemoglobins are encoded in two tightly linked gene clusters: the α-like globin genes are clustered on chromosome 16, and the β-like genes on chromosome 11. The α-like cluster consists of two α-globin genes and a single copy of the ζ gene. The non-α gene cluster consists of a single ε gene, the Gγ and Aγ fetal globin genes, and the adult δ and β genes.

DEVELOPMENTAL BIOLOGY OF HUMAN HEMOGLOBINS:

Red cells first appearing at about 6 weeks after conception contain the embryonic hemoglobins Hb Portland (ζ2γ2), Hb Gower I (ζ2ε2) and Hb Gower II (α2ε2). At 10-11 weeks, fetal hemoglobin (HbF; α2γ2) becomes predominant, and synthesis of adult hemoglobin (HbA; α2β2) begins at about 38 weeks. Fetuses and newborns therefore require α-globin but not β-globin for normal gestation. Small amounts of HbF are produced during postnatal life: a few red cell clones called F cells are progeny of a small pool of immature committed erythroid precursors (BFU-e) that retain the ability to produce HbF. Profound erythroid stresses, such as severe hemolytic anemias, bone marrow transplantation, or cancer chemotherapy, cause more of the F-potent BFU-e to be recruited. HbF levels thus tend to rise in some patients with sickle cell anemia or thalassemia. This phenomenon probably explains the ability of hydroxyurea to increase levels of HbF in adults; agents such as butyrate and histone deacetylase inhibitors can also partially activate fetal globin genes after birth.
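
As a compact restatement of this developmental succession, the sketch below encodes the named hemoglobins as chain compositions (Greek letters spelled out); it adds no information beyond the text above.

```python
# Chain composition of the human hemoglobins named above, restated as data:
# each tetramer is an alpha-like pair plus a beta-like (or embryonic/fetal) pair.
HEMOGLOBINS = {
    "Hb Portland": "zeta2 gamma2",     # embryonic
    "Hb Gower I":  "zeta2 epsilon2",   # embryonic
    "Hb Gower II": "alpha2 epsilon2",  # embryonic
    "HbF":         "alpha2 gamma2",    # fetal, predominant from ~10-11 weeks
    "HbA":         "alpha2 beta2",     # adult, synthesized from ~38 weeks
    "HbA2":        "alpha2 delta2",    # minor adult hemoglobin
}

for name, chains in HEMOGLOBINS.items():
    print(f"{name}: {chains}")
```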

HEMOGLOBINOPATHIES:

Hemoglobinopathies are disorders affecting the structure, function or production of hemoglobin. These conditions are usually inherited and range in severity from asymptomatic laboratory abnormalities to death in utero. Different forms may present as hemolytic anemia, erythrocytosis, cyanosis or vaso-occlusive stigmata.

Structural hemoglobinopathies occur when mutations alter the amino acid sequence of a globin chain, altering the physiologic properties of the variant hemoglobins and producing the characteristic clinical abnormalities. The most clinically relevant variant hemoglobins polymerize abnormally as in sickle cell anemia or exhibit altered solubility or oxygen-binding affinity.

Thalassemia syndromes arise from mutations that impair production or translation of globin mRNA, leading to deficient globin chain biosynthesis. Clinical abnormalities are attributable to the inadequate supply of hemoglobin and to imbalances in the production of individual globin chains, leading to premature destruction of erythroblasts and RBCs. Thalassemic hemoglobin variants combine features of thalassemia (e.g., abnormal globin biosynthesis) and of structural hemoglobinopathies (e.g., an abnormal amino acid sequence).

Hereditary persistence of fetal hemoglobin (HPFH) is characterized by synthesis of high levels of fetal hemoglobin in adult life. Acquired hemoglobinopathies include modifications of the hemoglobin molecule by toxins (e.g., acquired methemoglobinemia) and clonal abnormalities of hemoglobin synthesis (e.g., high levels of HbF production in preleukemia and α thalassemia in myeloproliferative disorders).

There are five major classes of hemoglobinopathies.

Classification of hemoglobinopathies:

1. Structural hemoglobinopathies: hemoglobins with altered amino acid sequences that result in deranged function or altered physical or chemical properties
   A. Abnormal hemoglobin polymerization: HbS, hemoglobin sickling
   B. Altered O2 affinity
      1. High affinity: polycythemia
      2. Low affinity: cyanosis, pseudoanemia
   C. Hemoglobins that oxidize readily
      1. Unstable hemoglobins: hemolytic anemia, jaundice
      2. M hemoglobins: methemoglobinemia, cyanosis
2. Thalassemias: defective biosynthesis of globin chains
   A. α Thalassemias
   B. β Thalassemias
   C. δβ and γδβ Thalassemias
3. Thalassemic hemoglobin variants: structurally abnormal Hb associated with a coinherited thalassemic phenotype
   A. HbE
   B. Hb Constant Spring
   C. Hb Lepore
4. Hereditary persistence of fetal hemoglobin: persistence of high levels of HbF into adult life
5. Acquired hemoglobinopathies
   A. Methemoglobin due to toxic exposures
   B. Sulfhemoglobin due to toxic exposures
   C. Carboxyhemoglobin
   D. HbH in erythroleukemia
   E. Elevated HbF in states of erythroid stress and bone marrow dysplasia

GENETICS OF SICKLE HEMOGLOBINOPATHY:

This genetic disorder is due to the mutation of a single nucleotide, from a GAG to a GTG codon on the coding strand, which is transcribed from the template strand into a GUG codon in the mRNA. In the genetic code, the GAG codon translates to glutamic acid, while the GUG codon translates to valine, at position 6 of the β chain. This substitution causes no apparent change in the secondary, tertiary, or quaternary structure of hemoglobin under conditions of normal oxygen concentration. Under conditions of low oxygen concentration, however, the deoxy form of hemoglobin exposes a hydrophobic patch on the protein between the E and F helices. The hydrophobic side chain of the valine residue at position 6 of the beta chain is able to associate with this hydrophobic patch, causing hemoglobin S molecules to aggregate and form fibrous precipitates. HbS also exhibits changes in solubility and molecular stability.
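
The codon change can be traced mechanically. The following minimal sketch hard-codes only the two relevant assignments of the standard genetic code; the helper names are illustrative.

```python
# Minimal sketch of the HbS point mutation at beta-globin codon 6.
# Only the two relevant assignments of the standard genetic code are included.
CODON_TO_AMINO_ACID = {"GAG": "glutamic acid", "GUG": "valine"}

def coding_strand_to_mrna(codon):
    """The mRNA codon matches the coding strand, with T replaced by U."""
    return codon.replace("T", "U")

for dna_codon, variant in [("GAG", "normal beta-6"), ("GTG", "HbS beta-6")]:
    mrna = coding_strand_to_mrna(dna_codon)
    print(f"DNA {dna_codon} -> mRNA {mrna} -> {CODON_TO_AMINO_ACID[mrna]} ({variant})")
```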

These properties are responsible for the profound clinical expressions of the sickling syndromes.

HbSS disease or sickle cell anemia (the most common form) – Homozygote for the S globin, usually with a severe or moderately severe phenotype and the shortest survival
HbS/β0 thalassemia – Double heterozygote for HbS and β0 thalassemia; clinically indistinguishable from sickle cell anemia (SCA)
HbS/β+ thalassemia – Mild-to-moderate severity, with variability in different ethnicities
HbSC disease – Double heterozygote for HbS and HbC, characterized by moderate clinical severity
HbS/hereditary persistence of fetal Hb (S/HPFH) – Very mild or asymptomatic phenotype
HbS/HbE syndrome – Very rare, with a phenotype usually similar to HbS/β+ thalassemia
Rare combinations of HbS with other abnormal hemoglobins, such as HbD Los Angeles, G-Philadelphia and HbO Arab

Sickle-cell conditions have an autosomal recessive pattern of inheritance. The types of hemoglobin a person makes in the red blood cells depend on which hemoglobin genes are inherited from his or her parents. If one parent has sickle-cell anaemia and the other has sickle-cell trait, the child has a 50% chance of having sickle-cell disease and a 50% chance of having sickle-cell trait. When both parents have sickle-cell trait, a child has a 25% chance of sickle-cell disease, a 25% chance of not carrying any sickle-cell allele, and a 50% chance of the heterozygous condition.
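
These percentages follow directly from a Punnett-square enumeration, which the small sketch below verifies; the allele symbols are illustrative.

```python
# Punnett-square check of the inheritance figures quoted above.
# Allele symbols are illustrative: "S" = sickle allele, "A" = normal allele.
from collections import Counter
from itertools import product

def offspring_odds(parent1, parent2):
    """Probability of each child genotype from the four equally likely combinations."""
    counts = Counter("".join(sorted(a + b)) for a, b in product(parent1, parent2))
    return {genotype: n / 4 for genotype, n in counts.items()}

print(offspring_odds("SS", "AS"))  # parent with SCA x parent with trait
# -> {'AS': 0.5, 'SS': 0.5}        # 50% trait, 50% disease
print(offspring_odds("AS", "AS"))  # both parents with trait
# -> {'AA': 0.25, 'AS': 0.5, 'SS': 0.25}
```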

The allele responsible for sickle-cell anemia can be found on the short arm of chromosome 11, more specifically at 11p15.5. A person who receives the defective gene from both father and mother develops the disease; a person who receives one defective and one healthy allele remains healthy but can pass on the disease and is known as a carrier or heterozygote. Several sickle syndromes occur as the result of inheritance of HbS from one parent and another hemoglobinopathy, such as β thalassemia or HbC (α2β2 6 Glu→Lys), from the other parent. The prototype disease, sickle cell anemia, is the homozygous state for HbS.

PATHOPHYSIOLOGY:

The sickle cell syndromes are caused by a mutation in the β-globin gene that changes the sixth amino acid from glutamic acid to valine. HbS (α2β2 6 Glu→Val) polymerizes reversibly when deoxygenated to form a gelatinous network of fibrous polymers that stiffen the RBC membrane, increase viscosity, and cause dehydration due to potassium leakage and calcium influx. These changes also produce the sickle shape. The loss of red blood cell elasticity is central to the pathophysiology of sickle-cell disease. Sickled cells lose the flexibility needed to traverse small capillaries and possess altered 'sticky' membranes that are abnormally adherent to the endothelium of small venules.

Repeated episodes of sickling damage the cell membrane and decrease the cell’s elasticity. These cells fail to return to normal shape when normal oxygen tension is restored. As a consequence, these rigid blood cells are unable to deform as they pass through narrow capillaries, leading to vessel occlusion and ischaemia.

These abnormalities stimulate unpredictable episodes of microvascular vasoocclusion and premature RBC destruction (hemolytic anemia). The rigid adherent cells clog small capillaries and venules, causing tissue ischemia, acute pain, and gradual end-organ damage. This venoocclusive component usually influences the clinical course.

The actual anaemia of the illness is caused by hemolysis, which occurs because the spleen destroys the abnormal RBCs on detecting their altered shape. Although the bone marrow attempts to compensate by creating new red cells, it does not match the rate of destruction: healthy red blood cells typically function for 90-120 days, but sickled cells last only 10-20 days.

Clinical Manifestations of Sickle Cell Anemia:

Patients with sickling syndromes suffer from hemolytic anemia, with hematocrits from 15 to 30% and significant reticulocytosis. Anemia was once thought to exert protective effects against vasoocclusion by reducing blood viscosity; the role of adhesive reticulocytes in vasoocclusion might account for these paradoxical effects.

Granulocytosis is common. The white count can fluctuate substantially and unpredictably during and between painful crises, infectious episodes, and other intercurrent illnesses.

Vasoocclusion causes protean manifestations, including episodes of ischemic pain (i.e., painful crises) and ischemic malfunction or frank infarction in the spleen, central nervous system, bones, joints, liver, kidneys and lungs.

Syndromes caused by sickle hemoglobinopathy:

Painful crises: Intermittent episodes of vasoocclusion in connective and musculoskeletal structures produce ischemia, manifested by acute pain and tenderness, fever, tachycardia and anxiety. These recurrent episodes are the most common clinical manifestation of sickle cell anemia. Their frequency and severity vary greatly. Pain can develop almost anywhere in the body and may last from a few hours to 2 weeks.

Repeated crises requiring hospitalization (>3 episodes per year) correlate with reduced survival in adult life, suggesting that these episodes are associated with accumulation of chronic end-organ damage. Provocative factors include infection, fever, excessive exercise, anxiety, abrupt changes in temperature, hypoxia, or hypertonic dyes.

Acute chest syndrome: A distinctive manifestation characterized by chest pain, tachypnea, fever, cough, and arterial oxygen desaturation. It can mimic pneumonia, pulmonary emboli, bone marrow infarction and embolism, myocardial ischemia, or lung infarction. Acute chest syndrome is thought to reflect in situ sickling within the lung, producing pain and temporary pulmonary dysfunction. Pulmonary infarction and pneumonia are the most common underlying or concomitant conditions in patients with this syndrome. Repeated episodes of acute chest pain correlate with reduced survival. Acutely, a reduction in arterial oxygen saturation is especially ominous because it promotes sickling on a massive scale. Repeated acute or subacute pulmonary crises lead to pulmonary hypertension and cor pulmonale, an increasingly common cause of death in these patients.

Aplastic crisis: A serious complication is the aplastic crisis, caused by infection with parvovirus B19 (B19V). This virus causes fifth disease, a normally benign childhood disorder associated with fever, malaise, and a mild rash. It infects RBC progenitors in bone marrow, resulting in impaired cell division for a few days. Healthy people experience, at most, a slight drop in hematocrit, since the half-life of normal erythrocytes in the circulation is 40-60 days. In people with SCD, however, the RBC lifespan is greatly shortened (usually 10-20 days), and a very rapid drop in Hb occurs. The condition is self-limited, with bone marrow recovery occurring in 7-10 days, followed by brisk reticulocytosis.

CNS sickle vasculopathy: Chronic, subacute central nervous system damage in the absence of an overt stroke is a distressingly common phenomenon beginning in early childhood. Stroke affects 30% of children and 11% of patients by 20 years; it may recur, and is usually ischemic in children and hemorrhagic in adults.

Modern functional imaging techniques have indicated circulatory dysfunction of the CNS; these changes correlate with cognitive and behavioral abnormalities in children and young adults. It is important to be aware of these changes because they can complicate clinical management or be misinterpreted as 'difficult patient' behaviors.

Splenic sequestration crisis: The spleen enlarges in the latter part of the first year of life in children with SCD. Occasionally, the spleen undergoes a sudden, very painful enlargement due to pooling of large numbers of sickled cells, a phenomenon known as splenic sequestration crisis. Over time, the spleen becomes fibrotic and shrinks, causing autosplenectomy. In HbSC disease, splenomegaly may persist into adulthood due to ongoing hemolysis under the influence of persistent fetal hemoglobin.

Acute venous obstruction of the spleen, a rare occurrence in early childhood, may require emergency transfusion and/or splenectomy to prevent trapping of the entire arterial output in the obstructed spleen. Repeated microinfarction can destroy tissues having microvascular beds; thus, splenic function is frequently lost within the first 18-36 months of life, causing susceptibility to infection, particularly by pneumococci.

Infections: Life-threatening bacterial infections are a major cause of morbidity and mortality in patients with SCD. Recurrent vaso-occlusion induces splenic infarctions and consequent autosplenectomy, predisposing to severe infections with encapsulated organisms (e.g., Haemophilus influenzae, Streptococcus pneumoniae).

Cholelithiasis: Cholelithiasis is common in children with SCD, as chronic hemolysis with hyperbilirubinemia favors the formation of bile stones. It may be asymptomatic or result in acute cholecystitis, requiring surgical intervention. The liver may also become involved, and cholecystitis or common bile duct obstruction can occur. A child with cholecystitis presents with right upper quadrant pain, especially after fatty food. Common bile duct blockage is suspected when a child presents with right upper quadrant pain and dramatically elevated conjugated hyperbilirubinemia.

Leg ulcers: Leg ulcers are a chronic painful problem. They result from minor injury to the area around the malleoli. Because of relatively poor circulation, compounded by sickling and microinfarcts, healing is delayed and infection occurs frequently.

Eye manifestations: Occlusion of retinal vessels can produce hemorrhage, neovascularization, and eventual retinal detachment.

Renal manifestations: Renal manifestations include impaired urinary concentrating ability, defects of urinary acidification, defects of potassium excretion, and a progressive decrease in glomerular filtration rate with advancing age. Recurrent hematuria, proteinuria, renal papillary necrosis and end-stage renal disease (ESRD) are all well recognized.

Renal papillary necrosis invariably produces isosthenuria. More widespread renal necrosis leads to renal failure in adults, a common late cause of death.

Bone manifestation: Bone and joint ischemia can lead to aseptic necrosis, common in the femoral or humeral heads; chronic arthropathy; and unusual susceptibility to osteomyelitis, which may be caused by organisms, such as Salmonella, rarely encountered in other settings.

The hand-foot syndrome (dactylitis) is caused by painful infarcts of the digits.

Pregnancy in SCD: Pregnancy represents a special area of concern. There is a high rate of fetal loss due to spontaneous abortion. Placenta previa and abruption are common, due to hypoxia and placental infarction. At birth, the infant often is premature or has low birth weight.

Other features: A particularly painful complication in males is priapism, due to infarction of the penile venous outflow tracts; permanent impotence may also occur. Chronic lower leg ulcers probably arise from ischemia and superinfection in the distal circulation.

Sickle cell syndromes are remarkable for their clinical heterogeneity. Some patients remain virtually asymptomatic into or even through adult life, while others suffer repeated crises requiring hospitalization from early childhood. Patients with sickle thalassemia and sickle-HbE tend to have similar, slightly milder symptoms, perhaps because of the ameliorating effects of the production of other hemoglobins within the RBC.

Clinical Manifestations of Sickle Cell Trait:

Sickle cell trait is often asymptomatic. Anemia and painful crises are rare. An uncommon but highly distinctive symptom is painless hematuria, often occurring in adolescent males, probably due to papillary necrosis. Isosthenuria is a more common manifestation of the same process. Sloughing of papillae with ureteral obstruction has also been seen, as have isolated cases of massive sickling or sudden death due to exposure to high altitudes or extremes of exercise and dehydration.

Pulmonary hypertension in sickle hemoglobinopathy:

In recent years, PAH, a proliferative vascular disease of the lung, has been recognized as a major complication of and an independent correlate with death among adults with SCD. Pulmonary hypertension is defined as a mean pulmonary artery pressure >25 mmHg, and includes pulmonary artery hypertension, pulmonary venous hypertension or a combination of both. The etiology is multifactorial, including hemolysis, hypoxemia, thromboembolism, chronically high cardiac output, and chronic liver disease. The clinical presentation is characterized by dyspnea, chest pain, and syncope. It is important to note that a high cardiac output can itself elevate pulmonary artery pressure, adding to the complex and multifactorial pathophysiology of PHT in sickle cell disease. Left untreated, the disease carries a high mortality rate, with the most common cause of death being decompensated right heart failure.

Prevalence and prognosis:

Echocardiographic screening studies have suggested that the prevalence of hemoglobinopathy-associated PAH is much higher than previously known. In SCD, approximately one-third of adult patients have an elevated tricuspid regurgitant jet velocity (TRV) of 2.5 m/s or higher, a threshold that correlates in right heart catheterization studies with a pulmonary artery systolic pressure of at least 30 mmHg. Even though this threshold represents quite mild pulmonary hypertension, SCD patients with a TRV above it have a 9- to 10-fold higher risk of early mortality than those with a lower TRV. It appears that the baseline compromised oxygen delivery and co-morbid organ dysfunction of SCD diminish the physiological reserve to tolerate even modest pulmonary arterial pressures.
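
The correspondence between a TRV of 2.5 m/s and a systolic pressure of about 30 mmHg follows from the simplified Bernoulli equation used in Doppler echocardiography. The sketch below assumes a typical right atrial pressure of 5 mmHg, a value not stated in this text.

```python
# Simplified Bernoulli estimate used in Doppler echocardiography:
#   PASP (mmHg) = 4 * TRV^2 + right atrial pressure (RAP).
# An RAP of 5 mmHg is an assumed typical value, not a figure from this text.

def estimate_pasp(trv_m_per_s, rap_mmhg=5.0):
    """Pulmonary artery systolic pressure estimated from the TR jet velocity."""
    return 4.0 * trv_m_per_s ** 2 + rap_mmhg

print(estimate_pasp(2.5))  # 4 * 6.25 + 5 = 30.0 mmHg, the threshold cited above
```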

Pathogenesis:

Different hemolytic anemias seem to involve common mechanisms for development of PAH. These processes probably include hemolysis, causing endothelial dysfunction, oxidative and inflammatory stress, chronic hypoxemia, chronic thromboembolism, chronic liver disease, iron overload, and asplenia.

Hemolysis results in the release of hemoglobin into plasma, where it reacts and consumes nitric oxide (NO) causing a state of resistance to NO-dependent vasodilatory effects. Hemolysis also causes the release of arginase into plasma, which decreases the concentration of arginine, substrate for the synthesis of NO. Other effects associated with hemolysis that can contribute to the pathogenesis of pulmonary hypertension are increased cellular expression of endothelin, production of free radicals, platelet activation, and increased expression of endothelial adhesion mediating molecules.

Previous studies suggest that splenectomy (surgical or functional) is a risk factor for the development of pulmonary hypertension, especially in patients with hemolytic anemias. It is speculated that the loss of the spleen increases the circulation of platelet mediators and senescent erythrocytes that result in platelet activation (promoting endothelial adhesion and thrombosis in the pulmonary vascular bed), and possibly stimulates the increase in the intravascular hemolysis rate.

Vasoconstriction, vascular proliferation, thrombosis, and inflammation appear to underlie the development of PAH. In long-standing PH, intimal proliferation and fibrosis, medial hypertrophy, and in situ thrombosis characterize the pathologic findings in the pulmonary vasculature. Vascular remodeling at earlier stages may be confined to the small pulmonary arteries. As the disease advances, intimal proliferation and pathologic remodeling progress, resulting in decreased compliance and increased elastance of the pulmonary vasculature.

The outcome is a progressive increase in the right ventricular afterload or total pulmonary vascular resistance (PVR) and, thus, right ventricular work.

Chronic pulmonary involvement due to repeated episodes of acute thoracic syndrome can lead to pulmonary fibrosis and chronic hypoxemia, which can eventually lead to the development of pulmonary hypertension.

Coagulation disorders, such as low levels of protein C, low levels of protein S, high levels of D-dimers and increased activity of tissue factor, occur in patients with sickle cell anemia. This hypercoagulable state can cause in situ thrombosis or pulmonary thromboembolism, which occurs in patients with sickle cell anemia and other hemolytic anemias.

Clinical manifestations:

On examination, there may be evidence of right ventricular failure with elevated jugular venous pressure, lower extremity edema, and ascites. The cardiovascular examination may reveal an accentuated P2 component of the second heart sound, a right-sided S3 or S4, and a holosystolic tricuspid regurgitant murmur. It is also important to seek signs of the diseases that are often concurrent with PH: clubbing may be seen in some chronic lung diseases, sclerodactyly and telangiectasia may signify scleroderma, and crackles and systemic hypertension may be clues to left-sided systolic or diastolic heart failure.

Diagnostic evaluation:

The diagnosis of pulmonary hypertension in patients with sickle cell anemia is typically difficult. Dyspnea on exertion, the symptom most typically associated with pulmonary hypertension, is also very common in anemic patients. Other disorders with similar symptomatology, such as left heart failure or pulmonary fibrosis, frequently occur in patients with sickle cell anemia. Patients with pulmonary hypertension are often older, have higher systemic blood pressure, more severe hemolytic anemia, lower peripheral oxygen saturation, worse renal function, impaired liver function and a higher number of red blood cell transfusions than do patients with sickle cell anemia and normal pulmonary pressure.

The diagnostic evaluation of patients with hemoglobinopathies and suspected of having pulmonary hypertension should follow the same guidelines established for the investigation of patients with other causes of pulmonary hypertension.

Echocardiography: Echocardiography is important for the diagnosis of PAH and often essential for determining the cause. All forms of PAH may demonstrate a hypertrophied and dilated right ventricle with elevated estimated pulmonary artery systolic pressure. Important additional information can be obtained about specific etiologies such as valvular disease, left ventricular systolic and diastolic function, intracardiac shunts, and other cardiac diseases.

An echocardiogram is a screening test, whereas invasive hemodynamic monitoring is the gold standard for diagnosis and assessment of disease severity.

Pulmonary artery (PA) systolic pressure (PASP) can be estimated by Doppler echocardiography, utilizing the tricuspid regurgitant velocity (TRV). Increased TRV is estimated to be present in approximately one-third of adults with SCD and is associated with early mortality. In the more severe cases, increased TRV is associated with histopathologic changes similar to atherosclerosis such as plexogenic changes and hyperplasia of the pulmonary arterial intima and media.

The cardiopulmonary exercise test (CPET): This test may help to identify a true physiologic limitation and differentiate between cardiac and pulmonary causes of dyspnea, but it can only be performed if the patient has reasonable functional capacity. If this test is normal, there is no indication for a right heart catheterization.

Right heart catheterization: If the patient has a cardiovascular limitation to exercise, a right heart catheterization should be performed. Right heart catheterization with pulmonary vasodilator testing remains the gold standard both to establish the diagnosis of PH and to enable selection of appropriate medical therapy. The definition of precapillary PH or PAH requires (1) an increased mean pulmonary artery pressure (mPAP ≥25 mmHg); (2) a pulmonary capillary wedge pressure (PCWP), left atrial pressure, or left ventricular end-diastolic pressure ≤15 mmHg; and (3) PVR >3 Wood units. Postcapillary PH is differentiated from precapillary PH by a PCWP >15 mmHg; it is further differentiated into passive, based on a transpulmonary gradient <12 mmHg, or reactive, based on a transpulmonary gradient >12 mmHg and an increased PVR. In either case, the cardiac output may be normal or reduced. A diagnosis suggested by the echocardiogram or the cardiopulmonary exercise test (CPET) must therefore be confirmed by catheterization.
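
As a summary of these hemodynamic definitions, the decision-rule sketch below encodes exactly the thresholds quoted above; it is an illustration of the logic, not a clinical tool.

```python
# Decision-rule sketch of the hemodynamic definitions quoted above; thresholds
# are exactly those in the text, the function itself is illustrative only.

def classify_ph(mpap, pcwp, pvr_wood):
    """Classify PH from right-heart-catheterization values (mmHg, Wood units)."""
    if mpap < 25:
        return "no PH by the mPAP criterion"
    tpg = mpap - pcwp  # transpulmonary gradient, mmHg
    if pcwp <= 15:
        if pvr_wood > 3:
            return "precapillary PH (PAH definition met)"
        return "elevated mPAP but PVR <= 3 Wood units"
    if tpg < 12:
        return "postcapillary PH, passive"
    return "postcapillary PH, reactive (TPG > 12 mmHg with increased PVR)"

print(classify_ph(mpap=30, pcwp=10, pvr_wood=4))  # precapillary PH (PAH definition met)
print(classify_ph(mpap=28, pcwp=18, pvr_wood=2))  # postcapillary PH, passive
```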

Chest imaging and lung function tests: These are essential because lung disease is an important cause of PH. Signs of PH that may be evident on chest x-ray include enlargement of the central pulmonary arteries associated with 'vascular pruning,' a relative paucity of peripheral vessels. Cardiomegaly, with specific evidence of right atrial and ventricular enlargement, may be present. The chest x-ray may also demonstrate significant interstitial lung disease or suggest hyperinflation from obstructive lung disease, which may be the underlying cause of or a contributor to the development of PH.

High-resolution computed tomography (CT): Classic findings of PH on CT include those found on chest x-ray: enlarged pulmonary arteries, peripheral pruning of the small vessels, and enlarged right ventricle and atrium. High-resolution CT may also show signs of venous congestion including centrilobular ground-glass infiltrate and thickened septal lines. In the absence of left heart disease, these findings suggest pulmonary veno-occlusive disease, a rare cause of PAH that can be quite challenging to diagnose.

CT angiograms: Commonly used to evaluate acute thromboembolic disease and have demonstrated excellent sensitivity and specificity for that purpose.

Ventilation-perfusion scanning: This is done for screening because of its high sensitivity and its role in qualifying patients for surgical intervention. A negative scan virtually rules out CTEPH, whereas some cases may be missed through the use of CT angiograms alone.

Pulmonary function tests: An isolated reduction in DLco is the classic finding in PAH; results of pulmonary function tests may also suggest restrictive or obstructive lung diseases as the cause of dyspnea or PH.

Evaluation of symptoms and functional capacity (6 Min walk test): Although the 6-minute walk test has not been validated in patients with hemoglobinopathies, preliminary data suggest that this test correlates well with maximal oxygen uptake and with the severity of pulmonary hypertension in patients with sickle cell anemia. In addition, in these patients, the distance covered on the 6-minute walk test significantly improves with the treatment of pulmonary hypertension, which suggests that it can be used in this population.

DYSLIPIDEMIA IN SICKLE HEMOGLOBINOPATHY:

Disorders of lipoprotein metabolism are known as 'dyslipidemias.' Dyslipidemias are generally characterized clinically by increased plasma levels of cholesterol, triglycerides, or both, accompanied by reduced levels of HDL cholesterol. Most patients with dyslipidemia are at increased risk for ASCVD, which is the primary reason for making the diagnosis, as intervention may reduce this risk. Patients with elevated levels of triglycerides may be at risk for acute pancreatitis and require intervention to reduce this risk.

Although hundreds of proteins affect lipoprotein metabolism and may interact to produce dyslipidemia in an individual patient, there are a limited number of discrete 'nodes' that regulate lipoprotein metabolism. These include:

(1) assembly and secretion of triglyceride-rich VLDLs by the liver;

(2) lipolysis of triglyceride-rich lipoproteins by LPL;

(3) receptor-mediated uptake of apoB-containing lipoproteins by the liver;

(4) cellular cholesterol metabolism in the hepatocyte and the enterocyte; and

(5) neutral lipid transfer and phospholipid hydrolysis in the plasma.

Hypocholesterolemia and, to a lesser extent, hypertriglyceridemia have been documented in SCD cohorts worldwide for over 40 years, yet the mechanistic basis and physiological ramifications of these altered lipid levels have yet to be fully elucidated. Cholesterol (TC, HDL-C and LDL-C) levels decrease, and triglyceride levels increase, in relation to the severity of anemia. While this is not true for cholesterol levels, triglyceride levels show a strong correlation with markers of severity of hemolysis, endothelial activation, and pulmonary hypertension.

Decreased TC and LDL-C in SCD has been documented in virtually every study that examined lipids in SCD adults (el-Hazmi, et al 1987, el-Hazmi, et al 1995, Marzouki and Khoja 2003, Sasaki, et al 1983, Shores, et al 2003, Stone, et al 1990, Westerman 1975), with slightly more variable results in SCD children. Although it might be hypothesized that SCD hypocholesterolemia results from increased cholesterol utilization during the increased erythropoiesis of SCD, cholesterol is largely conserved through the enterohepatic circulation, at least in healthy individuals, and biogenesis of new RBC membranes would likely use recycled cholesterol from the hemolyzed RBCs. Westerman demonstrated that hypocholesterolemia was not due merely to increased RBC synthesis by showing that it is present in both hemolytic and non-hemolytic anemia (Westerman 1975). He also reported that serum cholesterol was proportional to the hematocrit, suggesting that serum cholesterol may be in equilibrium with the cholesterol reservoir of the total red cell mass (Westerman 1975). Consistent with such equilibration, tritiated cholesterol incorporated into sickled erythrocytes is rapidly exchanged with plasma lipoproteins (Ngogang, et al 1989). Thus, low plasma cholesterol appears to be a consequence of anemia itself rather than of increased RBC production (Westerman 1975).

Total cholesterol, in particular LDL-C, has a well-established role in atherosclerosis. The low levels of LDL-C in SCD are consistent with the low levels of total cholesterol and the virtual absence of atherosclerosis among SCD patients. Decreased HDL-C in SCD has also been documented in some previous studies(Sasaki, et al 1983, Stone, et al 1990). As in lipid studies for other disorders in which HDL-C is variably low, potential reasons for inconsistencies between studies include differences in age, diet, weight, smoking, gender, small sample sizes, different ranges of disease severity, and other diseases and treatments (Choy and Sattar 2009, Gotto A 2003). Decreased HDL-C and apoA-I is a known risk factor for endothelial dysfunction in the general population and in SCD, a potential contributor in SCD to PH, although the latter effect size might be small (Yuditskaya, et al 2009).

In addition, triglyceride levels have been reported to increase during crisis. Why is increased triglyceride, but not cholesterol, in serum associated with vascular dysfunction and pulmonary hypertension? Studies in atherosclerosis have firmly established that lipolysis of oxidized LDL in particular results in vascular dysfunction. Lipolysis of triglycerides present in triglyceride-rich lipoproteins releases neutral and oxidized free fatty acids that induce endothelial cell inflammation (Wang, et al 2009). Many oxidized fatty acids are more damaging to the endothelium than their non-oxidized precursors; for example, 13-hydroxy octadecadienoic acid (13-HODE) is a more potent inducer of ROS activity in HAECs than linoleate, the nonoxidized precursor of 13-HODE (Wang, et al 2009). Lipolytic generation of arachidonic acid, eicosanoids, and inflammatory molecules leading to vascular dysfunction is a well-established phenomenon (Boyanovsky and Webb 2009). Although LDL-C levels are decreased in SCD patients, LDL from SCD patients is more susceptible to oxidation and cytotoxicity to endothelium (Belcher, et al 1999), and an unfavorable plasma fatty acid composition has been associated with clinical severity of SCD (Ren, et al 2006). Lipolysis of phospholipids in lipoproteins or cell membranes by secretory phospholipase A2 (sPLA2) family members releases similarly harmful fatty acids, particularly in an oxidative environment (Boyanovsky and Webb 2009), and in fact selective PLA2 inhibitors are currently under development as potential therapeutic agents for atherosclerotic cardiovascular disease (Rosenson 2009). Finally, sPLA2 activity has been linked to lung disease in SCD: sPLA2 is elevated in acute chest syndrome of SCD and, in conjunction with fever, preliminarily appears to be a good biomarker for the diagnosis, prediction and prevention of acute chest syndrome (Styles, et al 2000). The deleterious effects of phospholipid hydrolysis on lung vasculature predict similar deleterious effects of triglyceride hydrolysis, particularly in the oxidatively stressed environment of SCD.

Elevated triglycerides have been documented in autoimmune inflammatory diseases with increased risk of vascular dysfunction and pulmonary hypertension, including systemic lupus erythematosus, scleroderma, rheumatoid arthritis, and mixed connective tissue diseases (Choy and Sattar 2009, Galie, et al 2005). In fact, triglyceride concentration is a stronger predictor of stroke than LDL-C or TC (Amarenco and Labreuche 2009). Even in healthy control subjects, a high-fat meal induces oxidative stress and inflammation, resulting in endothelial dysfunction and vasoconstriction (O'Keefe, et al 2008). Perhaps high levels of plasma triglycerides promote vascular dysfunction, with the clinical outcome of vasculopathy mainly in the coronary and cerebral arteries in the general population, and with more targeting of the pulmonary vascular bed in SCD and autoimmune diseases.

The mechanisms leading to hypocholesterolemia and hypertriglyceridemia in the plasma or serum of SCD patients are not completely understood. In normal individuals, triglyceride levels are determined to a significant degree by body weight, diet and physical exercise, as well as concurrent diabetes. Diet and physical exercise very likely impact body weight and triglyceride levels in SCD patients also. These findings indicate that standard risk factors for high triglycerides are also relevant to SCD patients. Mechanisms of SCD-specific risk factors for elevated plasma triglycerides are not as clear. RBCs do not have de novo lipid synthesis (Kuypers 2008). In SCD the rate of triglyceride synthesis from glycerol is elevated up to 4-fold in sickled reticulocytes (Lane, et al 1976), but SCD patients have defects in postabsorptive plasma homeostasis of fatty acids (Buchowski, et al 2007). Lipoproteins and albumin in plasma can contribute fatty acids to red blood cells for incorporation into membrane phospholipids (Kuypers 2008), but RBC membranes are not triglyceride-rich, and contributions of RBCs to plasma triglyceride levels have not been described. Interestingly, chronic intermittent or stable hypoxia from mere exposure to high altitude, with no underlying disease, is sufficient to increase triglyceride levels in healthy subjects (Siques, et al 2007). Thus, it has also been suggested that hypoxia in SCD may contribute at least partially to the observed increase in serum triglyceride. Finally, there is a known link between low cholesterol and increased triglycerides in any primate acute phase response, such as infection and inflammation (Khovidhunkit, et al 2004). Perhaps because of their chronic hemolysis, SCD patients have a low-level acute phase response, which is also consistent with the other inflammatory markers. Further studies are required to elucidate the mechanisms leading to hypocholesterolemia and hypertriglyceridemia in SCD.

Pulmonary hypertension is a disease of the vasculature that shows many similarities with the vascular dysfunction that occurs in coronary atherosclerosis (Kato and Gladwin 2008). Both involve proliferative vascular smooth muscle cells, just in different vascular beds. Both show an impaired nitric oxide axis, increased oxidant stress, and vascular dysfunction. Most importantly, serum triglyceride levels, previously linked to vascular dysfunction, are shown here to correlate with NT-proBNP and TRV and thus with pulmonary hypertension. Moreover, triglyceride levels are predictive of TRV independent of systolic blood pressure, low transferrin or increased lactate dehydrogenase.

PAH in SCD is also characterized by oxidant stress, but in SCD patients plasma total cholesterol (TC) and low density lipoprotein cholesterol (LDL-C) are low. There have been some reports of low HDL cholesterol (HDL-C) [17,18] and increased triglyceride in SCD patients, features widely recognized as important contributory factors in cardiovascular disease. These findings, and the therapeutic potential to modulate serum lipids with several commonly used drugs, prompted us to investigate in greater detail the serum lipid profile in patients with sickle hemoglobinopathy (SH) coming to our hospital and its possible relationship to vasculopathic complications such as PAH.

essay-2016-09-27-000BaY

Gender and Caste – The Cry for Identity of Women

INTRODUCTION

‘Bodies are not just biological phenomena but a complex social creation onto which meanings have been variously composed and imposed according to time and space’. These social creations differentiate the two biological persons into Man and Woman, and meanings are imposed on their qualities on the basis of gender, which defines them as He and She.

The question then arises: a woman, who is she? According to me, a woman is one who is empowered, enlightened, enthusiastic and energetic. A woman is all about sharing. She is an exceptional personality who encourages and embraces. If a woman is considered a mark of patience and courage, then why, even today, is there a lack of identity in her personality? She is subordinated to man and often discriminated against on the basis of gender.

The entire life of a woman revolves around patriarchal existence: she is dominated by her father in childhood, in the next phase of her life she is dominated by her husband, and in the later phase by her son, which leaves no space for her own independence.

The psychological and physical identity of a woman is defined through the role and control of men: the terrible triad of father-husband-son. The boundary of women is always restrained by male dominance. Gender discrimination is not only a historical concept; it still exists in contemporary Indian society.

Indian society in every part of its existence experiences ferocious gender conflict, projected every day in the newspapers, on news channels, and even on the streets. The horror of patriarchal domination exists in every corner of Indian society. The status of Indian women has been declining over the centuries.

Turning the pages of history, in pre-Aryan India God was female and life was represented in the form of mother Earth. People worshipped the mother Goddess as a symbol of fertility. The Shakti cult of Hinduism regards woman as the source and embodiment of cosmic power and energy. Woman power can also be seen in Goddess Durga, who lured her husband Shiva from asceticism.

The religious and social conditions changed abruptly when the Aryan Brahmins eliminated the Shakti cult and power was given into the hands of the male group. They considered the male deities to be the husbands of the female goddesses, placing dominance in the hands of the male. Marriage involved male control over female sexuality. Even the identity of the mother goddess was dominated by the male gods. As Mrinal Pande writes, ‘to control women, it becomes necessary to control the womb and so Hinduism, Judaism, Islam and Christianity have all stipulated, at one time or another, that the whole area of reproductive activity must be firmly monitored by law and lawmakers’.

The issue of identity crisis for a woman

The identity of a woman is erased as she becomes a mere reproductive machine ruled and dominated by male laws. From the time she is born she is taught that one day she has to get married and go to her husband’s house. Thus she belongs neither to her own house nor to her husband’s house, leaving a mark on her identity. The Vedic times, however, proved to be a boon in the lives of women, as they enjoyed freedom of choice with respect to husbands and could marry at a mature age. Widows could remarry and women could divorce.

The segregation of women continued to raise the same question of identity, as the Chandogya Upanishad, a religious text of the pre-Buddhist era, contains a prayer of spiritual aspirants which says ‘May I never, ever, enter that reddish, white, toothless, slippery and slimy yoni of the woman’. During this time, control over women included reclusion and exclusion, and they were even denied education. Women and shudras were treated as the minority class in society. Rights and privileges given to women were cancelled and girls were married at a very early age. Caste structure also played a great role, as women were now discriminated against within their own caste on the basis of gender.

According to Liddle, women were controlled in two respects: firstly, they were disinherited from ancestral property and the economy and were expected to remain within the domestic sphere, known as purdah. The second aspect was the control of men over female sexuality. The death rituals of family members were performed by the sons, and no daughter had the right to light her parents’ funeral pyre.

A stifling patriarchal shadow hangs over the lives of women throughout India. Across all regions, castes and classes of society, women are victims of its oppressive, controlling effects. Those subjected to the heaviest burden of discrimination are from the Dalit or “Scheduled Castes”, referred to in less liberal democratic times as the “Untouchables”. The name may have been banned, but pervasive negative attitudes of mind remain, as do the appalling levels of abuse and servitude experienced by Dalit women. They encounter multiple levels of discrimination and exploitation, much of which is feudal, degrading, horrifyingly violent and utterly callous. The divisive caste system, in operation throughout India, “Old” and “New”, together with discriminatory gender attitudes, sits at the heart of the colossal human rights abuses experienced by Dalit or “outcaste” women.

The lower castes are segregated from other members of the community: prohibited from eating with “higher” castes, from using village wells and ponds, from entering village temples and higher-caste houses, from wearing shoes or even holding umbrellas in front of the higher castes; they are forced to sit apart and use different crockery in restaurants, barred from riding a bicycle within their village, and made to bury their dead in a separate cemetery. They frequently face eviction from their land by higher “dominant” castes, forcing them to live on the outskirts of villages, often on barren land.

This plethora of prejudice amounts to apartheid, and the time has come, long overdue, for the “democratic” government of India to enforce existing legislation and cleanse the country of the criminality of caste- and gender-based discrimination and abuse.

The power play of patriarchy saturates every area of Indian society and gives rise to an assortment of discriminatory practices, for example female infanticide, discrimination against girls and dowry-related deaths. It is a major cause of the exploitation and abuse of women, with a great deal of sexual violence being perpetrated by men in positions of power. These range from higher-caste men abusing lower-caste women, particularly Dalits; policemen abusing women from poor households; and military men abusing Dalit and Adivasi women in insurgency states such as Kashmir, Chhattisgarh, Jharkhand, Orissa and Manipur. Security personnel are protected by the widely condemned Armed Forces Special Powers Act, which grants impunity to police and members of the military carrying out criminal acts of rape and indeed murder; it was proclaimed by the British in 1942 as an emergency measure to suppress the Quit India Movement. It is an unjust law, which needs repealing.

In December 2012 the appalling gang rape and mutilation of a 23-year-old paramedical student in New Delhi, who subsequently died of her injuries, garnered worldwide media attention, putting a transient spotlight on the dangers, oppression and shocking treatment women in India face every day. Rape is endemic in the country. With most cases of rape going unreported and many being dismissed by police, the true figure could be ten times this. The women most at risk of abuse are Dalit: the NCRB estimates that more than four Dalit women are raped every day in India. A UN study reveals that “the majority of Dalit women report having faced one or more incidents of verbal abuse (62.4 per cent), physical assault (54.8 per cent), sexual harassment and assault (46.8 per cent), domestic violence (43.0 per cent) and rape (23.2 per cent)”. They are subjected to “rape, assault, kidnapping, abduction, homicide, physical and mental torture, immoral traffic and sexual abuse.”

The UN found that large numbers were deterred from seeking justice: in 17 per cent of instances of violence (including rape), victims were prevented from reporting the crime by the police; in more than 25 per cent of cases the community stopped women from filing complaints; and in more than 40 per cent, women “did not attempt to obtain legal or community remedies for the violence primarily out of fear of the perpetrators or social dishonour if (sexual) violence was revealed”. In only 1 per cent of recorded cases were the perpetrators convicted. What “follows incidents of violence”, the UN found, is “a resounding silence”. The effect, for Dalit women particularly though not exclusively, “is the creation and maintenance of a culture of violence, silence and impunity”.

Caste discrimination faced by women in contemporary times

The Indian constitution enshrines the “principle of non-discrimination on the basis of caste or gender”. It guarantees the “right to life and to security of life”. Article 46 specifically “protects Dalits from social injustice and all forms of exploitation”. Add to this the important Scheduled Castes and Tribes (Prevention of Atrocities) Act of 1989, and a well-equipped legislative armoury is formed. However, owing to “low levels of implementation”, the UN states, “the provisions that protect women’s rights must be considered empty of meaning”. It is a familiar Indian story: judicial indifference (plus cost, lack of access to legal representation, interminable red tape and obstructive staff), police corruption and government complacency, together with media indifference, constitute the major obstacles to justice and to the observance and enforcement of the law.

Unlike middle-class girls, Dalit rape victims (whose numbers are growing) rarely receive the attention of the caste- and class-conscious urban-centric media, whose primary concern is to promote a glossy Bollywood, open-for-business image of the country.

A 20-year-old Dalit woman from the Santali tribal group in West Bengal was gang raped, reportedly “on the orders of village elders who objected to her relationship (which had been going on in secret for a long time) with a man from a nearby village in the Birbhum district”. The savage incident took place when, according to a BBC report, the man visited the woman’s home with a proposal of marriage; villagers spotted him and organised a kangaroo court. During the “proceedings” the headman of the woman’s village fined the couple 25,000 rupees (400 US dollars; GBP 240) for “the crime of falling in love”. The man paid, but the woman’s family were unable to pay. Consequently, the “headman” and 12 of his companions repeatedly raped her. Violence, exploitation and exclusion are used to keep Dalit women in a position of subordination and to maintain the patriarchal grip on power throughout Indian society.

The cities are unsafe places for women, yet it is in the countryside, where most people live (70 per cent), that the greatest levels of abuse occur. Many living in rural areas live in extreme poverty (800 million people in India live on under 2.50 dollars a day), with little or no access to healthcare, poor education and appalling or non-existent sanitation. It is a world apart from democratic Delhi or Westernised Mumbai: water, electricity, democracy and the rule of law have yet to reach the lives of the women in India’s villages, which are home, Mahatma Gandhi famously declared, to the soul of the nation.

No surprise, then, that after two decades of economic growth, India finds itself languishing 136th (of 186 countries) in the (gender-equality-adjusted) United Nations Human Development Index, weighed down by harsh ideas of gender inequality.

Indian society is divided in numerous ways: caste and class, gender, wealth and poverty, and religion. Entrenched patriarchy and gender divisions, which value boys over girls and keep men and women, boys and girls, apart, combine with child marriage to contribute to the creation of a society in which the sexual abuse and exploitation of women, especially Dalit women, is an accepted part of everyday life.

Sociologically and psychologically conditioned into division, schoolchildren separate themselves along gender lines; in many areas women sit on one side of buses, men on the other; special women-only carriages have been introduced on the Delhi and Mumbai metro, intended to shield women from sexual harassment, or “eve teasing” as it is colloquially known. Such safety measures, while welcomed by women and women’s groups, do not deal with the underlying causes of abuse and in a sense may further inflame them.

Rape, sexual violence, assault and harassment are rampant, and at the same time, with the exception perhaps of the Bollywood Mumbai set, sex is a taboo subject. A survey conducted by India Today in 2011 found that 25 per cent of people had no objection to sex before marriage, provided it’s not in their family.

Sociological separation fuels gender divisions, reinforces prejudicial stereotypes and feeds sexual repression, which many women’s organisations believe accounts for the high rate of sexual violence. A recent study by the International Center for Research on Women of men’s attitudes towards women in India produced some startling statistics: one in four admitted having “used sexual violence (against a partner or against any woman)”, and one in five reported using “sexual violence against a stable [female] partner”. Half of men do not want to see gender equality, 80 per cent regard changing nappies, feeding and washing children as “women’s work”, and a mere 16 per cent play a part in household duties. Added to these repressive attitudes of mind, homophobia is the norm, with 92 per cent admitting they would be ashamed to have a gay friend, or even be in the vicinity of a gay man.

All in all, India is cursed by an inventory of Victorian gender stereotypes, fuelled by a caste system designed to subjugate, which together trap both men and women in conditioned cells of separation where destructive ideas of sex are allowed to ferment, resulting in explosions of sexual violence, exploitation and abuse. Studies of caste have begun to engage with issues of rights, resources, and recognition/representation, demonstrating the extent to which caste must be recognised as central to the narrative of India’s political development. For instance, scholars are becoming increasingly aware of the extent to which radical thinkers such as Ambedkar, Periyar, and Phule demanded the acknowledgment of histories of exploitation, ritual humiliation, and political disenfranchisement as constituting the lives of the lower castes, even as such histories also formed the fraught past from which escape was sought.

Scholars have pointed to Mandal as the formative moment in the “new” national politics of caste, particularly for having radicalised dalitbahujans in the politically crucial states of the Hindi belt. Hence Mandal may be a convenient, though overdetermined, vantage point from which to analyse the state’s contradictory and ineffective investment in the discourse of lower-caste entitlement, throwing open to examination the political practices and ideologies that animate parliamentary democracy in India as a historical formation.

Tharu and Niranjana (1996) have noted the visibility of caste and gender issues in the post-Mandal context and describe it as a contradictory formation. For instance, there were struggles by upper-caste women to challenge reservations by understanding them as concessions, and the large-scale participation of school-going women in the anti-Mandal agitation in order to claim equal treatment rather than reservations in struggles for gender equality. On the other hand, lower-caste male assertion regularly targeted upper-caste women, creating an unresolved dilemma for upper-caste feminists who had been pro-Mandal. The relationship between caste and gender never seemed more awkward. The demand for reservations for women (and for further reservations for dalit women and women from the Backward Class and Other Backward Communities) can likewise be seen as an outgrowth of a renewed attempt to address caste and gender issues from within the terrain of politics. It may also demonstrate the inadequacy of focusing exclusively on gender in assembling a quantifiable “solution” to the political problem of visibility and representation.

Arising out of the 33 per cent reservations for women in local Panchayats, and plainly at odds with the Mandal protests that equated reservations with notions of inferiority, the recent demands for reservations mark a move away from the historical suspicion of reservations for women. As Mary John has argued, women’s vulnerability must be seen in the context of the political displacements that mark the emergence of minorities before the state.

The question of political representation and the figure of gendered vulnerability are connected issues. As I have argued in my essay included in this volume, such vulnerability is the mark of the gendered subject’s singularity. It is that form of injured existence that brings her within the frame of political legibility as different, yet eligible, for general forms of redress. As such, it is central to political discourses of rights and recognition.

Political demands for reservations for women, and for lower-caste women, complement scholarly efforts to understand the deep cleavages between women of different castes that contemporary events such as Mandal or the Hindutva movement have revealed. In exploring the challenges posed by Mandal to dominant conceptions of secular selfhood, Vivek Dhareshwar pointed to convergences between reading for and recovering the presence of caste as a silenced public discourse in contemporary India, and similar practices by feminists who had explored the unacknowledged weight of gendered identity.

Dhareshwar suggested that theorists of caste and theorists of gender might consider elective affinities in their methods of analysis, and deliberately embrace their stigmatised identities (caste, gender) in order to draw public attention to them as political identities. Dhareshwar argued this would demonstrate the extent to which secularism had been maintained as another form of upper-caste privilege, the luxury of ignoring caste, as against the demands for social justice by dalitbahujans who were calling for a public acknowledgment of such privilege.

Women and Dalits considered the same

In her essay “Untouchability and Dalit Women’s Oppression”, Malik observes that “it remains a matter of reflection that those who have been actively involved with organising women experience difficulties that are nowhere addressed in a theoretical literature whose foundational principles are derived from a sprinkling of normative theories of rights, liberal political theory, an ill-informed left politics and, more recently, occasionally, even a well-meaning discourse of ‘entitlements’.” Malik in effect asks how we are to understand dalit women’s vulnerability.

Caste relations are embedded in dalit women’s profoundly unequal access to resources of basic survival, such as water and sanitation facilities, as well as to educational institutions, public places, and sites of religious worship. At the same time, the material impoverishment of dalits and their political disenfranchisement perpetuate the symbolic structures of untouchability, which legitimate upper-caste sexual access to dalit women. Caste relations are also changing, and new forms of violence in independent India that target symbols of dalit emancipation, such as the desecration of the statues of dalit leaders, or attempt to counteract dalits’ socio-political advancement by dispossessing land or denying dalits their political rights, are aimed at dalits’ perceived social mobility. These newer forms of violence are regularly supplemented by the sexual harassment and assault of dalit women, indicating the caste-based and gendered forms of vulnerability that dalit women experience.

As Gabriele Dietrich notes in her essay “Dalit Movements and Women’s Movements,” dalit women have been targets of upper-caste violence. At the same time, dalit women have also functioned as the “property” of dalit men. Lower-caste men are likewise engaged in a complex set of fantasies of retribution that involve the sexual violation of upper-caste women in retaliation for their emasculation by caste society. The dangerous positioning of dalit women as sexual property in both instances overdetermines dalit women’s identity solely in terms of their sexual availability.

Girls: Household Servants

When a boy is born in most developing countries, friends and relatives shout congratulations. A son means insurance. He will inherit his father’s property and get a job to support the family. When a girl is born, the reaction is very different. Some women weep when they discover their baby is a girl because, to them, a daughter is just another expense. Her place is in the home, not in the world of men. In some parts of India, it’s traditional to greet a family with a newborn girl by saying, “The servant of your household has been born.”

A girl cannot help but feel inferior when everything around her tells her that she is worth less than a boy. Her identity is forged as her family and society restrict her opportunities and declare her to be second-rate.

A combination of extreme poverty and deep biases against women creates a merciless cycle of discrimination that keeps girls in developing countries from fulfilling their full potential. It also leaves them vulnerable to severe physical and emotional abuse. These “servants of the household” come to accept that life will never be any different.

The Greatest Obstacles Affecting Girls

Discrimination against girls and women in the developing world is a devastating reality. It results in millions of individual tragedies, which add up to lost potential for entire countries. Studies show there is a direct link between a country’s attitude toward women and its social and economic progress. The status of women is central to the health of a society. If one part suffers, so does the whole.

Tragically, female children are most vulnerable to the trauma of gender discrimination. The following obstacles are stark examples of what girls worldwide face. But the good news is that new generations of girls represent the most promising source of change for women, and men, in the developing world today.

Dowry

In developing countries, the birth of a girl causes great upheaval for poor families. When there is barely enough food to survive, any child puts a strain on a family’s resources. But the economic drain of a daughter feels even more acute, especially in regions where dowry is practised.

Dowry is the goods and money a bride’s family pays to the groom’s family. Originally intended to help with marriage expenses, dowry came to be seen as payment to the groom’s family for taking on the burden of another woman. In some countries, dowries are extravagant, costing years of wages and often throwing a woman’s family into debt. The dowry practice makes the prospect of having a girl even more unwelcome to poor families. It also puts girls in danger: a new bride is at the mercy of her in-laws should they decide her dowry is too small. UNICEF estimates that around 5,000 Indian women are killed in dowry-related incidents every year.

Neglect

The developing world is full of poverty-stricken families who see their daughters as an economic burden. That attitude has resulted in the widespread neglect of baby girls in Africa, Asia, and South America. In many communities, it is standard practice to breastfeed girls for a shorter time than boys so that women can try to get pregnant again with a boy as soon as possible. As a result, girls miss out on life-giving nutrition during a crucial window of their development, which stunts their growth and weakens their resistance to disease.

Statistics show that the neglect continues as they grow up. Girls generally receive less food, less healthcare and fewer vaccinations than boys. Not much changes as they become women. Tradition calls for women to eat last, often reduced to picking over the leftovers of the men and boys.

Infanticide and Sex-Selective Abortion

In extreme cases, parents make the terrible decision to end their baby girl’s life. One woman named Lakshmi from Tamil Nadu, an impoverished region of India, fed her baby sap from an oleander bush mixed with castor oil until the girl bled from the nose and died. “A daughter is always a liability. How can I bring up a second?” said Lakshmi to explain why she ended her baby’s life. “Instead of her suffering the way I do, I thought it was better to get rid of her.”

Sex-selective abortions are even more common than infanticides in India. They are becoming ever more frequent as technology makes it simple and cheap to determine a foetus’s sex. In Jaipur, a Western Indian city of 2 million people, 3,500 sex-determined abortions are carried out each year. The sex ratio across India has dropped to an unnatural low of 927 females to 1,000 males as a result of infanticide and sex-selective abortion.

China has its own long legacy of female infanticide. In the last two decades, the government’s notorious one-child policy has damaged the country’s record even further. By restricting household size to limit the population, the policy gives parents just one chance to produce a coveted son before being forced to pay heavy fines for additional children. In 1997, the World Health Organization declared, “more than 50 million women were estimated to be missing in China because of the institutionalized killing and neglect of girls due to Beijing’s population control program.” The Chinese government says that sex-selective abortion is one major explanation for the staggering number of Chinese girls who have simply vanished from the population in the last 20 years.

Abuse

Even after infancy, the threat of physical harm follows girls throughout their lives. Women in every society are vulnerable to abuse. But the threat is more severe for girls and women who live in societies where women’s rights mean practically nothing. Mothers who lack their own rights have little protection to offer their daughters, much less themselves, from male relatives and other authority figures. The frequency of rape and violent attacks against women in the developing world is alarming. Forty-five percent of Ethiopian women say that they have been assaulted in their lifetimes. In 1998, 48 percent of Palestinian women admitted to being abused by an intimate partner within the past year.

In some societies, the physical and mental injury of rape is compounded by an additional stigma. In cultures that maintain strict sexual codes for women, if a woman steps out of line, by choosing her own husband, flirting in public, or seeking divorce from an abusive partner, she has brought dishonour to her family and must be disciplined. Often, discipline means execution. Families commit “honour killings” to salvage their reputation tainted by disobedient women.

Shockingly, this “disobedience” includes rape. In 1999, a 16-year-old mentally disabled girl in Pakistan who had been raped was brought before her tribe’s judicial council. Even though she was the victim and her attacker had been arrested, the council decided she had brought shame to the tribe and ordered her public execution. This case, which received a great deal of publicity at the time, is not unusual. Three women fall victim to honour killings in Pakistan every day, including victims of rape. In areas of Asia, the Middle East, and even Europe, all responsibility for sexual crime falls, by default, to women.

Work

For the girls who escape these pitfalls and grow up relatively safely, daily life is still unimaginably hard. School may be an option for a few years, but most girls are pulled out at age 9 or 10, when they are useful enough to work all day at home. Nine million more girls than boys miss out on school every year, according to UNICEF. While their brothers continue to attend classes or pursue their hobbies and play, they join the women to do the bulk of the housework.

Housework in developing countries consists of constant, hard physical labour. A girl is likely to work from before sunrise until the light drains away. She walks barefoot over long distances several times a day carrying heavy buckets of water, most likely contaminated, just to keep her family alive. She cleans, grinds corn, gathers fuel, tends the fields, bathes her younger siblings, and prepares meals until she sits down to her own after all the men in the family have eaten. Most families cannot afford modern appliances, so her tasks must be done by hand: crushing corn into meal with heavy rocks, scrubbing laundry against rough stones, kneading bread and cooking gruel over a blistering open fire. There is no time left in the day to learn to read and write or to play with friends. She collapses exhausted each night, ready to get up the next morning to start another long workday.

Most of this work is performed without recognition or reward. UN statistics show that although women produce half of the world’s food, they own just 1 percent of its farmland. In most African and Asian countries, women’s work is not considered real work. Should a woman take a job, she is expected to keep up all of her duties at home in addition to her new ones, with no extra help. Women’s work goes unacknowledged, even though it is crucial to the survival of each family.

Sex Trafficking

Some families decide it is more lucrative to send their daughters to a nearby town or city to take jobs that usually involve hard labour and little pay. That desperate need for money leaves girls easy prey to sex traffickers, especially in Southeast Asia, where international tourism feeds the illicit business. In Thailand, the sex trade has swelled unchecked into a major part of the national economy. Families in small villages along the Chinese border are regularly approached by recruiters called “aunties” who ask for their daughters in exchange for a year’s wages. Most Thai farmers earn just $150 a year. The offer can be too tempting to refuse.

essay-2016-06-15-000BHg

Would it be moral to legalise Euthanasia in the UK?

The word ‘morality’ seems to be used in both descriptive and normative senses. More particularly, the term “morality” can be used either (Stanford Encyclopaedia of Philosophy, https://plato.stanford.edu/entries/morality-definition):

1. descriptively: referring to codes of conduct advocated by a society or a sub-group (e.g. a religion or social group), or adopted by an individual to justify their own beliefs,

or

2. normatively: describing codes of conduct that, in specified conditions, should be accepted by all rational members of the group being considered.

Examination of ethical theories applied to Euthanasia

Thomas Aquinas’ natural law holds that morally good actions, and the goodness of those actions, are assessed against eternal law as a reference point. Eternal law, in his view, is a higher authority, and the process of reasoning defines the differences between right and wrong. Natural law thinking is not concerned only with narrow aspects of a case, but considers the whole person and their infinite future. Aquinas would have linked this to God’s predetermined plan for that individual and to heaven. The morality of Catholic belief is heavily influenced by natural law. The primary precepts should be considered when examining issues involving euthanasia, particularly the key precepts to do good and oppose evil, and to preserve life, upholding the sanctity of life. Divine law set out in the Bible states that we are created in God’s image and held together by God from our time in the womb. The Catholic Church’s teachings on euthanasia maintain that euthanasia is wrong (Pastoral Constitution, Gaudium et Spes no. 27, 1965), as life is sacred and God-given (Declaration on Euthanasia 1980). This view can be seen to be just as strongly held and applied today in the very recent case of Alfie Evans, where papal intervention was significant and public. Terminating life through euthanasia goes against divine law. Ending a life, and with it the possibility of that life bringing love into the world or of love coming into the world in response to the person euthanised, is wrong. To take a life by euthanasia, according to Catholic belief, rejects God’s plan for that individual to live their life. Suicide or intentionally ending life is a wrong equal to murder and as such is to be considered a rejection of God’s loving plan (Declaration on Euthanasia, 1.3, 1980).

The Catholic Church interprets natural law to mean that euthanasia is wrong and that those involved in it are committing a wrongful and sinful act. Whilst the objectives of euthanasia may appear good, in that they seek to ease suffering and pain, they in fact fail to recognise the greater good of the sanctity of life within God’s greater plan, which includes people other than the person suffering, as well as eternal life in heaven.

The conclusions of natural law consider the position of life in general and not just the ending of a single life. For example, if euthanasia were lawful, older people could become fearful of admission to hospital in case they were drawn into it. It could also lead to people being attracted to euthanasia at times when they were depressed. This can be seen to attack the principles of living well together in society, as good people could be hurt. It also makes some predictions of the slippery slope and floodgates type about hypothetical situations. Euthanasia therefore clearly undermines some primary precepts.

Catholicism accepts that disproportionately onerous treatment is not appropriate towards the end of a person’s life and recognises a moral obligation not to strive to keep a person alive at all costs. An example of this would be a terminally ill cancer patient deciding not to accept further chemotherapy or radiotherapy which could extend their life, but at great cost to the quality of that remaining life. Natural law does not seem to prevent them from making these kinds of choices.

There is also the doctrine of double effect: palliative care with the relief of pain and distress as its objective, for example, might have the secondary effect of ending life earlier than if more active treatment options had been pursued. The motivation is not to kill, but rather to ease pain and distress. An example of this is an individual doctor’s decision to increase opiate drug dosage to the point where respiratory arrest occurs almost inevitably, but where at all times the intended motivation is the easing of pain and distress. This has on various occasions been upheld as legally and morally acceptable by the courts and by medical watchdogs such as the GMC (General Medical Council).

The catechism of the Catholic Church accepts this and views such decisions as best made by the patient, if competent and able, and if not, by those legally and professionally entitled to act for the individual concerned.

There are other circumstances in which the person involved might not be the same kind of person as is assumed by natural law; for example, someone with severe brain damage who is in a persistent coma or “brain-dead”. In these situations they may not possess the defining characteristics of a person, which could form a justification for euthanasia. The doctors or relatives caring for such a patient may face conflicts of conscience: by being unable to show compassion, they prolong the suffering not only of the patient but of those surrounding them.

In his book Morals and Medicine, published in 1954, Fletcher, the president of the Euthanasia Society of America, argued that there are no absolute standards of morality in medical treatment and that good ethics demands consideration of the patient’s condition and the situation surrounding it.

Fletcher’s Situation Ethics avoids legalistic consideration of moral decisions. It is anchored only in actual situations and specifically in unconditional love for the care of others. When euthanasia is considered with this approach, the answer will always “depend upon the situation”.

From the viewpoint of an absolutist, morality is innate from birth. It can be argued that natural law does not change as a result of personal opinions; it remains constant. Natural law offers a positive view of morality, as it allows people from a wide range of backgrounds, classes and situations to have sustainable moral laws to follow.

Religious believers also follow the principles of Natural Law, as its underlying theology holds that morality remains the same and never changes with an individual’s personal opinions or decisions. Christianity as a religion has strong support among its believers for the existence of a natural law of morality. Christian understanding of this concept derives largely from Thomas Aquinas, following his teaching on the close connection between faith and reason as related arguments for a natural law of morality.

Natural Law has been shown over time to have compelling arguments in its favour, one of which is its all-inclusiveness and fixed stature, in contrast to relative approaches to morality. Natural law is objective and consequently abiding and eternal. It is considered to be within us, innate, arising from a mixture of faith and reason that goes on to form an intelligent and rational being who is faithful in belief in God. Natural law is part of human nature, commencing from the beginning of our lives when we gain our sense of right and wrong.

However, there are also many disadvantages of natural law with regard to resolving moral problems. Its precepts are not always self-evident. We are unable to confirm whether there is only one universal purpose for humanity; even if humanity had a purpose for its existence, that purpose cannot be seen as self-evident. Perceptions of natural beings and things change over generations, with the norms of different times fitting the prevailing culture. It can therefore be argued that supposedly absolute morality is changed and altered by cultural beliefs of right and wrong, with some things later coming to be perceived as wrong, which suggests that defining what is natural is almost impossible, as moral judgments are ever changing. The idea that actuality is better than potentiality also does not transfer easily to practical ethics: the future holds many potential outcomes, but some of those outcomes are ‘wrong’. (Hodder Education, 2016)

The claim that natural law is the best way to resolve moral problems has a strong argument behind it; however, its strict structure means that there is some confusion as to what is right and wrong in certain situations, and in practice these views are formed by society, which does not always follow the natural law of morality. Darwin’s theory of evolution, put forward in On the Origin of Species in 1859, challenged natural law with the notion that living things strive for survival (“survival of the fittest”), supporting his theory of evolution by natural selection. It can be argued that resolving moral problems by natural law may be possible, but it is not necessarily the best solution.

For many years, euthanasia has been a controversial debate across the globe, with different people taking opposing sides and arguing in support of their opinions. Broadly, it is the act of allowing an individual to die in a painless manner, for example by withholding their medication. It is commonly classified in different forms: voluntary, involuntary and non-voluntary. The legal system has been actively involved in this debate. A major concern put forward is that legalising any form of euthanasia may invoke the slippery slope principle, which holds that permitting something comparatively harmless today may begin a trend that results in unacceptable practices. Although one popular stance argues that voluntary euthanasia is morally acceptable while non-voluntary euthanasia is always wrong, the courts have been split in their decisions in various instances. (Oxford for OCR Religious Studies, 2016)

Voluntary euthanasia is defined as the killing of an individual with their consent, by various means. The arguments that voluntary euthanasia is morally acceptable are drawn from the expressed desires of a patient. As long as respect for an individual’s decision does not harm other people, it is held to be morally correct. Since individuals have the right to make personal choices about their lives, their decisions on how they should die should also be respected. Most importantly, at times it remains the only option for assuring the well-being of the patient, especially if they are suffering incessant and severe pain. Despite these claims, several cases have emerged in which the courts have continued to refuse to uphold the morality of euthanasia irrespective of a victim’s consent. One of these is the case of Diane Pretty, who suffered from motor neurone disease. Since she was afraid of dying by choking or aspiration, a common end-of-life event experienced by many motor neurone disease sufferers, she sought legal assurance that her husband would be free from the threat of prosecution if he assisted her to end her life. Her case went through the Court of Appeal, the House of Lords (the Supreme Court in today’s system) and the European Court of Human Rights. However, owing to the concerns raised under the slippery slope principle, the judges denied her request, and she lost the case.

There have been many legal and legislative battles attempting to change the law to support voluntary euthanasia in varying circumstances. Between 2002 and 2006 Lord Joel Joffe (a Patron of the Dignity in Dying organisation) fought to change the law in the UK to support assisted dying. His first Assisted Dying (Patient) Bill reached a second reading (June 2003) but ran out of time before it could progress to the committee stage. Joffe persisted, however, and in 2004 renewed his campaign with the Assisted Dying for the Terminally Ill Bill, which progressed further than the earlier bill, making it to the committee stage in 2006. The committee stated: “In the event that another bill of this nature should be introduced into Parliament, it should, following a formal Second Reading, be sent to a committee of the whole House for examination”. Unfortunately, in May 2006 an amendment at the Second Reading led to the collapse of the bill. This was a surprise to Joffe, since the majority of the select committee had been on board with the bill. In addition, calls for a statute supporting voluntary euthanasia have increased, as evidenced by the significant numbers of people in recent years travelling to Switzerland, where physician-assisted suicide is legal under permitted circumstances. Lord Joffe expressed these thoughts in an article written for the Dignity in Dying campaign in 2014, before his death in 2017, in support of Lord Falconer’s Assisted Dying Bill, which proposed to permit “terminally ill, mentally competent adults to have an assisted death after being approved by doctors” (Falconer’s Assisted Dying Bill, Dignity in Dying, 2014). The journey of this bill was followed in the documentary referenced below.

The BBC documentary ‘How to Die: Simon’s Choice’ followed the decline of Simon Binner from motor neurone disease and his subsequent fight for an assisted death. The documentary followed his journey to Switzerland for a legal assisted death and recorded the reactions of his family. During filming, a bill was being debated in parliament proposing to legalise assisted dying in the United Kingdom. The bill (Lord Falconer’s Assisted Dying Bill) would allow a person to request a lethal injection if they had less than six months left to live; this raised a myriad of issues, including how precisely to define a life term whereby one has more or less than six months left to live. The Archbishop of Canterbury, Justin Welby, urged MPs to reject the bill, stating that Britain would be crossing a ‘legal and ethical Rubicon’ if parliament were to vote to allow the terminally ill to be actively assisted to die at home in the UK under medical supervision. The leaders of the British Jewish, Muslim, Sikh and Christian religious communities wrote a joint open letter to all members of the British parliament urging them to oppose the bill to legalise assisted dying (The Guardian, 2015). After announcing his death on LinkedIn, Simon Binner died at an assisted dying clinic in Switzerland. The passing of this bill might have been the only way of helping Simon Binner in his home country, but assisted dying remained unlawful. (Deacon, 2016)

The private member’s bill, originally proposed by Rob Marris (a Labour MP from Wolverhampton), ended in defeat, with 330 MPs against and 118 in favour. (The Financial Times, 2015)

The 1961 Suicide Act (Legislation, 1961) decriminalised suicide; however, it did not make it morally licit. It provides that a person who aids, abets, counsels or procures the suicide of another, or an attempt by another to commit suicide, is liable to a prison term of up to 14 years. It also provided that where a defendant is on trial on indictment for murder or manslaughter and it is proved that the accused aided, abetted, counselled or procured the suicide of the person in question, the jury may find them guilty of that offence as an alternative verdict.

Many took the view that the law supports the principle of autonomy, but the act was used to reinforce the sanctity-of-life principle by criminalising any form of assisted suicide. Although the act does not hold the position that all life is equally valuable, there have been cases where allowing a person to die would have been the better solution.

In the case of non-voluntary euthanasia, patients are incapable of giving their approval for death to be induced. It mostly occurs where a patient is very young, severely mentally impaired, has extreme brain damage, or is in a coma. Opponents argue that human life should be respected, and in this case it is even worse because the victim’s wishes are not factored into decisions to end their life. As a result, it becomes morally wrong irrespective of the conditions they face. In such a case, all parties involved should wait for a natural death while according the patient the best palliative medical attention possible. The case of Terri Schiavo, who suffered from bulimia and was left with severe brain damage, falls under this argument. The court’s ruling allowing the request of her husband to have her life terminated triggered heated debates, with some arguing that it was wrong while others saw it as a relief, since she had spent more than half of her life unresponsive.

I completed primary research in order to support my findings as to whether it would be moral to legalise euthanasia in the UK. With regard to understanding the correct definition of euthanasia, nine out of ten people who took part in the questionnaire selected the correct definition of physician-assisted suicide: “The voluntary termination of one’s life by administration of a lethal substance with the direct or indirect assistance of a physician” (Medicanet, 2017). The one person who selected the wrong definition believed it to be “The involuntary termination of one’s own life by administration of a lethal substance with the direct or indirect assistance of a physician”. The third definition on the questionnaire stated that physician-assisted suicide was “The voluntary termination of one’s own life by committing suicide without the help of others”; this was the obviously incorrect answer, and no participant selected it.

The moral views of the young are also instructive. From the results of my primary research, completed by a selected youth audience, seventy percent agreed that people should have the right to choose when they die. However, only twenty percent of this audience agreed that they would assist a friend or family member in helping them die. This drop in support can be explained by the fear of prosecution and a possible fourteen-year imprisonment for assisting in a person’s death.

The effect of the Debbie Purdy case (2009) was that guidelines were established by the Director of Public Prosecutions in England and Wales (assisted dying is not illegal in Scotland; however, there is no legal way to access it medically). These guidelines were established, according to the Director of Public Prosecutions, to “clarify what his position is as to the factors that he regards as relevant for and against prosecution” (DID Prosecution Policy, 2010). The guidance policy outlines factors that make prosecution ‘more likely’: an assistor is more likely to face prosecution if they had a history of violent behaviour, did not know the person, received financial gain from the act, or acted as a medical professional. Despite these factors, the policy states that police and prosecutors should examine any financial gain with a ‘common sense’ approach, as many people benefit financially from the loss of a loved one; the fact that the assistor was, for example, a close relative relieving the person of pain should be the larger consideration when deciding whether to prosecute.

The argument that voluntary euthanasia is morally right while involuntary euthanasia is wrong remains one of the most controversial issues in modern society. It is all the more significant because the legal systems remain split in their rulings in cases such as those cited. Given the slippery slope argument, care should be taken when determining what is morally right and wrong because of the sanctity of human life. Many consider that the law has led to considerable confusion and that one way of improving the present situation is to create a new Act permitting physician-assisted dying, with the proposal stating that there should be a bill to “enable a competent adult who is suffering unbearably as a result of a terminal illness to receive medical assistance to die at his own considered/persistent request… to make provision for a person suffering from a terminal illness to receive pain relief medication” (Assisted Dying for the Terminally Ill Bill, 2004).

There is a major moral objection to voluntary euthanasia under the reasoning of the “slippery slope” argument: the fear that what begins as a legitimate reason to assist in a person’s death will come to permit death in other, illegitimate circumstances.

In a letter addressed to The Times newspaper (24/8/04), John Haldane and Alasdair MacIntyre, along with other academics, lawyers and philosophers, suggested that supporters of the Bill might shift the qualifying condition from actual unbearable suffering from terminal illness to merely the fear, discomfort and loss of dignity which terminal illness might bring. In addition, there is the issue that if quality of life is grounds for euthanasia for those who request it, it must arguably be open to those who do not request it or are unable to request it, again presenting a slippery slope. Also in the letter to The Times, the academics referenced euthanasia in the Netherlands, where it is legal, to suggest that many people have died against their wishes owing to safeguarding failures. (Hodder Education, 2016)

In conclusion, upon considering the morality arguments on both sides of the debate, I conclude that the two ethical frameworks (Natural Law and Situation Ethics) give two opposing responses to the question.

From the viewpoint of a deontologist guided by natural law, duty and obligation, arguably derived from religion, would lead a society to decide that it would be wrong to legalise euthanasia. However, a situational ethicist, whose judgment changes depending on the individual situation, could support the campaign to legalise voluntary euthanasia in the UK under guidelines that account for differing situations.

After completing my Primary and Secondary Research, considering the passage of many unsuccessful bills put through parliament to legalise euthanasia and many case studies including the moving account of Simon Binner’s fight to die, my own view rests on the side of a situational ethicist who would believe that depending to the independent situation people should be able to have the right to die in their own country by legalising voluntary euthanasia, rather than being forced to travel abroad to access a legal form of voluntary euthanasia and risk their loved ones being prosecuted on their return to the UK for assisting them.

The slippery slope argument does not help those in particular individual situations, and it must surely be wrong to shy away from difficult decisions on the grounds that an individual should sustain prolonged suffering in order to protect society from the possible extended over-use of any legalisation. In practice, some form of euthanasia has been going on in the UK over the past half century: doctors give obvious over-dosage of opiates in terminal cases but are shielded from the legal consequences by the almost fictional notion that, as long as the motivation was to ease and control pain, the inevitable consequence of respiratory arrest (respiratory suppression is a side effect of morphine-type drugs) made the action lawful.

The discredited and now defunct Liverpool Care Pathway for the Dying Patient (LCP) was an administrative tool intended to help UK healthcare professionals manage the care pathway and decide palliative care options for patients at the very end of life. As with many such tick-the-box exercises, individual discretion was restricted in an attempt to standardise practice nationally (Wales was excluded from the LCP). The biggest problem with the LCP (which attracted much adverse media attention and public concern in 2012) was that most patients or their families were not consulted when patients were placed on the pathway. It had options for withdrawing active treatment whilst managing distressing symptoms actively. However, removing intravenous hydration/feeding by regarding it as active treatment would inevitably lead to death in a relatively short period of time, making the decision to place a patient on the LCP because they were at the end of life a self-fulfilling prophecy. (Liverpool Care Pathway)

The chilling suggestion, in the last part of this lengthy document, that the cost of providing “just in case” boxes at approximately £25 should be part of the process of deciding what to advise professionals may seem alarming to some. However, there is a moral factor in the financial implications of unnecessarily prolonging human life. Should the greater good be considered when deciding whether to actively permit formal pathways to euthanasia or to take steps to prohibit it (the crimes of murder or assisting suicide)? In the recent highly publicised case of Alfie Evans, enormous financial resources were used to keep a child with a terminal degenerative neurological disease alive on a paediatric intensive care unit at Alder Hey hospital in Liverpool for around a year. In deciding to do this it is inevitable that those resources were unavailable to treat others who might have gone on to survive and live a life. Huge sums of money were spent both on medical resources and on lawyers. The case became a highly publicised media circus, resulting in ugly threats made against medical staff at the hospital concerned. There was international intervention in the case by the Vatican and by Italy (the granting of Italian nationality to the child). Whilst the emotional turmoil of the parents was tragic and the case very sad, was it moral that their own beliefs and lack of understanding of the medical issues involved should lead to such a diversion of resources and such terrible effects on those caring for the boy?

(NICE (National Institute for Clinical Excellence) guidelines, 2015)

The General Medical Council (GMC) governs the licensing and professional conduct of doctors in the UK. It has produced guidance for doctors regarding the medical role at the end of life, ‘Treatment and care towards the end of life: good practice in decision making’. This gives comprehensive advice on some of the fundamental issues in end of life treatment, covering matters such as living wills (where requests for the withdrawal of treatment can be set out in writing and in advance). These are professionally binding, but as ever there are some caveats regarding the withdrawal of life-prolonging treatment.

It also sets out presumptions of a duty to prolong life and of a patient’s capacity to make decisions, along established legal and ethical lines. In particular it is stated that “decisions concerning life prolonging treatments must not be motivated by a desire to bring about a patient’s death” (Good Medical Practice, GMC Guidance to Doctors, 2014).

Formerly the Hippocratic Oath was sworn by all doctors and set out a sound basis for moral decision making and professional conduct. In modern translation from the original ancient Greek it states, with regard to medical treatment, that a doctor should never treat “….. with a view to injury and wrong-doing. Neither will [a doctor] administer a poison to anybody when asked to do so, nor will [a doctor] suggest such a course.” Doctors in the UK do not swear the oath today, but most of its principles are internationally accepted, except perhaps in the controversial areas surrounding abortion and end of life care.

(Hippocratic Oath, Medicanet)

In conclusion, having considered the moral arguments on both sides of the debate, I find that the two forms of morality (Natural Law and Situation Ethics) would give two opposing responses to the question.

From the viewpoint of a deontologist guided by natural law, duty and obligation, arguably derived from religion, would lead a society to decide that it would be wrong to legalise Euthanasia. However, a situational ethicist, whose judgment changes depending on the individual situation, could support the campaign to legalise Voluntary Euthanasia in the UK under guidelines that account for differing situations.

After completing my Primary and Secondary Research, and considering the many unsuccessful bills put through Parliament to legalise euthanasia and many case studies, including the moving account of Simon Binner’s fight to die, my own view rests on the side of the situational ethicist: depending on the individual situation, people should have the right to die in their own country through the legalisation of voluntary euthanasia, rather than being forced to travel abroad to access a legal form of voluntary euthanasia and risk their loved ones being prosecuted on their return to the UK for assisting them.

At the end of the day, much of the management of the end of life is determined not by stipulations laid out by committees in lengthy documents, but by the individual treatment decisions made by individual doctors and nurses, who are almost always acting in the best interests of patients and their families. The methodology of accelerating the inevitable event by medication or withdrawal of treatment is almost impossible to standardise across a hospital or local community care setup, let alone a country. It may be better to continue the practice of centuries and let the morality and conscience of the treating professions determine what happens, keeping the formal moral, religious and legal factors involved in such areas in the shadows.


Has the cost of R & D impacted vaccine development for Covid-19?

Introduction

This report will investigate and attempt to answer the question: ‘To what extent have the cost requirements of R&D, the structure of the industry and government subsidy affected firms in the pharmaceutical industry in developing vaccines for Covid-19?’. The past two years have been very unpredictable for the pharmaceutical industry owing to the outbreak of the COVID-19 pandemic. Although the pharmaceutical industry has made major contributions to human wellbeing, reducing suffering and ill health for over a century, it remains one of the least trusted industries in public opinion, often compared to the nuclear industry in terms of trustworthiness. Despite it being one of the riskiest industries to invest in, governments have subsidised the production of the COVID-19 vaccines with billions. Regardless of the risks associated with pharmaceuticals, a large part of the public still thinks pharmaceuticals should continue to be produced and developed in order to provide the correct treatment to those with existing health issues (Taylor, 2015). This, along with the further factors affecting the requirements of R&D, the structure of the industry and government subsidy, and how these have affected firms in the pharmaceutical industry in developing the COVID-19 vaccines, will be discussed in the report.

The Costs of R&D

Back in 2019, $83 billion was spent on R&D. That figure alone is roughly 10 times greater than what the industry spent on R&D in the 1980s. Most of this amount was dedicated to discovering new drugs and to clinical testing of drug safety. In 2019 drug companies dedicated a quarter of their annual income to R&D, almost double the share in the early 2000s.

(Pharmaceutical R&D Expenditure Shows Significant Growth, 2019)

Usually the amount drug companies spend on R&D for a new drug is based on the financial return they expect to make, on any policies influencing the supply of and demand for drugs, and on the cost of developing them.

Most drugs that have been approved recently have been specialty drugs. These typically treat complex, chronic or rare conditions and can require patient monitoring. However, specialty drugs are very expensive to develop, pricey for the customer and hard to replicate (Research and Development in the Pharmaceutical Industry, 2021).

Government subsidies for the COVID-19 vaccines

There are two main ways in which a federal government can directly support vaccine development: it can promise in advance to purchase a successful vaccine once the firm has achieved its specified goal, or it can cover the costs associated with the R&D of the vaccine.

(Which Companies Received The Most Covid-19 Vaccine R&D Funding?, 2021)

In May 2020, the Department of Health and Human Services launched ‘Operation Warp Speed’, a collaborative project in which the FDA, the Department of Defense, the National Institutes of Health and the Centers for Disease Control and Prevention worked together to fund COVID-19 vaccine development. Through ‘Operation Warp Speed’, the federal government provided more than $19 billion in funding to help seven private pharmaceutical manufacturers research and develop COVID-19 vaccines. Five of those seven went on to accept further funding to boost their vaccine production capabilities, and later a sixth company accepted funding to help boost production of another company’s vaccine once it received emergency use authorization. Six of the seven also made advance purchase agreements, and two of these companies received additional funding, having sold more doses than expected under the advance purchase agreements, so that they could produce even more vaccines to distribute. By running in parallel numerous stages of development that would normally proceed consecutively, pharmaceutical manufacturers were able to reach their end goal and produce vaccines at a far higher rate than is normal for vaccines. This was done because of the urgency of finding a solution to the COVID-19 pandemic, which was starting to cause public uproar and panic across nations. Soon after the first COVID-19 diagnosis was made in the US, two vaccines were already in Phase III clinical trials; this is immensely quick, as it would usually take several years of research for a vaccine to reach Phase III. The World Health Organisation reported that there were already over 200 COVID-19 vaccine development candidates as of February 2021 (Research and Development in the Pharmaceutical Industry, 2021).

(Research and Development in the Pharmaceutical Industry, 2021)

The image above shows which vaccines were at which stage of development during which time period. This demonstrates the urgency behind developing and producing these vaccines to fight the outbreak of the coronavirus. Without government subsidies, firms would have been nowhere near completing the research and development needed to produce numerous COVID-19 vaccines. This shows the importance of government subsidies to the pharmaceutical industry and the development of new drugs and vaccines.

Impact of the structure of the pharmaceutical industry on vaccine development

When it came to the development of the COVID-19 vaccines, many different names in the pharmaceutical industry took part. As far as the majority of society is concerned, the pharmaceutical industry is just a small group of large multinational corporations such as GlaxoSmithKline, Novartis, AstraZeneca, Pfizer and Roche. These are frowned upon by the public, stereotyped as ‘Big Pharma’, and so perceptions of the industry can be misleading. Many people have doubts about these big multinational corporations, especially when they have such an influence on people’s health and the drugs they take. It becomes hard for the public to rely on and trust these companies, because at the end of the day it is their health that they are entrusting to them. It is therefore understandable that a lot of people had, and still have, suspicions about the COVID-19 vaccines developed by a handful of these companies. If you were to ask someone whether they had ever heard of companies like Mylan or Teva, they would probably have no clue, even though Teva is the world’s 11th biggest pharmaceutical company and probably produces the medicines these people take regularly. Over 90% of pharmaceutical companies are almost invisible to the general public, so when it does become known who has manufactured a medicine someone is considering taking, for example the Pfizer vaccine, people are going to be careful and suspicious about taking it, having possibly never heard of Pfizer before. All this despite these companies being responsible for producing the majority of the medicines that everyone takes.

Most new drugs that are developed never even make it onto the market, as the drug is found not to work or to have serious side effects, making it unethical to use on patients. However, the small percentage of drugs that do make it onto the market are patented, meaning that the original manufacturer holds only temporary exclusive rights to sell the product. Once the patent has expired, the pharmaceutical is free to be sold and manufactured by anyone, making it a generic pharmaceutical (Taylor, 2015).

This again does not help research pharmaceutical companies, as their developments, once out of patent, are simply sold on by the generic pharmaceutical companies from which everyone buys their pharmaceuticals. Generic pharmaceutical companies therefore essentially never have a failed product, while the research companies struggle to get a successful product onto the market. It also means the public often do not realise that the majority of drugs they buy originate from these research companies and were not originally produced by the generic pharmaceutical company they buy them from.

As seen with the COVID-19 vaccines, this caused a lot of uncertainty and distress amongst the public, as most people had never even heard of companies like ‘Pfizer’ or ‘AstraZeneca’. This in turn made it more difficult for pharmaceutical companies to successfully manufacture and sell their vaccines, prolonging the whole vaccination process.

This structure of the pharmaceutical industry has therefore greatly affected firms’ ability to successfully and reliably manufacture vaccines against COVID-19.

Conclusion

Looking at the three factors combined: the cost requirements of R&D, the structure of the industry and government subsidy, it is clear that all have had a great impact on the development of the COVID-19 vaccines. The costs associated with R&D essentially determined how successful the vaccines would be and whether firms would have enough, first of all, to do the needed research and then, finally, to produce and sell them. Without the large sums that go into the development of vaccines and other drugs, the COVID-19 vaccines would never have been manufactured and sold. This would have left the world in even more panic and uproar than it was, which could easily have had a ripple effect on economies, social factors and potentially even environmental factors.

One of the biggest influences on the successful manufacturing and sale of the vaccines was the structure of the industry. With big research pharmaceutical companies putting in all the work and effort to develop the COVID-19 vaccines, but most of the general public never having heard of them before, it was very hard for pharmaceutical companies to come across as reliable. People did not trust the vaccines because they had never heard of the companies that developed them, such as Pfizer. This caused debate and protest against the vaccines, making it harder for companies to produce and successfully sell them to a public that needed and demanded them. This stems from one major flaw in the pharmaceutical industry: companies such as Pfizer and AstraZeneca are barely known by the public because their products are taken and sold on by the generic pharmaceutical companies people buy from. It also reflects the fact that research pharmaceutical companies specialise in advanced drugs rather than more generic drugs, which are more likely to succeed because they are easier to develop. Naturally, the lack of successful products reflects negatively on these companies, while the one product they do successfully produce may also be frowned upon because of their previously non-viable products.

Finally, probably the second or joint most important factor is government subsidies. It is quite clear that without the correct government funding, and without ‘Operation Warp Speed’, we would still be trying to develop even the first COVID-19 vaccine, as there would have been nowhere near enough funding for the R&D of the vaccines. This would have caused the death rate from coronavirus infections to spike and would probably have brought the economy to a complete standstill, putting a large number of people out of work. All of this has numerous ripple effects: the single issue of job losses alone could raise the poverty rate immensely, leaving economies broken. So overall, these three factors have had a huge impact on firms in the pharmaceutical industry in developing the COVID-19 vaccines.


Gender in Design

Gender has always had a dominant place in design. Kirkham and Attfield, in their 1996 book The Gendered Object, set out their view that certain genders seem to be unconsciously attached to some objects as the norm. How gender is viewed in modern-day design compared with twenty-plus years ago is radically different, in that there is now recognition of this normalization. Seeing international companies recognise this change and adapt their brands to this modern-day approach influences designers like myself to keep up to date, and it shapes my own work.

When designing, there is a gender system that some people follow very strictly; the system is a guide built on values that reveal how gender is constructed in mankind. In the gender system there are binary oppositions expressed in colour, size, feeling and shape, for example pink/blue, small/large, smooth/rough and organic/geometric. Even without context, the words immediately give off connotations of male or female. Gender is traditionally defined as male or female, but modern-day brands are challenging and pushing these established boundaries; they do not think definitions should be as restrictive or prescriptive as they have been in the past. Kirkham and Attfield challenge this by comparing perceptions in the early twentieth century, illustrating that the societal norms were the opposite of what gender norms now lead us to believe. A good example is the crude binary opposition implicit in ‘pink for a little girl and blue for a boy’, which was only established in the 1930s; babies and parents managed perfectly well without such colour coding before then. Today, through marketing and product targeting, these ‘definitions’ are even more widely used in the design and marketing of children’s clothes and objects than a few years ago. Importantly, such binary oppositions also influence those who purchase objects and, in this case, facilitate the pleasure many adults take in seeing small humans visibly marked as gendered beings. This is now being further challenged by demands for non-binary identification.

This initial point made by Kirkham and Attfield in 1996 is still valid. Even though designers and brands are in essence guilty of forms of discrimination by falling in line with established gender norms, they do it because it is what their consumers want and how they see business development and profit creation, because these stereotypical ‘norms’ are seen as normal, acceptable and sub-consciously recognisable. “Thus we sometimes fail to appreciate the effects that particular notions of femininity and masculinity have on the conception, design, advertising, purchase, giving and uses of objects, as well as on their critical and popular reception”. (Kirkham and Attfield. 1996. The Gendered Object, p. 1).

With the help of product language, gendered toys and clothes appear from an early age; products are sorted as being ‘for girls’ and ‘for boys’ in the store, as identified by Ehrnberger, Rasanen and Ilstedt in their 2012 article ‘Visualising Gender Norms in Design’ in the International Journal of Design. Product language is mostly used in the branding aspect of design: how a product or object is portrayed, not only what the written language says. Product language relates to how the object is showcased and portrayed through colours, shapes and patterns. A modern example of this is the branding for the Yorkie chocolate bar, whose slogan, ‘Not for girls’, was publicly known as being gender-biased towards men. There is no hiding the fact that the language the company uses is targeted at men: it promotes a brand that is strong, chunky and ‘hard’ in an unsophisticated way, all of which carries connotations of being ‘male’, arguably even ‘alpha male’, to make it more attractive to men. The chosen colours also suggest this, using navy blue, dark purple, yellow and red, which are bold and form a typically ‘male’ palette. Another example is the advertisement of tissues. Tissues, no matter where you buy them, do exactly the same thing irrespective of gender, so why are some tissues targeted at women and some at men? Could it be that this gender targeting, by avoiding neutrality, helps sell more tissues?

Product language is very gender-specific when it comes to clothing brands and toys for kids. “Girls should wear princess dresses, play with dolls and toy housework products, while boys should wear dark clothes with prints of skulls or dinosaurs, and should play with war toys and construction kits”. (Ehrnberger, Rasanen, Ilstedt, 2012. Visualising Gender Norms in Design. International Journal of Design). When branding things for children, the separation between girl and boy is extremely common; using language like ‘action’, which has male connotations, or ‘princess’, which has female connotations, appeals to consumers because these words are relatable to them and to their children. In modern society most people find it difficult not to identify blue with boys and pink with girls, even for newborns. If you walk into any department store, toy store or other store that caters to children, you will see the separation between genders, whether in clothes, toys or anything in between. The separation is obvious through the colour branding used. On the girls’ side pink, yellow and lilac are used: soft, bright, happy colours applied to everything from toy babies and dolls to hats and scarves. Conversely, on the boys’ side blue, green and black are used: bold, dark, more primary colours applied to everything from trucks to a pair of trousers.

Some companies have begun to notice how detrimental this separation is becoming and how it could hold back the advancement and opening up of our society; one example is the John Lewis Partnership.

John Lewis is a massive department store that has been in business for nearly fifty years. In 2017 it decided to scrap the separate girls’ and boys’ sections for the clothing range in its stores and name the range ‘Childs wear’, a gender-neutral name, allowing it to design clothing that lets children wear whatever they want without being told ‘no, that is a boys’ top, you can’t wear that because you’re a girl’ or vice versa. Caroline Bettis, head of children’s wear at John Lewis, said: “We do not want to reinforce gender stereotypes within our John Lewis collections and instead want to provide greater choice and variety to our customers, so that the parent or child can choose what they would like to wear”. Possibly the only issue with this stance is the price point: John Lewis is typically known as a higher-priced high street store, which means it is not accessible for everyone to shop there. Campaign group Let Clothes be Clothes commented: “Higher-end, independent clothing retailers have been more pro-active at creating gender-neutral collections, but we hope unisex ranges will filter down to all price points. We still see many of the supermarkets, for example, using stereotypical slogans on their clothing,” (http://www.telegraph.co.uk/news/2017/09/02/john-lewis-removes-boys-girls-labels-childrens-clothes/).

Having a very well-known brand make this move should reinforce, encourage and inspire others to join in with the development. This change is a bold use of product language, and since it applies not just to one specific product but to advertising and marketing as well, it amounts to a whole rebrand of the company; by not using gender-specific words it takes away the automatic stereotypes attached to buying anything for children.

Equality is the state of being equal, be it in status, rights or opportunities, so when it comes to design why does this attribute get forgotten? This is not a feminist rant: gender equality affects both males and females in the design world, and when designing, everything should be equal and fair to both sexes. “Gender equality and equity in design is often highlighted, but it often results in producing designs that highlight the differences between men and women, although both the needs and characteristics vary more between individuals than between genders” (Hyde 2005). Hyde’s point is still contemporary and relevant: gender equality in design is very important, but gender is not the sole issue; things can be designed for a specific gender, yet even if you are female you might not relate to the gender-specific clothes for your sex. Design is about making and creating something for someone or something, not just for a gender. “Post-feminism argues that in an increasingly fragmented and diverse world, defining one’s identity as male or female is irrelevant, and can be detrimental”. (https://www.cl.cam.ac.uk/events/experiencingcriticaltheory/Satchell-WomenArePeople.pdf).

Many up-and-coming independent brands and companies have been launching unisex clothing lines for a number of years; most were doing so, and pushing the movement, well before gender equality in design became a mainstream media issue. One company pushing back against gender norms is Toogood London; another is GFW, Gender Free World. Gender Free World is a company created by a group of people who all think on the same wavelength when it comes to gender equality. In fact their ‘Mission Statement’ sets this out as a core ethos (which, incidentally, is obviously an influence on John Lewis when you look at the transferability of the phraseology): “GFW Clothing was founded in 2015 (part of Gender Free World Ltd) by a consortium of like-minded individuals who passionately believe that what we have in our pants has disproportionately restricted the access to choice of clothing on the high street and online.” https://www.genderfreeworld.com/pages/about-g. Lisa Honan is the co-founder of GFW; her main reason for starting a company like this was ‘sheer frustration’ at the lack of options for her taste and style on the market. She had shopped in both male and female departments but never found anything that fitted, especially when she went for a male piece of clothing. During an interview with Saner, Honan commented that men’s shirts did not fit her because she had a woman’s body, and it got her thinking: ‘why is there a man’s aisle and a woman’s aisle, and why do you have to make that choice?’. She saw that you cannot make many purchases without being forced to define your own gender, which reinforces the separation between genders in fashion; if she feels this way many others must too, and they do, or there would not be such a big potential business opportunity.

In my design practice of Communication Design, gender plays a huge role, from colour choices to the typefaces used. Most work communication designers create and produce will either represent a brand or actually brand a company, so when choosing options, potential gender stereotyping should be taken into consideration. The points discussed above, concerning the gender system, product language, gender norms and equality and equity in design, serve as a caution to graphic designers not to fall into any pitfalls when designing.

Designing does not mean simply male or female; designing means creating and producing ‘something’ for ‘someone’, no matter their identifiable or chosen gender. If a company produces products targeted specifically at men and, after a robust examination of the design concept, I felt that using blue would enhance their brand and awareness among their target demographic, then blue would be used; in just the same way, if pink works for the customer, then, put simply, it works.

To conclude, exploring the key points of gender in the design world only showcases the many issues that remain.


The stigma surrounding mental illness

Mental illness is defined as a health problem resulting from complex interactions between an individual’s mind, body and environment which can significantly affect their behavior, actions and thought processes. A variety of mental illnesses exist, impacting the body and mind differently, whilst affecting the individual’s mental, social and physical wellbeing to varying degrees. A range of psychological treatments have been developed in order to assist people living with mental illness, however social stigma can prevent individuals from successfully engaging with these treatments. Social or public stigma is characterized by discriminatory behavior and prejudicial attitudes towards people with mental health problems resulting from the psychiatric label they possess (Link, Cullen, Struening & Shrout, 1989). The stigma surrounding labelling oneself with a mental illness causes individuals to hesitate in regards to seeking help as well as resistance to treatment options. Stigma and its effects can vary depending on demographic factors including age, gender, occupation and community. There are many strategies in place to attempt to reduce stigma levels which focus on educating people and changing their attitudes towards mental health.

Prejudice, discrimination and ignorance surrounding mental illnesses result in a public stigma which has a variety of negative social effects on individuals with mental health problems (Thornicroft et al 2007). An understanding of how stigma forms can be gained through the Attribution Model, which identifies four steps involved in the formation of a stigma (Link & Phelan, 2001). The first step is ‘labelling’, whereby key traits are recognised as portraying a significant difference. The next step is ‘stereotyping’, whereby these differences are defined as undesirable characteristics, followed by ‘Separating’, which makes a distinction between ‘normal’ people and the stereotyped group. Stereotypes surrounding mental illnesses have been developing for centuries, with early beliefs being that individuals suffering from mental health problems were possessed by demons or spirits. ‘Explanations’ such as these promoted discrimination within the community, preventing individuals from admitting any mental health problems due to a fear of retribution (Swanson, Holzer, Ganju & Jono, 1990). The final step in the Attribution Model described by Link and Phelan is ‘Status Loss’, which leads to the devaluing and rejection of individuals in the labelled group (Link & Phelan, 2001). An individual’s desire to avoid the implications of public stigma causes them to avoid or drop out of treatment for fear of being associated with negative stereotypes (Corrigan, Druss and Perlick, 2001). One of the main stereotypes surrounding mental illness, especially depression and Post Traumatic Stress Disorder, is that people with these illnesses are dangerous and unpredictable (Wang & Lai, 2008). Wang and Lai carried out a survey in which 45% of participants considered people with depression dangerous; these results may be subject to some reporting bias, yet a general inference can be made. Another survey found that a large proportion of people also confirmed that they were less likely to employ someone with mental health problems (Reavley & Jorm, 2011). This study highlights how public stigma can affect employment opportunities, consequently creating a greater barrier for anyone who would benefit from seeking treatment.

Certain types of stigma are unique to, and consequently more severe for, certain groups within society. Approximately 22 soldiers or veterans commit suicide every day in the United States due to Post Traumatic Stress Disorder (PTSD) and depression. A study surveying soldiers found that of all those who met the criteria for a mental illness, only 38% would be interested in receiving help and only 23-30% actually ended up receiving professional help (Hoge et al, 2004). There is an enormous stigma surrounding mental illness within the military, given the high value it places on mental fortitude, strength, endurance and self-sufficiency (Staff, 2004). A soldier who admits to having mental health problems is deemed not to adhere to these values and thus appears weak or dependent, placing greater pressure on the individual to deny or hide any mental illness. Another factor in soldiers avoiding treatment is a fear of social exclusion, as it is common in military culture for some personnel to socially distance themselves from soldiers with mental health problems (Britt et al, 2007). This exclusion is due to the stereotype that mental health problems make a soldier unreliable, dangerous and unstable. Surprisingly, individuals with mental health problems who seek treatment are deemed more emotionally unstable than those who do not; thus the stigma surrounding therapy creates a barrier for individuals to start or continue their treatment (Porath, 2002). Furthermore, soldiers face the fear that seeking treatment will negatively affect their career, both in and out of the military, with 46 percent of employers considering PTSD an obstacle when hiring veterans in a 2010 survey (Ousley, 2012). The stigma associated with mental illness in the military is extremely detrimental to soldiers’ wellbeing, as it prevents them from seeking or successfully engaging in treatment for mental illnesses, which can have tragic consequences.

Adolescents and young adults with mental illness have the lowest rates of seeking professional help and treatment, despite the high occurrence of mental health problems (Rickwood, Deane & Wilson, 2007). Adolescents’ lack of willingness to seek help and treatment for mental health problems is catalyzed by the anticipation of negative responses from family, friends and school staff (Chandra & Minkovitz, 2006). A Queensland study of people aged 15–24 years showed that 39% of the males and 22% of the females reported that they would not request help for emotional or distressing problems (Donald, Dower, Lucke & Raphael, 2000). A 2010 survey of adolescents with mental health problems found that 46% described experiencing feelings of distrust, avoidance, pity and prejudice from family members, portraying how negative family responses and attitudes create a significant barrier to seeking help (Moses, 2010). Similarly, a study on adolescent depression noted that teenagers who felt more stigmatized, particularly within the family, were less likely to seek treatment (Meredith et al., 2009). Furthermore, adolescents with unsupportive parents would struggle to pay for treatment and transportation, further preventing successful treatment of the illness. Unfortunately, the generation of stigma is not unique to family members; adolescents also report having felt discriminated against by peers and even school staff (Moses, 2010). The main step towards seeking help and engaging in treatment for mental illness is to acknowledge that there is a problem and to be comfortable enough to disclose this information to another person (Rickwood et al, 2005). However, in another 2010 study of adolescents, many expressed fear of being bullied by peers, subsequently leading to secrecy and shame (Kranke et al., 2010). The role of public stigma in generating this shame and denial is significant and can thus be identified as a factor preventing adolescents from seeking support for their mental health problems. A 2001 study testing the relationship between adherence to medication (in this case, antidepressants) and perceived stigma levels determined that individuals who accepted the antidepressants had lower perceived stigma levels (Sirey et al, 2001). This empirical data clearly illustrates the correlation between public stigma levels and an individual’s engagement in treatment, implying that stigma remains a barrier to treatment. Public stigma can therefore be identified as a causative factor in the majority of adolescents not seeking support or treatment for their mental health problems.

One of the main strategies used by society to help reduce the public stigma surrounding mental illness is education. Educating people about the common misconceptions of mental health challenges the inaccurate stereotypes and substitutes them with factual information (Corrigan et al., 2012). There is good evidence that people who have more information about mental health problems are less stigmatizing than people who are misinformed about them (Corrigan & Penn, 1999). The low cost and far-reaching nature are beneficial aspects of the educational approach. Educational approaches are often aimed at adolescents, as it is believed that by educating children about mental illness, stigma can be prevented from emerging in adulthood (Corrigan et al., 2012). A 2001 study testing the effect of education on 152 students found that levels of stigmatization were lessened following the implementation of the strategy (Corrigan et al, 2001). However, it was also determined that combining a contact-based approach with the educational strategy would yield the highest levels of stigma reduction. Studies have also shown that a short educational program can be effective at reducing individuals’ negative attitudes toward mental illness and increasing their knowledge of the issue (Corrigan & O’Shaughnessy, 2007). The effect of an educational strategy varies depending on what type of information is communicated. The information provided should deliver realistic descriptions of mental health problems and their causes, as well as emphasizing the benefits of treatment. By delivering accurate information, the negative stereotypes surrounding mental illness can be decreased and the public’s views on the controllability and treatment of psychological problems can be altered (Britt et al, 2007). Educational approaches mainly focus on improving knowledge and attitudes surrounding mental illness and do not focus directly on changing behavior; therefore, a clear link cannot be made as to whether educating people actually reduces discrimination. Although this remains a major limitation, educating people at an early age can help ensure that discrimination and stigmatization decrease in the future. Reducing the negative attitudes surrounding mental illness can encourage those suffering from mental health problems to seek help. Providing individuals with correct information regarding the mechanisms and benefits of treatment, such as psychotherapy or drugs like antidepressants, increases their mental health literacy and therefore the likelihood of seeking treatment (Jorm and Korten, 1997). People who are educated about mental health problems are less likely to believe or generate stigma surrounding mental illnesses and therefore contribute to reducing stigma, which in turn will increase levels of successful treatment for themselves and others.

The public stigma surrounding mental health problems is defined by negative attitudes, prejudice and discrimination. This negativity in society is very debilitating for any individual suffering from mental illness and creates a barrier to seeking help and engaging in successful treatment. The negative consequences of public stigma for individuals include being excluded, not being considered for jobs, and friends and family becoming socially distant. By educating people about the causes, symptoms and treatment of mental illnesses, stigma can be reduced, as misinformation is usually a key factor in the promotion of harmful stereotypes. An individual is more likely to engage in successful treatment if they accept their illness and if stigma is reduced.


Frederick Douglass, Malcolm X and Ida Wells

Civil Rights are “the rights to full legal, social, and economic equality”. Following the American Civil War, slavery was officially abolished in the United States on December 6th, 1865. The Fourteenth and Fifteenth Amendments established a legal framework for political equality for African Americans; many thought that this would lead to equality between whites and blacks, however this was not the case. Despite slavery’s abolition, Jim Crow racial segregation in the South meant that blacks were denied political rights and freedoms and continued to live in poverty and inequality. It took nearly 100 years of campaigning until the Civil Rights and Voting Rights Acts were passed, making it illegal to discriminate based on race, colour, religion, sex or national origin and ensuring minority voting rights. Martin Luther King was prominent in the Modern Civil Rights Movement (CRM), playing a key role in legislative and social change. His assassination in 1968 marked the end of a distinguished life spent helping millions of African Americans across the US. The contributions of black activists including the politician Frederick Douglass, the militant Malcolm X and the journalist Ida Wells throughout the period will be examined from political, social and economic perspectives. When comparing their significance to that of King, consideration must be given to the times in which the activists were operating and to prevailing social attitudes. Although King was undeniably significant, it was the combined efforts of all the black activists and the mass protest movement in the mid-20th century that eventually led to African Americans gaining civil rights.

The significance of King’s role is explored in Clayborne Carson’s ‘The Papers of Martin Luther King’ (Appendix 1). Carson, a historian at Stanford University, suggests that “the black movement would probably have achieved its major legislative victory without King’s leadership”. Carson does not believe King was pivotal in gaining civil rights, but that he quickened the process. The mass public support shown in the March on Washington, 1963, suggests that Carson is correct in arguing that the movement would have continued its course without King. However, it was King’s oratory skill in his ‘I Have a Dream’ speech that was most significant. Carson suggests key events would still have taken place without King: “King did not initiate…” the Montgomery bus boycott; rather, Rosa Parks did. His analysis of the idea of a ‘mass movement’ furthers his argument that King’s role was less significant. Carson suggests that ‘mass activism’ in the South resulted from socio-political forces rather than ‘the actions of a single leader’; King’s leadership was not vital to the movement gaining support, and legislative change would have occurred regardless. The source’s tone is critical of King’s significance but passive in its dismissal of his role: phrases such as “without King” diminish him in a less aggressive manner. Carson, a civil rights historian with a PhD from UCLA, has written books and documentaries including ‘Eyes on the Prize’ and so is qualified to judge. The source was published in 1992 in conjunction with King’s wife, Coretta, who took over as head of the CRM after King’s assassination and extended its role to include women’s rights and LGBT rights. Although this association might make him subjective, his critique of King’s role suggests he presents a balanced view. Carson produced his work two decades after the movement and three decades before the ‘Black Lives Matter’ marches of the 21st century, and so was less politically motivated in his interpretation. The purpose of his work was to edit and publish the papers of King on behalf of The King Institute, to show King’s life and the CRM he inspired. Overall, Carson argues that King was significant in quickening the process of gaining civil rights, but he believes that without King’s leadership the campaign would have taken a similar course and that US mass activism was the main driving force.

In his book ‘Martin Luther King Jr.’ (Appendix 2), historian Peter Ling argues, like Carson, that King was not indispensable to the movement, but differs in suggesting that it was other activists, not mass activism, who brought success. Ling believes that ‘without the activities of the movement’ King might just have been another ‘Baptist preacher who spoke well’. It can be inferred that Ling believes King was not vital to the CRM and was simply a good orator.

Ling’s reference to activist Ella Baker (1903-86), who ‘complained that “the movement made Martin, not Martin the Movement”’, suggests that King’s political career was of more importance to him than the goal of civil rights. Baker told King she disapproved of his being hero-worshipped, and others argued that he was ‘taking too many bows and enjoying them’. Baker promoted activists working together, as seen through her influence in the Student Nonviolent Coordinating Committee (SNCC). Clearly many believed King was not the only individual to have an impact on the movement, further highlighting Ling’s argument that multiple activists were significant.

Finally, Ling argues that ‘others besides King set the pace for the Civil Rights Movement’, which explicitly shows how other activists working for the movement were the true heroes: they orchestrated events and activities, yet it was King who benefitted. However, King himself indicated that he was willing to use successful tactics suggested by others. The work of activists such as Philip Randolph, who organised the 1963 March, highlights how individuals other than King played a greater role in moving the CRM forward. The tone attacks King, using words such as ‘criticisms’ to diminish his role, while Ling’s remark that he has ‘sympathy’ for Miss Baker shows his positive tone towards other activists.

Ling was born in the UK, studied History at Royal Holloway College and took an MA in American Studies at the Institute of United States Studies, London. This gives Ling an international perspective, making him less subjective as he has no political motivations; nevertheless it also limits his interpretation, in that he has no first-hand knowledge of civil rights in the US. The book was published in 2002, which gives Ling hindsight, making his judgment more accurate and less subjective as he is no longer affected by King’s influence. Similarly, his knowledge of American history and the CRM makes his work accurate. Unlike Carson, who was a black activist and attended the 1963 March, the white Ling was born in 1956 and was not involved with the CRM, and so will have a less accurate interpretation. A further limitation is his selectivity; he gives no attention to the successes of King, including his inspiring ‘I Have a Dream’ speech. As a result, it is not a balanced interpretation and its value is therefore limited.

Overall, although weaker than Carson’s interpretation, Ling’s argument is of value in understanding King’s significance. Both revisionists, the two historians agree that King was not the most significant factor in gaining civil rights, but they differ on who or what they see as more important: Carson argues that mass activism was vital to success, whereas Ling believes it was other activists.

A popular pastor in the Baptist Church, King was the leader of the CRM when it gained its black rights successes in the 1960s. He demonstrated the power of the church and the NAACP in the pursuit of civil rights. His oratory skills ensured many blacks and whites attended the protests and increased support. He understood the power of the media in getting his message to a wide audience and in putting pressure on the US government. The Birmingham campaign of 1963, where peaceful protestors including children were violently attacked by police, and the inspirational ‘Letter from Birmingham Jail’ that King wrote were heavily publicised, and US society gradually sympathised with the black ‘victims’. Winning the Nobel Peace Prize gained the movement further international recognition. King’s leadership was instrumental in the political achievements of the CRM, inspiring the grassroots activism needed to apply sufficient pressure on government, which behind-the-scenes activists like Baker had worked tirelessly to build. Nevertheless, there had been a generation of activists who played their parts, often through the church, publicising the movement, achieving early legislative victories and helping to kick-start the modern CRM and the idea of nonviolent civil disobedience. King’s significance is that he was the figurehead of the movement at the time when civil rights were eventually won.

The pioneering activist Frederick Douglass (1818-95) had political significance for the CRM, holding federal positions which enabled him to influence government and Presidents throughout the Reconstruction era. He is often called the ‘father of the civil rights movement’. Douglass held several prominent roles, including US Marshall for DC. He was the first black American to hold high office in government and, in 1872, the first African American nominated for US Vice President, which was particularly significant as blacks’ involvement in politics was severely restricted at the time. Like King he was a brilliant orator, lecturing on civil rights in the US and abroad. Compared to King, Douglass was significant in the CRM: he promoted equality for blacks and whites, and although unlike King he did not ultimately achieve black civil rights, this was because he was confined by the era in which he lived.

The contribution of W.E.B. Du Bois (1868-1963) was significant, as he laid the foundations on which future black activists, including King, could build. In 1909 he established the National Association for the Advancement of Coloured People (NAACP), the most important 20th-century black organisation other than the church. King became a member of the NAACP and used it to organise the bus boycott and other mass protests. As a result, the importance of Du Bois to the CRM is that King’s success depended on the NAACP; Du Bois is therefore of similar significance to King, if not more, in the pursuit of black civil rights.

Ray Stannard Baker’s 1908 article for The American Magazine speaks of Du Bois’ enthusiastic attitude to the CRM, his intelligence and his knowledge of African Americans (Appendix 3). The quotation of Du Bois at the end of the extract reads “Do not submit! agitate, object, fight,” showing he was not passive but preached messages of rebellion. The article describes him with vocabulary such as “critical” and “impatient”, showing his radical, passionate side. Baker also sets out Du Bois’ contrasting opinions compared with his contemporary, the black activist Booker T Washington. This is evident when it says “his answer was the exact reverse of Washington’s”, demonstrating how he differed from the passive, ‘education for all’ Washington. Du Bois valued education, but believed in educating an elite few, the ‘talented tenth’, who could strive for rapid political change. The tone is positive towards Du Bois, praising him for being a ferocious character dedicated to achieving civil rights; through phrases such as “his struggles and his aspirations” this dedicated and praising tone is developed. The American Magazine, founded in 1906, was an investigative US paper. Many contributors to the magazine were ‘muckraking’ journalists, meaning that they were reformists who attacked societal views and traditions. As a result, the magazine would be subjective, favouring the radical Du Bois, challenging the Jim Crow South and appealing to its radical target audience. The purpose of the source was to confront racism in the US, and so it would be politically motivated, making it subjective regarding civil rights. However, some evidence suggests that Du Bois was not radical: his Paris Exposition in 1900 showed the world real African Americans. Socially he made a major contribution to black pride, contributing to the black unity felt during the Harlem Renaissance. The Renaissance popularised black culture and so was a turning point in the movement; in the years after, the CRM grew in popularity and became a national issue. Finally, the source refers to his intelligence and educational prowess: he carried out economic studies for the US Government and was educated at Harvard and abroad. It can therefore be inferred that Du Bois rose to prominence and made a significant contribution to the movement because of his intelligence and his understanding of US society and African American culture. One of the founders of the NAACP, his significance in attracting grassroots activists and uniting black people was vital. The NAACP leader Roy Wilkins highlighted Du Bois’ contribution at the March on Washington, following his death the day before, saying, “his was the voice that was calling you to gather here today in this cause.” Wilkins was suggesting that Du Bois had started the process which led to the March.

Rosa Parks (1913-2005) and Charles Houston (1895-1950) were NAACP activists who benefitted from the work of Du Bois and achieved significant political success in the CRM. Parks, the “Mother of the Freedom Movement”, was the spark that ignited the modern CRM by protesting on a segregated bus. Following her refusal to move to the black area she was arrested. Parks, King and NAACP members staged a yearlong bus boycott in Montgomery. Had it not been for Parks, King may never have had the opportunity to rise to prominence or had mass support for the movement, and so her activism was key in shaping King. Lawyer Houston helped defend black Americans, breaking down the deep-rooted discriminatory and segregationist laws of the South. It was his ground-breaking use of sociological theories that formed the basis of the Brown v. Board of Education (1954) ruling that ended segregation in schools. Although Houston is less prominent compared to King, his work was significant in reducing black discrimination, gaining him the nickname ‘The man who killed Jim Crow’. Nonetheless, had Du Bois’ NAACP not existed, Parks and Houston would never have had an organisation to support them in their fight; likewise King would never have gained the mass support for civil rights.

Trade unionist Philip Randolph (1890-1979) brought about important political changes. His pioneering use of nonviolent confrontation had a significant impact on the CRM and was widely used throughout the 1950s and 60s. Randolph had become a prominent civil rights spokesman after organising the Brotherhood of Sleeping Car Porters in 1925, the first black-majority union. Mass unemployment after the US Depression led to civil rights becoming a political issue; US trade unions supported equal rights and black membership grew. Randolph was striving for political change that would bring equality. Aware of his influence, in 1941 he threatened a protest march which pressured President Roosevelt into issuing Executive Order 8802, an important early employment civil rights victory. There was a shift in the direction of the movement towards the military because, after the Second World War, black soldiers felt disenfranchised and became the ‘foot soldiers of the CRM’, fighting for equality in these mass protests. Randolph led peaceful protests which resulted in President Truman issuing Executive Order 9981, desegregating the Armed Forces, showing his key political significance. Significantly, this legislation was a catalyst leading to further desegregation laws. His contribution to the CRM, support of King’s leadership and masterminding of the 1963 March made his significance equal to King’s.

King realised that US society needed to change and, inspired by Gandhi, he too used non-violent mass protest to bring about change, including the Greensboro sit-ins to desegregate lunch counters. Similarly, activist Booker T. Washington (1856-1915) significantly improved the lives of thousands of southern blacks who were poorly educated and trapped in poverty following Reconstruction, through his pioneering work in black education. He founded the Tuskegee Institute. In his book ‘Up from Slavery: An Autobiography’ (Appendix 4) he suggests that gaining civil rights would be difficult and slow, but that all blacks should work on improving themselves through education and hard work to peacefully push the movement forward. He says that “the according of the full exercise of political rights” will not be an “overnight gourdvine affair” and that a black should “deport himself modestly in regard of political claim”. This implies that Washington wanted peaceful protest and acknowledged the time it would take to gain equality, making his philosophy like King’s. Washington’s belief in using education to gain the skills to improve lives and fight for equality is evident through the Tuskegee Institute, which educated 2,000 blacks a year.

The tone of the source is peaceful, calling for justice in the South. Washington uses words such as “modestly” in an appeal for peace and “exact justice” to show how he believes in equal political rights for all. The reliability of the source is mixed: Washington is subjective, as he wants his autobiography to be read, understood and supported. The intended audience would have been anyone in the US, particularly blacks, whom Washington wanted to inspire to protest, and white politicians who could advance civil rights. The source is contemporary; it was written in 1901, in the era of the Jim Crow South. Washington would have been politically motivated in his autobiography, demanding legislative change to give blacks civil rights. There would also have been an educational factor that contributed to his writing: his Tuskegee Institute and educational philosophy had a deep impact on his autobiography.

The source shows how and why the unequal South should no longer be segregated. He was undoubtedly significant: as his reputation grew he became an important public speaker and is considered to have been a leading spokesman for black people and issues, like King. An excellent role model, a former slave who influenced statesmen, he was the first black American to dine with the President (Roosevelt) at the White House, showing blacks they could achieve anything. Activist Du Bois described him as “the one recognised spokesman of his 10 million fellows … the most striking thing in the history of the American Negro”. Although not as decisive in gaining civil rights as King, Washington was important in preparing blacks for urban and working life and in empowering the next generation of activists.

Inspired by Washington, the charismatic Jamaican radical activist Marcus Garvey (1880-1940) arrived in the US in 1916. Garvey had a social significance to the movement, striving to better the lives of US blacks. He rose to prominence during the ‘Great Migration’, when poor southern blacks were moving to the industrial North, turning Southern race problems into national ones. He founded the Universal Negro Improvement Association (UNIA), which had over 2,000,000 members in 1920. He appealed to discontented First World War black soldiers who had returned home to violent racial discrimination. The importance of the First World War was paramount in enabling Garvey to gain the vast support he did in the 1920s. Garvey published a newspaper, the Negro World, which spread his ideas about education and Pan-Africanism, the political union of all people of African descent. Garvey, like King, gained a greater audience for the CRM: in 1920 he led an international convention in Liberty Hall and a 50,000-strong parade through Harlem. Garvey inspired later activists such as King.


Reflective essay on use of learning theories in the classroom

Over recent years teaching theories have become more common in the classroom, all in the hope of supporting students and being able to further their knowledge by understanding their abilities and what they need to develop. As a teacher it is important to embed teaching and learning theories in the classroom, so that as teachers we can teach students according to their individual needs.

Throughout my research I will be looking into the key differences between two theories used in classrooms today. I will also be critically analysing the role of the teacher in the lifelong learning sector, by analysing the professional and legislative frameworks, as well as looking for a deeper understanding of classroom management, why it is used, and how to manage different classroom environments, such as managing inclusion and how it is supported through different methods.

Overall, I will be linking this to my own teaching at A Mind Apart (A Mind Apart, 2019). Furthermore, I will develop an understanding of interaction within the classroom and of why communication between fellow teachers and students is important.

The role of the teacher has traditionally been seen as being at the forefront of knowledge. This suggests that the role of the teacher is to pass their knowledge on to their students, known as a ‘chalk and talk’ approach, although this approach is outdated and there are various other ways we now teach in the classroom. Walker believes that ‘the modern teacher is facilitator: a person who assists students to learn for themselves’ (Reece & Walker, 2002). I for one cannot say I fully believe in this approach, as all students have individual learning needs, and some may need more help than others. As the teacher, it is important to know the full capability of your learners, so that lessons can be structured to the learners’ needs. It is important for lessons to involve active learning and discussions, as these will help keep the students engaged and motivated during class. Furthermore, it is important not only to know what you want the students to be learning, but also, as the teacher, to know what you are teaching; it is important to be prepared and fully involved in your own lesson before you go into any class. As a teacher I make my students my priority, so I leave any personal issues outside the door in order to give my students the best learning environment they could possibly have. It is also important to keep updated on your subject specialism; I double-check my knowledge of my subject regularly, and I find that by following this structure my lessons normally run at a smooth pace.

Taking into consideration that the students I teach are vulnerable, there may be minor interruptions. It is not only important that you as the teacher leave your issues at the door, but also that you make sure the room is free from distractions. Most young adults have a lot of situations which they find hard to deal with, which means you as the teacher are there not only to educate but to make the environment safe and relaxing for your students to enjoy learning. As teachers we not only have the responsibility of making sure the teaching takes place, but we also have responsibilities around exams, qualifications and Ofsted; and as a teacher in the lifelong learning sector it is also vital that you evaluate not only your learners’ knowledge but also yourself as a teacher, so that you are able to improve your teaching strategies and keep up to date.

When assessing yourself and your students it is important not to wait until the end of a term to do this, but to evaluate throughout the whole term. Small assessments are a good way of doing this; it doesn’t always have to be a paper examination. Equally, you can do a quiz, ask questions, use various fun games, or even use online games such as Kahoot to help your students retain their knowledge. This will not only help you as a teacher understand your students’ abilities, but it will also help your students know what they need to work on for next term.

Alongside the already listed roles and responsibilities of being a teacher in the lifelong learning sector, Ann Gravells explains that,

‘Your main role as a teacher should be to teach your students in a way that actively involves and engages your students during every session’ (Gravells, 2011, p.9).

Gravells’ passion is solely based on helping new teachers gain the knowledge and information they need to become successful in the lifelong learning sector. Gravells has achieved this by writing various textbooks on the lifelong learning sector. In her book ‘Preparing to Teach in the Lifelong Learning Sector’ (Gravells, 2011), she states the importance of 13 pieces of legislation. Although I find each of them equally important, I am going to mention the ones I am most likely to use during my teacher training with A Mind Apart.

Safeguarding Vulnerable Groups Act (2006) – Working with young vulnerable adults, I find this is the act I am most likely to use during my time with A Mind Apart. In summary, the Act explains the following: ‘The ISA will make all decisions about who should be barred from working with children and vulnerable adults.’ (Southglos.gov.uk, 2019)
The Equality Act (2010) – As I will be working with people of different sexes, races and disabilities in any teaching job I encounter, I believe the Equality Act (2010) is fundamental to mention. The Equality Act 2010 covers discrimination under one piece of legislation.
Code of Professional Practice (2008) – This code covers all aspects of the activities we as teachers in the lifelong learning sector may encounter. It is based around seven behaviours, including: professional practice, professional integrity, respect, reasonable care, criminal offence disclosure, and responsibility during institute investigations.

(Gravells, 2011)

Although all the acts are equally important, those are the few I would find myself using regularly. I have listed the others below:

Children Act (2004)
Copyright, Designs and Patents Act (1988)
Data Protection Act (1998)
Education and Skills Act (2008)
Freedom of Information Act (2000)
Health and Safety at Work Act (1974)
Human Rights Act (1998)
Protection of Children Act (POCA) (1999)
The Further Education Teachers’ Qualifications Regulations (2007)

(Gravells, 2011)

Teaching theories are much more common in classrooms today; however, there are three main teaching theories which we as teachers are known for using in the classroom daily. Experience shows that the following theories work best: behaviourism, cognitive constructivism, and social constructivism. Taking these theories into consideration, I will compare Skinner’s behaviourist theory with Maslow’s ‘Hierarchy of Needs’ (Maslow, 1987), first introduced in 1954, and look at how I could use these theories in my teaching as a drama teacher in the lifelong learning sector.

Firstly, behaviourism is mostly described as the teacher questioning and the student responding in the way you want them to. Behaviourism is a theory which, in a way, can take control of how the student acts and behaves, if used to its full advantage. Keith Pritchard (Language and Learning, 2019) describes behaviourism as ‘A theory of learning focusing on observable behaviours and discounting any mental activity. Learning is defined simply as the acquisition of a new behaviour.’ (E-Learning and the Science of Instruction, 2019)

An example of how behaviourism works is best demonstrated through the work of Ivan Pavlov (Encyclopaedia Britannica, 2019). Pavlov was a physiologist at the start of the twentieth century who used a method called ‘conditioning’ (Encyclopaedia Britannica, 2019), which is a lot like the behaviourism theory. During his experiment, Pavlov ‘conditioned’ dogs to salivate when they heard a bell ring: as soon as the dogs heard the bell, they associated it with being fed. As a result the dogs were behaving exactly how Pavlov wanted them to behave, and so they had successfully been ‘conditioned’. (Encyclopaedia Britannica, 2019)

During Pavlov’s conditioning experiment there were four main stages in the process of classical conditioning. These include:

Acquisition, which is the initial learning;
Extinction, meaning the dogs in Pavlov’s experiment may stop responding if no food is presented to them;
Generalisation, where, after learning a response, the dog may respond to other stimuli with no further training. For example: if a child falls off a bike and injures themselves, they may be frightened to get back on the bike again. And lastly,
Discrimination, which is the opposite of generalisation; for example, the dog will not respond in the same way to another stimulus as it did to the first one.

Pritchard states ‘It involves reinforcing a behaviour by rewarding it’, which is what Pavlov’s dog experiment does. Although rewarding behaviour can be positive, reinforcement can also be negative: bad behaviour can be discouraged by punishment. The key aspects of conditioning are as follows: reinforcement, positive reinforcement, negative reinforcement, and shaping. (Encyclopaedia Britannica, 2019)

Behaviourism is one of the learning theories I use in my teaching today. Working at A Mind Apart (A Mind Apart, 2019), I work with challenging young people. A Mind Apart is a performing arts organisation especially targeted at vulnerable and challenging young people, aiming to help better their lives; hence, if I use the behaviourism theory well, it will inspire the students to do better. Behaviourism rests on the principle of stimulus and response: it is driven by the teacher, who is responsible for how the student behaves and how that behaviour is achieved. The theory came about in the early twentieth century and concentrated on how individuals behave. In the work I do at A Mind Apart as a trainee performing arts teacher, I can identify with behaviourism greatly: every Thursday, when my two-hour class is finished, I take five minutes out of my lesson to award a ‘Star of the week’. It is an excellent method to encourage students to carry on the way they have been behaving, and to influence them to strive towards something in the future. Furthermore, I have discovered that this theory can work well in any subject specialism, not just performing arts. The behaviourism theory is straightforward, as it depends only on observable behaviour and describes several universal laws of behaviour, and its positive and negative reinforcement strategies can be extremely effective. The students we teach at A Mind Apart often come to us with emotional wellbeing issues, which is why most of the time these students find it hard to focus, or even to learn, in a school environment. We are there to give an inclusive learning environment and to use the time we have with them so they can move forward at their own pace and develop their academic and social skills, so that in the future, when they leave us to move on to college or jobs, our work with them will have helped them meet new people and gain useful knowledge through the behaviourism teaching theory. Some of them may struggle with obstacles during their lives, and it is not always easy to influence someone into thinking or behaving the way you want them to, but with time and persistence I have found that this theory can work. It is known that…

‘Positive reinforcement or rewards can include verbal feedback such as ‘That’s great, you’ve produced that document without any errors’ or ‘You’re certainly getting on well with that task’ through to more tangible rewards such as a certificate at the end’… (Gravells, 2019)

Gagne (Mindtools.com, 2019) was an American educational psychologist best known for his nine levels of learning. Regarding Gagne’s nine levels of learning (Mindtools.com, 2019), I have done some in-depth research into just a couple of the nine levels, so that I am able to understand them and how his theory links to behaviourism. The nine levels are:

Create an attention-grabbing introduction.
Inform the learner about the objectives.
Stimulate recall of prior knowledge.
Create goal-centred eLearning content.
Provide online guidance.
Practice makes perfect.
Offer timely feedback.
Assess early and often.
Enhance transfer of knowledge by tying it into real world situations and applications.

(Mindtools.com, 2019)

Informing the learner of the objectives is the level I can relate to the most during my lessons. I find it important in many ways that you as the teacher should let your students know what they are going to be learning during that specific lesson. This will help them have a better understanding throughout the lesson, and even further engage them from the very start. Linking this to behaviourism, during my lessons I tell my students what I want from them that lesson, and what I expect them, with their individual needs, to be learning or to have learnt by the end of the lesson. If I believe learning has taken place during my lesson, I reward them with a game of their choice at the end of the lesson. In their mind they understand they must do as they are asked by the teacher, or the reward of playing a game at the end of the lesson will be forfeited. As Pavlov’s dog experiment shows (E-Learning and the Science of Instruction, 2019), this theory does work, though it can take a lot of work. I have built a great relationship with my students, and most of the time they are willing to work to the best of their ability.

Although Skinner’s (E-Learning and the Science of Instruction, 2019) behaviourist theory is based around manipulation, Maslow’s ‘Hierarchy of Needs’ (Verywell Mind, 2019) holds that behaviour and the way people act are based upon childhood events; therefore it is not always easy to manipulate people into thinking the way you do, as they may have had a completely different upbringing, which will determine how they act. Maslow (Verywell Mind, 2019) felt that if you remove the obstacles that stop a person from achieving, then they will have a better chance of achieving their goals; Maslow argues that there are five different levels of need which must be met in order to achieve this. The highest level of need is self-actualisation, which means the person must take full responsibility for themselves; Maslow believed that people can move through to the highest levels if they are in an education which can produce growth. The hierarchy runs, from lowest to highest: physiological needs; safety and security; belonging (recognition); self-esteem; and self-actualisation. (Verywell Mind, 2019)

By way of explanation, the hierarchy lets you know your learners’ needs at different levels during their time in your learning environment. All learners may be at different levels, but they should be able to progress on to the next one when they feel comfortable doing so. There may be knockbacks which your learners as individuals will face, but it is their needs that will motivate the learning. You may find that not all learners want to progress through the levels at a given moment in time; for example, if a learner is happy with the progress they have achieved so far and is content with life, they may want to stay at a certain level.

It is important to use the levels to encourage your learners by working up the hierarchy.

Stage 1 of the hierarchy is the physiological needs – are your learners comfortable in the environment you are providing? Are they hungry or thirsty? Your learners may even be tired. Taking all these factors into consideration, any of them may stop learning from taking place; therefore, it is important to meet all your learners’ physiological needs.

Moving up the hierarchy to safety and security – make your learners feel safe in an environment where they can relax and feel at ease. Are your learners worried about anything in particular? If so, can you help them overcome their worries?

Recognition – do your learners feel like they are part of the group? It is important to help those who don’t feel part of the group to bond with others. Help your learners belong and make them feel welcome. Once recognition is in place, your learners will start to build their self-esteem: are they learning something useful? Although your subject specialism may be second to none, it is important that your passion and drive shine through your teaching. Overall this will result in the highest level, self-actualisation: are your learners achieving what they want to do? Make the sessions interesting and your learners will remember more about the subject in question. (Verywell Mind, 2019)

Furthermore, classroom management comes into force with any learning theory you use whilst teaching. Classroom management is made up of various techniques and skills that we as teachers utilise. Most of today’s classroom management systems are highly effective, as they increase student success. As a trainee teacher, I understand that classroom management can be difficult at times, so I am always researching different methods of managing my class. I don’t believe this comes entirely from methods alone: if your pupils respect you as a teacher, and they understand what you expect of them whilst in your class, you should be able to manage the class fine. Relating this to my placement at A Mind Apart, my students know what I expect of them, and as a result my classroom management is normally good. Following this, there are a few classroom management techniques I tend to follow:

Demonstrating the behaviour you want to see – eye contact whilst talking, phones away in bags/coats, listening when being spoken to and being respectful of each other; these are all good codes of conduct to follow, and they are my main rules whilst in the classroom.
Celebrating hard work or achievements – when I think a student has done well, we as a group will celebrate their achievement, whether it be in education or outside of it; a celebration always helps with classroom management.
Making your session engaging and motivating – this is something all of us trainee teachers find difficult in our first year. As I have found out personally over the first couple of months, you have to get to know your learners, understand what they like to do, and learn which activities keep them engaged.
Building strong relationships – I believe having a good relationship with your students is one of the key factors in managing a classroom. It is important to build trust with your students, make them feel safe and let them know they are in a friendly environment.

When it comes to being in a classroom environment, not all students will adapt to it in the same way, so some may require a different kind of structure to feel included. A key example of this is students with physical disabilities: you may need to adjust the tables or even move them out of the way, or adjust the seating so a student is able to see more clearly. If they have hearing problems, you might write more down on the board, or even give them a sheet at the start of the lesson which lets them know what you will be discussing and any further information they may need. Not only do you need to take physical disabilities into consideration, but it is also important to cater for those who have behavioural problems; it is important to adjust the space to make your students feel safe whilst in your lesson.

Managing your class also means that sometimes you may have to adjust your teaching methods to suit everyone in your class, and understand that it is important to incorporate cultural values. Whilst in the classroom, or even when giving out homework, you may need to take into consideration that some students, especially those with learning difficulties, may take longer to do the work, or may need additional help.

Conclusion

Research has given me a new insight into how many learning theories, teaching strategies and classroom management strategies there are; there are books and websites which help you achieve all the things you need to be able to do in your classroom. Looking back over this essay, I have looked into the two learning theories that I am most likely to use.


Synchronous and asynchronous remote learning during the Covid-19 pandemic

Student’s Motivation and Engagement

Motivation plays an important role in student engagement. Saeed and Zyngier (2012) contend that in order to assess student motivation, researchers should also examine engagement in, and as part of, learning. This shows that there is a relationship between student motivation and engagement. In support of this relationship, Hufton, Elliot, and Illushin (2002) believe that high levels of engagement indicate high levels of motivation: in other words, when students’ levels of motivation are high, their levels of engagement are also high.

Moreover, Dörnyei (2020) suggests that the concept of motivation is closely associated with engagement, and he asserts that motivation must be ensured in order to achieve student engagement. He further offers that any instructional design should aim to keep students engaged, regardless of the learning context, whether traditional or e-learning. In addition, Lewis et al. (2014) reveal that within the online educational environment, students can be motivated by consistently delivering an engaging student-centered experience.

In the context of the Student-Teacher Dialectical Framework embedded with Self-Determination Theory, Reeve (2012) reveals three newly discovered functions of student engagement. First, engagement bridges students’ motivation to highly valued outcomes. Second, student engagement affects the future quality of the learning environment, especially the flow of instruction, its external events, and the teacher’s motivating style. Third, student engagement changes motivation, which means that engagement causes changes in motivation in the future. This highlights that student motivation is both a cause and a consequence. The assertion that engagement can cause changes in motivation is embedded in the idea that students can take actions to meet their own psychological needs and enhance the quality of their motivation. Further, Reeve (2012) asserts that students can be, and are, architects of their own motivation, at least to the extent that they can be architects of their own course-related behavioral, emotional, cognitive, and agentic engagement.

Synchronous and Asynchronous Learning

The COVID-19 pandemic brought great disruption to education systems around the world. Schools struggled with a situation which led to the cessation of classes for an extended period of time, and with other restrictive measures that later impeded the continuation of face-to-face classes. In consequence, there has been a massive change in educational systems around the world as educational institutions strive and put their best efforts into resolving the situation. Many schools addressed the risks and challenges of continuing education amidst the crisis by shifting conventional or traditional learning to distance learning. Distance learning is a form of education, supported by technology, that is conducted beyond physical space and time (Papadopulou, 2020). It is an online mode of education that provides opportunities for educational advancement and learning development among learners worldwide. In order to sustain the educational goals of our country, distance learning is a new way of providing quality education, as far as possible, in public and private institutions, especially for those pursuing higher education. Instructional delivery in distance education can be through a synchronous or an asynchronous mode of learning, through which students can engage and continue to attain quality education despite the pandemic.

Based on the definition of the Easy LMS Company (2020), synchronous learning refers to a learning event in which a group of participants is engaged in learning at the same time (e.g., a Zoom meeting, a web conference, a real-time class), while asynchronous learning refers to the opposite, in which the instructor, the learner, and the other participants are not engaged in the learning process at the same time, so there is no real-time interaction with other people (e.g., pre-recorded discussions, self-paced learning, discussion boards). According to an article issued by the University of Waterloo (2020), synchronous learning is a form of learning delivered as a live presentation which allows students to ask questions, while asynchronous learning can be a recorded presentation that allows students time to reflect before asking questions. Synchronous learning is a typical meeting of students in a virtual setting, with a class discussion in which everybody can participate actively. Asynchronous learning is the use of a learning platform or portal where teachers or instructors can post and update lessons or activities and students can work at their own pace. These types of class instruction are commonly observed at this time, and students have their own preferences when it comes to what works best for them.

In comparing the two types of learning, it is valuable to know the advantages and disadvantages of each in order to see what their real impact on students will be. Wintemute (2021) notes that synchronous learning has greater engagement and direct communication, but it requires a strong internet connection. On the other hand, asynchronous learning has the advantages of schedule flexibility and greater accessibility, yet it is less immersive, and the challenges of procrastination, socialization and distraction are present. Students in synchronous learning tend to adapt to the changes of learning with classmates in a virtual setting, while asynchronous learning introduces a new setting where students can choose when to study.

In the middle of the crisis, asynchronous learning can be more favorable than synchronous learning, because most of us are struggling in this pandemic. One of the principal advantages of asynchronous online learning is that it offers more flexibility, allowing learners to set their own schedule and work at their own pace (Anthony and Thomas, 2020). In contrast, synchronous learning allows students to feel connected in a virtual world, and it can give them the assurance of not being isolated while studying lessons, because they can have live interaction and an exchange of ideas and other valuable inputs, helping the class understand the lessons well with the help of teachers. The main advantages of synchronous learning are that instructors can explain specific concepts when students are struggling, and students can get immediate answers to their concerns in the process of learning (Hughes, 2014). In the view of Delgado (2020), the advantages and disadvantages will not matter without a pedagogical methodology that considers the technology and its optimization. Furthermore, the quality of learning depends on good planning and design, and on reviewing and evaluating each type of learning modality.

Synthesis

Motivating students has been a key challenge facing instructors in the context of online learning (Zhao et al., 2016). Motivation is one of the bases for students to do well in their studies: when students are motivated, the outcome is a good mark. In short, motivation is a way to push them to study more and get high grades. According to Zhao (2016), research on motivation in an online learning environment has revealed that there are differences in learning motivation among students from different cultural backgrounds. Motivation is described as “the degree of people’s choices and the degree of effort they will put forth” (Keller, 1983). Learning is closely linked to motivation because it is an active process that necessitates intentional and deliberate effort. Educators must build a learning atmosphere in which students are highly encouraged to participate both actively and productively in learning activities if they want to get the most out of school (Stipek, 2002). John Keller (1987), in his study, revealed that attention and motivation will not be maintained unless the learner believes the teaching and learning are relevant. According to Zhao (2016), a strong interest in a topic will lead to mastery goals and intrinsic motivation.

Engagement can be perceived in the interaction between students and teachers in online classes. Student engagement, according to Fredricks et al. (2004), is a meta-construct that includes behavioral, emotional, and cognitive involvement. While there is substantial research on behavioral engagement (i.e., time on task), emotional engagement (i.e., interest and value), and cognitive engagement (i.e., self-regulation and learning strategies), what distinguishes engagement is its capacity as a multidimensional or “meta”-construct that encompasses all three dimensions.

Motivation plays an important role in student engagement. Saeed and Zyngier (2012) contend that in order to assess student motivation, researchers should also examine engagement in, and as part of, learning, and Lewis et al. (2014) reveal that within the online educational environment, students can be motivated by consistently delivering an engaging student-centered experience. In the context of the Student-Teacher Dialectical Framework embedded with Self-Determination Theory, Reeve (2012) identifies three functions of student engagement: engagement bridges students’ motivation to highly valued outcomes; it affects the future quality of the learning environment, especially the flow of instruction, its external events, and the teacher’s motivating style; and it changes motivation itself, causing changes in motivation in the future.

Distance learning, delivered synchronously or asynchronously, provides opportunities for educational advancement and learning development among learners worldwide, and it has become the means of sustaining quality education in public and private institutions during the pandemic, especially in higher education. Synchronous learning is a live presentation which allows students to ask questions, while asynchronous learning can be a recorded presentation that allows students time to reflect before asking questions (University of Waterloo, 2020). Each has trade-offs: synchronous learning has greater engagement and direct communication but requires a strong internet connection, while asynchronous learning offers schedule flexibility and accessibility but is less immersive and prone to procrastination, socialization challenges and distraction (Wintemute, 2021). In the middle of the crisis, asynchronous learning can be the more favorable mode because of its flexibility, allowing learners to set their own schedule and work at their own pace (Anthony and Thomas, 2020), whereas synchronous learning allows students to feel connected rather than isolated, with live interaction, an exchange of ideas, and immediate support from teachers.


‘Peak Oil’ – what are the solutions?

The ability to harness energy sources and put them towards a productive use has played a crucial role in economic development worldwide. Easily accessible oil helped to fuel continued expansion in the 20th century. Agricultural production was transformed by motorised farm equipment and petroleum-based fertilisers and pesticides. Cars, trucks and airplanes powered by oil products revolutionised the transportation of people and goods. Oil provides fuel for home heating, electricity production, and to power industrial and agricultural equipment. It also provides the source material for the construction of plastics, many fertilisers and pesticides and many industrial chemicals and materials. It is now difficult to find any product that does not require the use of oil at some point in the production process.

Oil has several advantages over other fossil fuels: it is easily transportable and energy-dense, and when refined it is suitable for a wide variety of uses. Considering the important role that oil plays in our economy, if persistent shortages were to emerge, the economic implications could be enormous. However, there is no consensus as to how seriously the threat of oil resource depletion should be taken. Some warn of a colossal societal collapse in the not-too-distant future, while others argue that technological progress will allow us to shift away from oil before resource depletion becomes an issue.

How much of a problem oil depletion poses depends on the amount of oil that remains accessible at reasonable cost, and on how quickly the development of alternatives allows the demand for oil to be reduced. This is what the term ‘peak oil’ means: the point at which the demand for oil outstrips the available supply. Demand and supply each evolve over time following a pattern based on historical data, while supply is also constrained by resource availability. There is no mechanism for the market on its own to address concerns about climate change. However, if policies are put in place to price the costs of climate change into the price of fossil fuel consumption, then this should trigger market incentives that lead efficiently to the desired emission reductions.

A while ago the media was filled with stories about peak oil; it even featured in an episode of The Simpsons. Peak oil, in basic terms, means the point at which we have used all the easy-to-extract oil and are left only with oil that is hard to reach, and which in turn is expensive to refine. There is still a huge amount of debate amongst geologists and petro-industry experts about how much oil is left in the ground. However, since then the idea of a near-term peak in world oil supplies has been discredited. The term now used is ‘peak oil demand’: the idea that, because of the proliferation of electric cars and other sources of energy, demand for oil will reach a maximum and start to decline; indeed, consumption levels in some parts of the world have already begun to stagnate.

The other theory that has been produced is that, with supply beginning to exceed demand, not enough investment is going into future oil exploration and development. Without this investment, production will decline; but in that case production is not declining due to supply problems: rather, we are moving into an age of oil abundance, and the decline in oil production is the result of other factors. There has been an explosion of popular literature recently predicting that oil production will peak soon, and that oil shortages will force us into major lifestyle changes in the near future; a good example of this is Heinberg (2003). The point at which oil production reaches a peak and begins to decline permanently has been referred to as ‘peak oil’. Predictions for when this will occur range from 2007 to 2025 (Hirsch, 2005).

The Hirsch Report of 2005 concluded that it would take a modern industrial nation such as the UK or the United States at least a full decade to prepare for peak oil. Since 2005 there has been some movement towards solar and wind power, together with more electric cars, but nothing that deals with the scale of the problem. This has been compounded by Trump coming to power in the United States and deciding to throw the energy transition into reverse, discouraging alternative energy and expanding subsidies for fossil fuels.

What is happening now

Many factors are reported in the news as causing changes in oil prices: supply disruptions from wars and other political factors, from hurricanes or from other random events; changes in demand expectations based on economic reports, financial market events or even weather in areas where heating oil is used; changes in the value of the dollar; reports of inventory levels; and so on. These are all factors that affect the supply of and demand for oil, but they often influence the price of oil before they have any direct impact on the current supply or demand for crude oil. Last year, the main forces pushing the oil market higher were the agreement by OPEC and its partners to lower production and the growth of global demand. This year, an array of factors is pressuring the oil markets: the US sanctions that threaten to cut Iranian oil production, and falling output from Venezuela. Moreover, there are supply disruptions in Libya, the Canadian tar sands, Norway and Nigeria that add to the uncertainties, as does erratic policymaking in Washington, complete with threats to sell off part of the US strategic reserve, and a weaker dollar. Goldman Sachs continues to expect that Brent crude prices could retest $80 a barrel this year, but probably only late in 2018: “production disruptions and large supply shifts driven by US political decisions are the drivers of this new volatility, with demand remaining robust so far”. Brent crude is expected to trade in the $70-$80 a barrel range in the immediate future.

OPEC

Saudi Arabia, and Russia, had started to raise production even before the 22 June 2018 meeting of OPEC that sought to address the shrinking global oil supply and rising prices. OPEC had been over-complying with the cuts agreed at the November 2016 meeting, thanks to additional cuts from Saudi Arabia and Venezuela. The 22 June 2018 meeting decided to increase production to reflect the production cut agreement more closely. After the meeting, Saudi Arabia pledged a “measurable” supply boost but gave no specific numbers. Tehran’s oil minister warned his Saudi Arabian counterpart that the 22 June revision to the OPEC supply pact does not give member countries the right to raise oil production above their targets. The Saudis, Russia and several of the Gulf Arab states increased production in June but seem reluctant to expand much further. During the summer months, the Saudis always need to burn more raw crude in their power stations to combat the very high temperatures of their summer.

US Shale oil production

According to the EIA’s latest Drilling Productivity Report, US unconventional oil production is projected to rise by 143,000 b/d in August to 7.470 million b/d. The Permian Basin is seen as far outdistancing other shale basins in monthly growth in August, at 73,000 b/d, to 3.406 million b/d. However, drilled but uncompleted (DUC) wells in the Permian rose by 164 in June to 3,368, one of the largest builds in recent months. Total US DUCs rose by 193 to 7,943 in June. US energy companies last week cut the number of oil rigs by the most in a week since March, as the rate of growth has slowed over the past month or so with recent declines in crude prices. Included with otherwise optimistic forecasts for US shale oil was the caveat that the DUC production figures are sketchy, as current information is difficult for the EIA to obtain, with little specific data being provided to Washington by E&Ps or midstream operators. Given all the publicity surrounding constraints on moving oil from the Permian to market, the EIA admits that it “may overestimate production due to constraints.”

The Middle East and North Africa

Iran

Iran’s supreme leader, Ayatollah Ali Khamenei, called on state bodies to support the government of President Hassan Rouhani in fighting US economic sanctions. The likely return of US economic sanctions has triggered a rapid fall in Iran’s currency, protests by bazaar traders usually loyal to the Islamist rulers, and a public outcry over alleged price gouging and profiteering. The speech to members of Rouhani’s cabinet was clearly aimed at the conservative elements in the government who have been critical of the President and his policies of cooperation with the West, and was a call for unity at a time that seems likely to be one of great economic hardship. Protests spread to more than 80 Iranian cities and towns. At least 25 people died in the unrest, the most significant expression of public anger over corruption, and the protests took on a rare political dimension, with a growing number of people calling on supreme leader Khamenei to step down. Although there is much debate over the effectiveness of the impending US sanctions, some analysts are saying that Iran’s oil exports could fall by as much as two-thirds by the end of the year, putting oil markets under massive strain amid supply outages elsewhere in the world. Some of the worst-case scenarios forecast a drop to only 700,000 b/d, with most of Tehran’s exports going to China, and smaller shares going to India, Turkey and other buyers with waivers. China, the biggest importer of Iranian oil at 650,000 b/d according to Reuters trade flow data, is likely to ignore US sanctions.

Iraq

Iraq’s future is again in trouble as protests erupt across the country. These protests began in southern Iraq after the government was accused of doing nothing to alleviate a deepening unemployment crisis, water and electricity shortages and rampant corruption. The demonstrations spread to major population centers, including Najaf and Amarah, and discontent is now stirring in Baghdad. The government has been quick to promise more funding and investment in the development of chronically underdeveloped cities, but this has done little to quell public anger. Iraqis have heard these promises countless times before, and with a water and energy crisis striking in the middle of scorching summer heat, people are less inclined to believe what their government says. The civil unrest has begun to diminish in southern Iraq, leaving the country’s oil sector shaken but secure, though protesters have vowed to return. Operations at several oil fields have been affected as international oil companies and service companies have temporarily withdrawn staff from some areas that saw protests. The government claims that the production and export of oil have remained steady during the protests. With Iran refusing to provide for Iraq’s electricity needs, Baghdad has now also turned to Saudi Arabia to see if its southern Arab neighbor can help alleviate the crises it faces.

Saudi Arabia

The Saudi Aramco IPO has been touted for the past two years as the centerpiece of an ambitious economic reform program driven by Crown Prince Mohammed bin Salman to diversify the Saudi economy beyond oil. Saudi Arabia expects its crude exports to drop by roughly 100,000 b/d in August as the kingdom tries to ensure it does not push oil into the market beyond its customers’ needs.

Libya

Libya reopened its eastern oil ports and started to ramp up production from 650,000 to 700,000 b/d; output is expected to rise further as shipments resume at the eastern ports, which re-opened after a political standoff.

China

China’s economy expanded by 6.7 percent, its slowest pace since 2016. The pace of annual expansion announced is still above the government’s target of “about 6.5 percent” growth for the year, but the slowdown comes as Beijing’s trade war with the US adds to headwinds from slowing domestic demand. Gross domestic product had grown at 6.8 percent in each of the previous three quarters. Higher oil prices play a role in the slowing of demand, but the main factor is higher taxes on independent Chinese refiners, which are already cutting into the refining margins and profits of the ‘teapots’, which have grown over the past three years to account for around a fifth of China’s total crude imports. Under the stricter tax regulations and reporting mechanisms effective 1 March, however, the teapots can no longer avoid paying a consumption tax on refined oil product sales, as they did in the past three years, and their refining operations are becoming less profitable.

Russia

Russian oil production rose by around 100,000 b/d from May. From 1-15 July the country’s average oil output was 11.215 million b/d, an increase of 245,000 b/d over May’s production. Amid growing speculation that President Trump will attempt to weaken US sanctions on Russia’s oil sector, US congressional leaders are pushing legislation to strengthen sanctions on Russian export pipelines and on joint ventures with Russian oil and natural gas companies. Ukraine and Russia said they would hold further European Union-mediated talks on supplying Europe with Russian gas, a key first step towards renewing Ukraine’s gas transit contract, which expires at the end of next year.

Venezuela

Venezuela’s Oil Minister Manuel Quevedo has been talking about plans to raise the country’s crude oil production in the second half of the year. However, no one else thinks or claims that Venezuela could soon reverse its steep production decline, which has seen it lose more than 40,000 b/d of oil production every month for several months now. According to OPEC’s secondary sources in the latest Monthly Oil Market Report, Venezuela’s crude oil production dropped in June by 47,500 b/d from May, to average 1.340 million b/d. Amid a collapsing regime, widespread hunger and medical shortages, President Nicolas Maduro continues to grant generous oil subsidies to Cuba. It is believed that Venezuela continues to supply Cuba with around 55,000 barrels of oil per day, costing the nation around $1.2 billion per year.

Alternatives to Oil

In its search for secure, sustainable and affordable supplies of energy, the world is turning its attention to unconventional energy resources. Shale gas is one of them. It has turned the North American gas markets upside down and is making significant strides in other regions. The emergence of shale gas as a potentially major energy source can have serious strategic implications for geopolitics and the energy industry.

Uranium and Nuclear

The nuclear industry has a relatively short history: the first nuclear reactor was commissioned in 1954. Uranium is the main source of fuel for nuclear reactors. Worldwide output of uranium has recently been on the rise after a long period of declining production. Known uranium resources have grown by 12.5% since 2008, and they are sufficient for over 100 years of supply based on current requirements.

Total nuclear electricity production grew during the past two decades and reached an annual output of about 2,600 TWh by the mid-2000s, although three major nuclear accidents have slowed or even reversed its growth in some countries. The nuclear share of total global electricity production reached its peak of 17% by the late 1980s, but it has since been falling, dropping to 13.5% in 2012. In absolute terms, nuclear output remains broadly at the same level as before, but its relative share of power generation has decreased, mainly due to the Fukushima nuclear accident.

Japan used to be one of the countries with a high share of nuclear (30%) in its electricity mix and high production volumes. Today, Japan has only two of its 54 reactors in operation. The rising costs of nuclear installations and the lengthy approval times required for new construction have had an impact on the nuclear industry. The slowdown has not been global: new countries, primarily among the rapidly developing economies of the Middle East and Asia, are going ahead with their plans to establish a nuclear industry.

Hydro Power

Hydro power provides a significant amount of energy throughout the world and is present in more than 100 countries, contributing approximately 15% of global electricity production. The top five largest markets for hydro power in terms of capacity are Brazil, Canada, China, Russia and the United States of America. China significantly exceeds the others, representing 24% of global installed capacity. In several other countries, hydro power accounts for over 50% of all electricity generation, including Iceland, Nepal and Mozambique. During 2012, an estimated 27-30 GW of new hydro power capacity and 2-3 GW of pumped storage capacity were commissioned.

In many cases, the growth in hydro power was facilitated by lavish renewable energy support policies and CO2 penalties. Over the past two decades the total global installed hydro power capacity has increased by 55%, while actual generation has increased by 21%. Since the last survey, the global installed hydro power capacity has increased by 8%, but the total electricity produced has dropped by 14%, mainly due to water shortages.

Solar PV

Solar energy is the most abundant energy resource, and it is available for use in its direct (solar radiation) and indirect (wind, biomass, hydro, ocean etc.) forms. About 60% of the total energy emitted by the sun reaches the Earth’s surface. Even if only 0.1% of this energy could be converted at an efficiency of 10%, it would be four times larger than the world’s total electricity generating capacity of about 5,000GW. The statistics on solar PV installations are patchy and inconsistent. The table below presents values for 2011; comparable values for 1993 are not available.
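As a rough sanity check, that chain of percentages can be sketched in a few lines. The ~174 PW figure for solar power intercepted by the Earth is a commonly quoted value and an assumption here, not a number given in this essay; depending on how the factors are combined, the result lands at roughly two to four times the 5,000GW figure, i.e. the same order of magnitude as the claim above.

# Back-of-envelope check (a sketch; the 174 PW intercepted-solar figure
# is an assumption, not a value taken from this essay).
intercepted_gw = 174e6        # ~174 PW of solar power, expressed in GW
surface_fraction = 0.60       # share reaching the surface (essay's figure)
captured_fraction = 0.001     # 0.1% of that energy is captured
efficiency = 0.10             # converted at 10% efficiency

usable_gw = intercepted_gw * surface_fraction * captured_fraction * efficiency
print(usable_gw)              # ~10,400 GW
print(usable_gw / 5000)       # ~2x the ~5,000 GW world generating capacity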

The use of solar energy is growing strongly around the world, in part due to rapidly declining solar panel manufacturing costs. For instance, between 2008 and 2011 PV capacity increased in the USA from 1,168MW to 5,171MW, and in Germany from 5,877MW to 25,039MW. Anticipated changes in national and regional legislation regarding support for renewables are likely to moderate this growth.

Conclusion

The rapid consumption of fossil fuels has contributed to environmental damage. Burning these fuels, including oil, releases chemicals that contribute to smog, acid rain and mercury contamination, while the carbon dioxide emitted is the main driver of climate change, whose effects are likely to become more and more severe as temperatures rise. The depletion of oil and other fossil resources also leaves less available to future generations and increases the likelihood of price spikes if demand outpaces supply.

One of the most intriguing conclusions from this idea is that a new “age of abundance” could alter the behaviour of oil producers. In the past some countries (notably OPEC members) restrained output, husbanding resources for the future and betting that scarcity would increase the value of their holdings over time. However, if a peak in demand looms just over the horizon, oil producers could rush to maximize their production in order to get as much value from their reserves while they can. Saudi oil minister Sheikh Ahmed Zaki Yamani was famously quoted as saying, “the Stone Age didn’t end for lack of stone, and the oil age will end long before the world runs out of oil.” The quote reflects the view that the development of new technologies will lead to a shift away from oil consumption before oil resources are fully depleted. Notably, nine of the ten recessions between 1946 and 2005 were preceded by spikes in oil prices, and the latest recession followed the same pattern.

Extending the life of oil fields, let alone investing in new ones, will require large volumes of capital, but that need might be met with skepticism from wary investors once demand begins to peak. It will be difficult to attract investment to a shrinking industry, particularly if margins continue to be squeezed. Peak demand should be an alarming prospect for OPEC, Russia and the other major oil producing countries: all oil producers will find themselves fighting ever more aggressively for a shrinking market.

The precise date at which oil demand hits a high point and then enters decline has been the subject of much debate, and the topic has attracted a lot of interest in just the last few years. Consumption levels in some parts of the world have already begun to stagnate, and more and more automakers have begun to ratchet up their plans for electric vehicles. But fixating on the exact date the world will hit peak demand misses the whole point. The focus shouldn’t be on the date at which oil demand peaks, but rather on the fact that the peak is coming. In other words, oil will become less important in fueling the global transportation system, which will have far-reaching consequences for oil producers and consumers alike. The implications of a looming peak in oil consumption are massive. Without an economic transformation, or at least serious diversification, oil-producing nations that depend on oil revenues for both economic growth and public spending face an uncertain future.


Water purification and addition of nutrients as disaster relief

1. Introduction

1.1 Natural Disasters

Natural disasters are naturally occurring events that threaten human lives and cause damage to property. Examples of natural disasters include hurricanes, tsunamis, earthquakes, volcanic eruptions, typhoons, droughts, tropical cyclones and floods. (Pask, R., et al (2013)) They are inevitable and can often have calamitous implications, such as water contamination and malnutrition, especially for developing countries like the Philippines, which is particularly prone to typhoons and earthquakes. (Figure 1)

Figure 1 The global distribution of natural disaster risk (The United Nations University World Risk Index 2014)

1.1.1 Impacts of Natural Disaster

Natural disasters affect human lives and economies on an astronomical scale. According to a 2014 report by the United Nations, since 1994, 4.4 billion people have been affected by disasters, which claimed 1.3 million lives and cost US$2 trillion in economic losses. Developing countries are likely to suffer a greater impact from natural disasters than developed countries, as disasters swell the number of people living below the poverty line, by more than 50 percent in some cases. Moreover, it is expected that by 2030 up to 325 million extremely poor people will live in the 49 most hazard-prone countries. (Child Fund International. (2013, June 2)) This necessitates disaster relief to save the lives of those affected, especially in developing countries such as the Philippines.

1.1.2 Lack of access to clean water

After a natural disaster strikes, severe implications such as water contamination occur.

Natural disasters also know no national borders or socioeconomic status. (Malam, 2012) For example, Hurricane Katrina, which struck New Orleans, a developed city, destroyed 1,200 water systems, and 50% of existing treatment plants needed rebuilding afterwards (Copeland, 2005), leaving the citizens of New Orleans with a shortage of drinking water. Furthermore, after the 7.0 magnitude earthquake that struck Haiti, a developing country, in 2010, there was no plumbing left underneath Port-au-Prince, and many of the water tanks and toilets were destroyed. (Valcárcel, 2010) These are just some of the many scenarios in which disasters can bring about water scarcity.

The lack of preparedness to prevent the destruction caused by a natural disaster and the lack of readiness to respond are the two major reasons for the catastrophic results of natural disasters. (Malam, 2012) The aftermath of destroyed water systems and water shortages affects all geographical locations regardless of socioeconomic status.

1.2 Disaster relief

Disaster relief organisations such as the American Red Cross help countries recovering from natural disasters by providing them with basic necessities.

After a disaster, the Red Cross works with community partners to provide hot meals, snacks and water to shelters or from Red Cross emergency response vehicles in affected neighborhoods. (Disaster Relief Services | Disaster Assistance | Red Cross.)

The International Committee of the Red Cross/Red Crescent (ICRC) reported that its staff had set up mobile water treatment units. These were used to distribute water to around 28,000 people in towns along the southern and eastern coasts of the island of Samar, and to other badly-hit areas including Basey, Marabut and Guiuan. (Pardon Our Interruption. (n.d.))

Figure 2: Children seeking help after a disaster(Pardon Our Interruption. (n.d.))

Figure 3: Massive Coastal Destruction from Typhoon Haiyan (Pardon Our Interruption. (n.d.))

1.3 Target audience: Tacloban, Leyte, The Philippines

As seen in figures 4 and 5, Tacloban is the provincial capital of Leyte, a province in the Visayas region of the Philippines. It is the most populous city in the Eastern Visayas region, with a total population of 242,089 as of August 2015. (Census of Population, 2015)

Figure 4: Location of Tacloban in the Philippines (Google Maps)

Figure 5: Location of Tacloban in the Eastern Visayas region (Google Maps)

Due to its location on the Pacific Ring of Fire (Figure 6) and in the path of Pacific storms, more than 20 typhoons strike the Philippines each year (Lowe, 2016).

Figure 6: The Philippines’ position on the Pacific Ring of Fire (Mindoro Resources Ltd., 2004)

In 2013, Tacloban was struck by Super Typhoon Haiyan, locally known as ‘Yolanda’. The Philippine Star, a local digital news organisation, reported more than 30,000 deaths from that disaster alone. (Avila, 2014) Typhoon Haiyan left Tacloban in shambles, and the area requires much aid to recover, especially given a five-figure death toll.

1.4 Existing measures and their gaps

Initially, the government was slow to respond to the disaster. For the first three days after the typhoon hit, there was no running water, and dead bodies were found in wells. In desperation for water to drink, some even smashed the pipes of the Leyte Metropolitan Water District. However, even when drinking water was restored, it was contaminated with coliform; many people became ill, and one baby died of diarrhoea. (Dizon, 2014)

The government’s long response time (Gap 1) and the contamination that came with the restored water supply (Gap 2) affected people’s health and productivity; hence there is an urgent need for a better solution to the problem of late restoration of clean water.

1.5 Reasons for Choice of Topic

The severity is high, since ingestion of contaminated water is the leading cause of infant mortality and illness in children (International Action, n.d.) and more than 50% of the population is undernourished (World Food Programme, 2016). Much support and humanitarian aid has been given by organisations such as the World Food Programme and The Water Project, yet more effort is needed to lower the death rates, showing the problem’s persistence. It is also an urgent issue, as malnourishment often leads to death and children’s lives are threatened.

Furthermore, 8 out of 10 of the world’s cities most at risk from natural disasters are in the Philippines (Reference to Figure _). The magnitude of the problem is therefore huge, given the high frequency of natural disasters: while people are still recovering from the previous one, another hits, worsening an already severe situation.

Figure _ Top 5 Countries of World Risk Index of Natural Disasters 2016 (Source: UN)

WWF CEO Jose Maria Lorenzo Tan said that “on-site desalination or purification” would be a cheaper and better solution to the lack of water than shipping in bottled water for a long period of time. (Dizon, 2014) Rather than relying on external humanitarian aid, which can add to the country’s debt, self-reliance for water can cushion the high expense of rebuilding the country. Hence, there is a need for a water purification plant that provides potable water immediately when a natural disaster strikes, and that supplies cheap, affordable water until water systems are restored to normal.

Living and growing up in Singapore, we have never experienced natural disasters first hand; we can only imagine the catastrophic destruction and suffering that accompany them. With “Epione Solar Still” (named after the Greek goddess of the soothing of pain), we hope to help many Filipinos access clean and drinkable water, especially children, who clearly do not deserve to experience such tragedy and suffering.

1.6 Case study: Disaster relief in Japan

Located on the Pacific Ring of Fire, Japan is vulnerable to natural disasters such as earthquakes, tsunamis, volcanic eruptions, typhoons, floods and mudslides due to its geographical location and natural conditions. (Japan Times, 2016)

In 2011, a massive 9.0 magnitude earthquake struck off the coast near Fukushima, causing a tsunami that devastated the northeast coast and killed 19,000 people. It was the most powerful earthquake in Japan’s recorded history; it damaged the Fukushima plant and caused nuclear leakage, producing contaminated water that currently exceeds 760,000 tonnes. (The Telegraph, 2016) The earthquake and tsunami caused the nuclear power plant to fail, leaking radiation into the ocean and the atmosphere. Many evacuees have still not returned to their homes and, as of January 2014, the Fukushima nuclear plant still posed a threat, according to status reports by the International Atomic Energy Agency. (Natural Disasters & Pollution | Education – Seattle PI. (n.d.))

Disaster Relief

In the case of major disasters, the Japan International Cooperation Agency (JICA) deploys Japan Disaster Relief (JDR) teams, consisting of rescue, medical, expert and infectious disease response teams, as well as the Self-Defence Force (SDF), to provide relief aid to affected countries. It provides emergency relief supplies such as blankets, tents and water purifiers, some of which are stockpiled as reserves close to disaster-prone areas in case disasters strike and emergency relief is needed. (JICA)

For example, during the Kumamoto earthquake in 2016, 1,600 soldiers joined the relief and rescue efforts. Troops delivered blankets and adult diapers to those in shelters; with water service cut off in some areas, residents were hauling water from local offices to their homes to flush toilets. (Japan hit by 7.3-magnitude earthquake | World news | The Guardian. (2016, April 16))

Solution to Fukushima water contamination

Facilities are used to treat the contaminated water. The main one is the Multi-nuclide Removal Facility (ALPS) (Figure _), which can remove most radioactive materials except tritium. (TEPCO, n.d)

Figure _: Structure of Multi-nuclide Removal Facility (ALPS) (TEPCO, n.d)

1.7 Impacts of Case Study

The treatment of contaminated water has been very effective: by April 2015 more than 80% of the contaminated water stored in tanks had been decontaminated, with more than 90% of radioactive materials removed in the process. (METI, 2014)

1.8 Lessons Learnt

Destruction caused by natural disasters results in a lack of access to clean and drinkable water (L1)

Advancements in water purification technology can help provide potable water for the masses. (L2)

Natural disasters weaken immune systems, leaving people more vulnerable to disease (L3)

1.9 Source of inspiration

Sunny Clean Water’s solar still is made with cheap alternative materials, which helps provide more affordable water for underprivileged countries.

A fibre-rich paper is coated with carbon black (a cheap powder left over after the incomplete combustion of oil or tar) and layered over each section of a block of polystyrene foam cut into 25 equal sections. The foam floats on the untreated water, acting as an insulating barrier that prevents sunlight from heating up too much of the water below, while the paper wicks water upward, wetting the entire top surface of each section. A clear acrylic housing sits atop the styrofoam. (Figure _)

Figure _: How fibre-rich paper coated with carbon black is adapted into the solar still. (Sunlight-powered purifier could clean water for the impoverished | Science | AAAS. (2017, February 2))

It is estimated that the materials needed to build it cost roughly $1.60 per square meter, about 125 times less than the $200 per square meter for commercially available systems that rely on expensive lenses to concentrate the sun’s rays to expedite evaporation.

1.10 Application of Lessons Learnt

Gaps in current measures | Learning points | Applications to project | Key features in proposal
Developing countries lack the technology / resources to treat their water and provide basic necessities to their people. | Advanced technology can provide potable water readily. (L2) | Need for technology to purify contaminated water. | Solar Distillation Plant
Even with purification of water, the problem of malnutrition, which is worsened by natural disasters, is still unsolved. | Solution to provide vitamins to young children to boost immunity and lower vulnerability to diseases and illnesses. (L3) | Need for nutrient-rich water. | Nutrients infused into water using concept of osmosis.
Even with the help of external organisations, less than 50% of households have access to safe water. | Clean water is still inaccessible to some people. (L1) | Increase accessibility to water. | Evaporate seawater (abundant around the Philippines) in solar still. (short-term solution)

Figure _: Table of application of lessons learnt

2. Project Aim and Objectives

2.1 Aim

Taking into account the loopholes in current measures to improve water purification and reduce water pollution and malnutrition in Tacloban, Leyte, our project proposes a solution that provides Filipinos with clean water through an ingenious product, the Epione Solar Still. The product makes use of a natural process (the evaporation of water) and adapts and incorporates the technology and mechanism behind the kidney dialysis machine to provide Filipinos with nutrient-enriched water without polluting their environment. The product will be located near water bodies where seawater is abundant, to act as a source of clean water for the Filipinos.

2.2 Objectives of Project

To operationalise our aim, our objectives are to:

Design “Epione Solar Still”

Conduct interviews with:

Masoud Arfand, from the Department of Mechanical Engineering, Najafabad Branch, Islamic Azad University, to determine the projected percentage of water that Epione Solar Still can produce and the number of people it can provide for.

Qiaoqiang Gan, electrical engineer from Sunny Clean Water (his team developed the technique of coating fibre-rich paper with carbon black to make water purification with the solar still faster and more cost-friendly), to determine the amount of time Epione Solar Still needs to produce sufficient water to support Filipinos in Tacloban, Leyte, as Epione Solar Still is a short-term disaster relief solution.

Dr Nathan Feldman, Co-Founder of HopeGel, EB Performance, LLC, to determine the impact of nutrient-infused water in boosting the immunity of victims of natural disasters. (Project Medishare, n.d)

Review the mechanism and efficiency of using a solar still to source clean and nutrient-rich water for Filipinos.

3. Project Proposal

We propose investment in the purification of contaminated water as a form of disaster relief, providing Filipinos with nutrients to boost their immunity in times of disaster and limiting the number of deaths caused by the consumption of contaminated water during a crisis.

3.1 Overview of Project

Our group proposes to build a solar distillation plant (Figure _) within a safe semi-underground bunker. The bunker will contain a generator to power certain parts of the plant. Seawater will be fed into the still via underground pipes from the sea surrounding the southern part of Tacloban. The purified water produced by the distillation process will be infused with nutrients to boost the immunity of disaster victims once consumed. Hence, not only will our distillation plant produce potable water, the water will also be nutritious, boosting victims’ immunity in times of natural calamity. Potable water will then be distributed in drums and shared among Filipinos.

Figure _: Mechanism of our solar distillation plant, Epione Solar Still

3.2 Phase 1: Water Purification System

3.2.1 Water extraction from the sea

The still is located near the sea, where seawater is abundant. Seawater is extracted from the low-flow open sea (Figure _) and then pumped into our solar still.

Figure _: Intake structure of seawater (Seven Seas Water Corporation, n.d.)

3.2.2 Purification of Seawater

Solar energy heats up the water in the solar still. The water evaporates and condenses on the cooler glass surface of the ceiling of the still. Pure droplets of water slide down the glass into the collecting basin, where nutrients diffuse into the water.

Figure 6: Mechanism of Epione Solar Still

3.3 Phase 2: Nutrient Infuser

Using the concept of reverse osmosis (Figure _), a semi-permeable membrane separates the nutrients from the newly purified water, allowing the vitamins and minerals to diffuse into the condensed water. The nutrient-infused water provides nourishment, making victims of natural disasters less vulnerable and susceptible to illnesses and diseases thanks to a stronger immune system. This will help the Filipinos in Tacloban, Leyte get back on their feet quickly after a natural disaster and minimise the death toll as much as possible.

Figure _: How reverse osmosis works (Water Filter System Guide, n.d.)

Nutrient / Mineral | Function | Upper Tolerable Limit (the highest amount that can be consumed without health risks)
Vitamin A | Helps to form and maintain healthy teeth, bones, soft tissue, mucous membranes and skin. | 10,000 IU/day
Vitamin B3 (Niacin) | Helps maintain healthy skin and nerves; has cholesterol-lowering effects. | 35 mg/day
Vitamin C (ascorbic acid, an antioxidant) | Promotes healthy teeth and gums; helps the body absorb iron and maintain healthy tissue; promotes wound healing. | 2,000 mg/day
Vitamin D (the “sunshine vitamin”, made by the body after sun exposure) | Helps the body absorb calcium; helps maintain proper blood levels of calcium and phosphorus. | 1,000 micrograms/day (4,000 IU)
Vitamin E (tocopherol, an antioxidant) | Plays a role in the formation of red blood cells. | 1,500 IU/day

Figure _: Table of functions and amount of nutrients that will be diffused into our Epione water. (WebMD, LLC, 2016)

3.4 Phase 3: Distribution of water to households in Tacloban, Leyte

Potable water will be collected in drums (Figure _) of 100 litres each; since the average intake of water is 2 litres per person per day, one drum suffices 50 people for a day. These drums will be distributed to the tent cities in Tacloban, Leyte, our targeted area, should a natural disaster befall it. Locals will thus have potable water within reach, which is crucial for their survival in times of natural calamity.

Figure _: Rain barrels will be used to store the purified and nutrient-infused water (Your Easy Garden, n.d.)
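The distribution figures above reduce to simple arithmetic; a minimal sizing sketch follows, where the tent-city population of 10,000 is a hypothetical input for illustration, not a figure from this proposal.

# Minimal sizing sketch for the drum distribution described above.
DRUM_CAPACITY_L = 100            # litres per drum (proposal's figure)
INTAKE_L_PER_PERSON_DAY = 2      # litres per person per day (proposal's figure)

people_per_drum_per_day = DRUM_CAPACITY_L // INTAKE_L_PER_PERSON_DAY
print(people_per_drum_per_day)   # 50 people served by one drum for one day

population = 10_000              # hypothetical tent-city population
drums_needed_per_day = -(-population // people_per_drum_per_day)  # ceiling division
print(drums_needed_per_day)      # 200 drums per day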

3.5 Stakeholders

3.5.1 The HopeGel Project

HopeGel is a nutrient- and calorie-dense protein gel designed to aid children suffering from malnutrition caused by severe food insecurity brought on by droughts (Glenroy Inc., 2014). HopeGel has been distributed in Haiti, where malnutrition is the number one cause of death among children under five, mainly due to the high frequency of natural disasters that have devastated the now impoverished state. (Figure _) Implementing the Epione Solar Still helps the company achieve its objective of addressing the global issue of severe acute malnutrition in children, as most victims of natural disasters lack the nourishment they need (HopeGel, n.d.)

Figure _: HopeGel, a packaged nutrient and calorie-dense protein gel (Butschli, HopeGel, n.d.)

3.5.2 Action Against Hunger (AAH)

Action Against Hunger is a relief organisation that develops and carries out programmes for countries in need regarding nutrition, health, water and food security (Action Against Hunger, n.d) (Figure _). AAH also runs programmes for disaster preparedness, which aim to anticipate and prevent humanitarian crises (GlobalCorps, n.d.). With 40 years of expertise and 14.9 million people helped across more than 45 countries, AAH is no stranger to humanitarian crises. Implementing the Epione Solar Still helps the organisation achieve its aim of saving lives by extending help, through purifying and infusing nutrients into seawater, to Filipinos in Tacloban, Leyte deprived of a basic need by disaster-related water contamination.

Figure _: Aims and Missions of Action Against Hunger (AAH, n.d.)


Analyse the use of ICTs in a humanitarian emergency

INTRODUCTION

The intention of this essay is to analyse the use of ICTs in a humanitarian emergency. The specific case study discussed is ‘Multi-level functionality of social media in the aftermath of the Great East Japan Earthquake’, written by Jung, J., and Moro, M. (2014). The report emphasises how social media networks like Twitter and Facebook can be used to spread and gather important information in emergency situations, rather than serving solely as social platforms. ICTs have changed the way humans gather information during disasters, and social media, especially Twitter, became an important source of information in these disasters.

Literature Review

Case studies of using ICTs in a humanitarian emergency can take either a technically rational perspective or a socially embedded perspective. A technically rational perspective concerns what to do and how to achieve a given purpose: it is a prescription for design and action. A socially embedded perspective focuses on the particular case, where the process of work is affected by culture, place and human nature. In this article, we examine different humanitarian disaster cases in which ICTs played a vital role, to see whether each author adopts a technically rational or a socially embedded perspective.

In the article “Learning from crisis: Lessons in human and information infrastructure from the World Trade Centre response” (Dawes, Cresswell et al. 2004), the authors adopt a technically rational perspective. 9/11 was a very big incident and no one was ready to deal with an attack of that size, but as soon as it happened procedures started changing rapidly. Government, NGOs and disaster response units started learning and produced new prescriptions that can be used universally and in disasters of any size. For example, the main communication infrastructure, supplied by Verizon, was damaged; although different suppliers provided communication services, they all used the physical infrastructure supplied by Verizon, so VoIP was used for communication between government officials and in the EOC building. Three main areas were identified where problems were found and new procedures were adopted in the response to the disaster: technology, information, and the inter-layered relationships between NGOs, government and the private sector. (Dawes, Cresswell et al. 2004)

In the article “Challenges in humanitarian information management and exchange: Evidence from Haiti” (Altay, Labonte 2014), the authors adopt a socially embedded perspective. The Haiti earthquake was one of the biggest disasters, killing hundreds of thousands of people and displacing at least 2 million. Around 2,000 organisations went in to help, but there was no coordination between NGOs and the government in the humanitarian response. Organisations did not draw on local knowledge; they assumed no data was available. All the organisations had different standards and ways of working, so no one followed a common prescription. The technical side of HIME (humanitarian information management and exchange) was not working, because the members of the relief effort were not sharing humanitarian information. (Altay, Labonte 2014)

In the article ‘Information systems innovation in the humanitarian sector’, Information Technologies and International Development (Tusiime, Byrne 2011), the authors adopt a socially embedded perspective. Local staff were hired who had no prior experience or knowledge of working with such technology, which slowed down the implementation of the new system. Staff wanted to learn and use the new system, but changes came at such a pace that they became overworked and stressed, and lost interest in the innovation. Management decided to adopt COMPAS as the new system without realising that it was not completely functional and still had many issues, but went ahead regardless. When staff began using it, found the problems, and received insufficient technical support, they had no choice but to go back to the old way of doing things (Tusiime, Byrne 2011). The whole process was affected by how work is done in that specific place and by people’s behaviour.

In the article “Multi-level functionality of social media in the aftermath of the Great East Japan Earthquake” (Jung, Moro 2014), the authors adopt a technically rational perspective. In any future humanitarian disaster, social media can be used as an effective communication channel in conjunction with mass media. After the disaster, Twitter was used more as a means of spreading and gathering information than as a social media platform.

In the article “Information flow impediments in disaster relief supply chains,” Journal of the Association for Information Systems, 10(8), pp. 637-660 (Day, Junglas et al. 2009), the authors propose the development of IS for information sharing, based on Hurricane Katrina. They adopt a technically rational perspective, because the development of IS for information flow within and outside an organisation is essential. Such a system would help manage a complex supply chain: supply chain management in a disaster situation is more challenging than traditional supply chain management, and its information systems should be able to handle all types of dynamic information, suggest Day, Junglas and Silva (2009).

Case Study Description:

On 11 March 2011 an earthquake of magnitude 9.0 hit the north-eastern part of Japan, followed by a tsunami. Thousands of people lost their lives and the infrastructure in the area was completely damaged (Jung, Moro 2014). The tsunami wiped two towns off the map, and the coastal maps had to be redrawn (Acar, Muraki 2011). On the same day, the cooling system of reactor No. 1 at the Fukushima nuclear plant failed, and because of the ensuing nuclear accident the Japanese government declared a nuclear emergency; on the evening of the earthquake it issued an evacuation order for a 3 km area around the reactor (Jung, Moro 2014). On March 12 a hydrogen explosion occurred in the reactor because of the failed cooling system, followed by another explosion two days later on March 14. The evacuation zone was initially 3 km but was extended to 20 km to avoid exposure to nuclear radiation.

This was one of the biggest nuclear disasters the country had faced, so it was hard for the government to assess its scale. Government officials had not come across this kind of situation before and could not estimate the damage caused by the incident; their unreliable information added to the public’s confusion. They declared the accident level 5 on the international nuclear scale, but later changed it to 7, the highest level on that scale. Media reporting also confused the public, and the combination of contradictory information from government and media increased the level of confusion.

In a disaster, mass media is normally the main source of information: broadcasters suspend their normal transmissions and devote most of their airtime to the disaster to keep people updated, usually providing very reliable information. In the Japanese disaster, however, media outlets contradicted one another, with international media contradicting local media as well as local government, so people started losing faith in the mass media and turned to other sources of information. A second reason was that mass media was a traditional way of gathering information, and with changes in technology people had started using mobile phones and the internet. Third, the broadcast infrastructure was damaged and many people could not access television services, so they came to depend on video streaming sites such as Ustream and YouTube. People began using Twitter on a large scale to spread and gather news: Twitter users increased by 30 percent within the first week of the disaster, and 60 percent of Twitter users found it useful for gathering or spreading information.

Case Study Analysis:

Twitter is a social media platform and microblogging website on which a single tweet may contain up to 140 characters. It differs from other social media platforms in that anyone can follow you without your authorisation. Only registered members can tweet, but registration is not required to read messages. The authors of “Multi-level functionality of social media in the aftermath of the Great East Japan Earthquake” (Jung, Moro 2014) discuss five functionalities of Twitter with the help of a conceptual model of multi-level social media. The following figure illustrates the five-function model clearly.

Fig No 1 Source: (Jung, Moro 2014)

The five functions were derived from a survey and a review of selected Twitter timelines.

The first function was tweeting between individuals, also known as interpersonal communication. This is the micro level of the conceptual model: people inside and outside the country connected with people in the affected area. Most tweets checked on people’s safety after the disaster, informed loved ones that the sender was in the affected area and needed help, or reported that the sender was safe. In the first three days, a high percentage of tweets came through this micro-level channel.

The second function was a communication channel for local organisations, local government and local media, the meso level of the conceptual model. In this channel, local governments opened new Twitter accounts or reactivated dormant ones to keep their residents informed, and their follower counts grew very fast. People understood the importance and benefits of social media after the disaster: even when infrastructure was damaged and electricity was cut off, they could still get information about the disaster and tsunami warnings. Local government and local media used Twitter for alerts and news; for example, the tsunami warning was issued on Twitter, and after the tsunami the damage reports were released there. Local media opened new Twitter channels and kept people informed about the situation. Organisations such as the embassies of different countries used Twitter to keep their nationals informed about the disaster, and this was the best way for embassies and nationals to communicate; nationals could even tell their embassy that they were stuck in the affected area and needed help, being in a particularly vulnerable situation away from their own country.

The third function was mass media communication, known as the macro level. Mass media used social platforms to broadcast their news because the infrastructure was damaged and people in the affected area could not receive their broadcasts. People outside the country also could not access local television news, so they watched it on video streaming websites; as demand increased, most mass media outlets opened social media accounts and began broadcasting on streaming sites like YouTube and Ustream. Mass media also posted news updates several times a day on Twitter, and many readers retweeted them, so information spread at very high speed.

The fourth function was information sharing and gathering, a cross level. Individuals used social media to get information about the earthquake, tsunami and nuclear accident; searching for information surfaced tweets from the micro, meso and macro levels. This level is of great use when looking for help, or for other people’s opinions on what they would do in one’s situation. Research on the Twitter timelines shows that on the day of the earthquake people were tweeting about available shelters and transport information (Jung, Moro 2014).

The fifth function was direct channels between individuals and the mass media, government and the public, also considered a cross level. Through this level, individuals could inform government and mass media about conditions in affected areas that, because of the disaster, neither could reach, leaving them unaware of the situation. The mayor of Minami-soma city, 25 miles from Fukushima, used YouTube to tell the government about the radiation threat to his city; the video went viral, and the Japanese government came under international pressure to evacuate the city. (Jung, Moro 2014)

Reflection:

There has been a gradual shift in the use of social media, from a social platform into a communication tool in the event of a disaster. Multi-level functionality is one of its important characteristics, connecting it well with existing media. This amounts to a complete prescription that can be used during and after any kind of disaster: social media, together with other media, can serve as an effective communication method to prepare for emergencies in any future disaster situation.

Twitter played a big role in communication during the disaster in Japan. It was used to spread and gather information about the earthquake, tsunami and nuclear reactor accident, to request help, to issue warnings, and to express condolences. Twitter has many benefits, but it also has drawbacks that must be rectified. The biggest issue is the unreliability of tweets: anyone can tweet any information, there are no checks and balances, and only the person tweeting is responsible for its authenticity. There is no control over false information, and it spreads so fast that contradictory information can create anxiety. For example, had false information about the range of the radiation been released by one individual and retweeted by others with no knowledge of radiation and nuclear accidents, it could have caused panic. In a disaster, it is very important that reliable and correct information is released.

Information systems can play a vital role in all aspects of humanitarian disasters. They can improve communication, and they can increase the efficiency and accountability of an organisation. Data becomes widely available within the organisation, enabling monitoring of finances, and IS helps coordinate operations such as transport, supply chain management, logistics, finance and monitoring.

Social media has played a significant role in communicating, disseminating and storing data related to disasters. There is, however, a need to control the information spread over social media, since not all of it is authentic or verified.

IS-based tools need to be developed for disaster management in order to get the best results from the varied range of data extracted from social media, and to take the necessary action for the wellbeing of people in the disaster area.

The outcome of using purpose-built IS will support decisions on strategy for dealing with the situation, and disaster management teams will be able to analyse the data in order to train for disaster situations.


Renewable energy in the UK

The 2014 IPCC report stated that anthropogenic emissions of greenhouse gases have led to unprecedented levels of carbon dioxide, methane and nitrous oxide in the environment. The report also stated that the effect of greenhouse gases is extremely likely to have caused the global warming witnessed since the mid-20th century.

The 2018 IPCC report set new targets, aiming to limit warming to a maximum of 1.5°C. To reach this, we will need zero CO₂ emissions by the year 2050; the previous IPCC target of 2°C allowed us until roughly 2070 to reach zero emissions. Government policies will therefore have to be reassessed and current progress reviewed in order to confirm whether the UK is capable of reaching zero emissions by 2050 on its current plan.

Electricity Generation

Fossil fuels are natural fuels formed from the remains of prehistoric plant and animal life. Fossil fuels (coal, oil and gas) are crucial in any look at climate change, since burning them releases both carbon dioxide (a greenhouse gas) and energy. Hence, to reach the IPCC targets the UK needs to drastically reduce its usage of fossil fuels, either by improving efficiency or by using other methods of energy generation.

Whilst coal is a cheap energy source used to generate approximately 40% of the world’s electricity, it is arguably the most damaging to the environment, as coal releases more carbon dioxide into the atmosphere in relation to energy produced than any other fuel source. Coal power stations generate electricity by burning coal in a combustion chamber and using the heat energy to transform water into steam, which turns the propeller-like blades of a turbine. A generator (consisting of tightly wound metal coils) is mounted at one end of the turbine; when rotated at high velocity through a magnetic field, it generates electricity. However, the UK has pledged to fully eradicate coal from electricity generation by 2025, and these claims are well substantiated by the UK’s rapid decline in coal use: in 2015 coal accounted for 22% of electricity generated in the UK, by the second quarter of 2017 this was down to only 2%, and in April 2018 the UK even managed to go 72 hours without coal power.

Natural gas became a staple of British electrical generation in the 1990s, when the Conservative Party got into power and privatised the electrical supply industry. The “Dash for gas” was triggered by legal changes within the UK and EU allowing for greater freedom to use gas in electricity generation.

Whilst natural gas emits less CO₂ than coal, it emits far more methane. Methane does not remain in the atmosphere as long, but it traps heat to a far greater extent: according to the World Energy Council, methane emissions trap 25 times more heat than CO₂ over a 100-year timeframe.
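That 25:1 ratio is how methane is normally folded into carbon accounting: emissions are weighted by their 100-year global warming potential (GWP100) to give a CO₂-equivalent figure. A minimal sketch, using the GWP of 25 quoted above:

# Convert methane emissions to CO2-equivalent using GWP100 = 25
# (the World Energy Council figure quoted above).
def co2_equivalent_tonnes(methane_tonnes, gwp100=25):
    # GWP weighting expresses methane's extra heat-trapping over a
    # 100-year horizon as an equivalent mass of CO2.
    return methane_tonnes * gwp100

print(co2_equivalent_tonnes(1.0))  # 1 t of CH4 traps as much heat as ~25 t of CO2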

Natural gas produces electrical energy in a gas turbine. The gas is mixed with hot air and burned in a combustor; the hot combustion gas then pushes the turbine blades and, as in a coal plant, the turbine is attached to a generator, creating electricity. Gas turbines are hugely popular because they are a cheap source of energy generation and can be powered up quickly to respond to surges in electrical demand.

Combined Cycle Gas Turbines (CCGT) are an even better source of electrical generation. Whilst traditional gas turbines are cheap and fast-reacting, they only have an efficiency of approximately 30%. Combined cycle turbines, however, are gas turbines used in combination with steam turbines, giving an efficiency of between 50 and 60%: the hot exhaust from the gas turbine is used to create steam, which rotates the blades and generator of a steam turbine, allowing for greater thermal efficiency.
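The efficiency gain has a simple explanation: the steam cycle recovers work from heat the gas turbine would otherwise throw away. A minimal sketch of the standard textbook relation, with 30% and 40% as illustrative component efficiencies (assumptions for illustration, not figures from this essay):

# Combined-cycle efficiency: the steam cycle only sees the heat
# rejected by the gas turbine, hence the (1 - eta_gas) factor.
def combined_cycle_efficiency(eta_gas, eta_steam):
    return eta_gas + (1.0 - eta_gas) * eta_steam

print(combined_cycle_efficiency(0.30, 0.40))  # 0.58, within the quoted 50-60% range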

Nuclear energy is a potential way forward, as no CO₂ is emitted by nuclear power plants. Nuclear plants capture the energy released by atoms undergoing nuclear fission. In nuclear fission, a nucleus absorbs a colliding neutron and becomes unstable; the unstable nucleus then splits into fission products of smaller mass and emits two or three high-speed neutrons, which can collide with more nuclei, making them unstable and creating a chain reaction. The heat energy produced by splitting the atom is used to produce steam, which drives a turbine generator to produce electricity.

Currently, 21% of electricity generated in the UK comes from nuclear energy. In the 1990s, 25% of electricity came from nuclear, but old plants have gradually been retired; by 2025, UK nuclear output could halve. This is due to a multitude of reasons. Firstly, nuclear fuel is expensive in comparison to gas and coal. Secondly, nuclear waste is extremely radioactive and must be dealt with properly. Also, in light of tragedies such as Chernobyl and Fukushima, much of the British public has expressed concerns about nuclear energy, with the Scottish government refusing to open more plants.

In order to lower our CO₂ emissions it is crucial that we also utilise renewable energy. The UK currently gets relatively little of its energy from renewable sources, but almost all future plans place a huge emphasis on renewables.

The UK has great wind energy potential, as it is the windiest country in the EU, receiving 40% of the total wind that blows across the EU.

Wind turbines are straightforward machinery: the wind turns the turbine blades around a rotor connected to the main shaft, which spins a generator, creating electricity. In 2017, onshore wind generated enough energy to power 7.25 million homes a year and produced 9% of the UK’s electricity. However, despite the clear benefits of clean, renewable energy, wind energy is not without its problems. Firstly, it is an intermittent supply: the turbine generates no energy when there is no wind. It has also been opposed by members of the public for affecting the look of the countryside and for bird fatalities. These problems are magnified by the current Conservative government’s stance on wind energy, which seeks to limit onshore wind farm development despite public opposition to this “ban”.

Heating and Transport

Currently it is estimated that a third of carbon dioxide (CO2) emissions in the UK come from the heating sector. 50% of all heat emissions in the UK arise from domestic use, making it the main source of CO2 emissions in the heating sector; around 98% of domestic heating is used for space and water heating. The government has sought to reduce the emissions from domestic heating by issuing a series of regulations on new boilers: as of 1st April 2005, all new installations and replacements of boilers are required to be condensing boilers. As well as emitting much less CO2, condensing boilers are around 15-30% more efficient than older gas boilers. Reducing heat demand has also been an approach taken to reduce emissions. For instance, building standards in the UK have set higher levels of required thermal insulation for both domestic and non-domestic buildings in refurbishments and new projects. These policies are key to ensuring that both homes and buildings in industry are as efficient as possible at conserving heat.

Although progress is being made in improving current CO2-reducing systems, the potential for significant CO2 reductions relies upon low carbon technologies. Highly efficient technologies such as residential heat pumps and biomass boilers have the potential to be carbon-neutral sources of heat and could thereby massively reduce CO2 emissions from domestic use. However, finding the best route to a decarbonised future in the heating industry relies on more than which technology has the lowest carbon footprint. For instance, intermittent technologies such as solar thermal collectors cannot provide a sufficient level of heat in the winter and require a back-up source of heat, making them a less desirable option. Cost is also a major factor in consumer preference: for most consumers, a boiler is the cheapest option for heating, which poses a problem for low carbon technologies that tend to have significantly higher upfront costs. In response, the government has introduced policies such as the ‘Renewable Heat Incentive’, which aims to alleviate the expense by paying consumers for each unit of heat produced by low carbon technologies.

Around 30% of the heating sector is allocated to industrial use, making it the second largest cause of CO2 in this sector. Currently, combined heat and power (CHP) is the main process used to make industrial heat use more efficient, and it has shown CO2 reductions of up to 30%. Although this is a substantial reduction, alternative technology has the potential to deliver even more: carbon capture and storage (CCS), for example, could reduce CO2 emissions by up to 90%. However, CCS is a complex procedure that would require a substantial amount of funding, and as a result it is not currently implemented for industrial use in the UK.

Although heating is a significant contributor to CO2 emissions in the UK, much progress is also needed elsewhere. In 2017 it was estimated that 34% of all carbon dioxide (CO2) emissions in the UK were caused by transport, widely thought to be the sector in which the least progress is being made, with only a 2% reduction in CO2 emissions since 1990. Road transport contributes the highest proportion of emissions, more specifically petrol and diesel cars. Despite declining average CO2 emissions from new vehicles, the carbon footprint of the transport industry continues to grow due to the larger number of vehicles in the UK.

In terms of progress, CO2 emissions of new cars in 2017 were estimated to be 33.1% lower than in the early 2000s. Although efficiency is improving, more must be done to meet the targets set by the Climate Change Act 2008; a combination of decarbonising transport and implementing government legislation is vital. New technology such as battery electric vehicles (BEVs) has the potential to deliver significant reductions in the transport industry; accordingly, a report from the Committee on Climate Change suggests that 60% of all sales of new cars and vans should be ultra-low emission by 2030. However, the likelihood of achieving this is hindered by the constraints of new technologies: low emission vehicles tend to have significantly higher costs and suffer from a lack of consumer awareness. This reinforces the need for government support in promoting new technologies and cleaner fuels. To support the development and uptake of low carbon vehicles, the government has committed £32 million to fund BEV charging infrastructure from 2015-2020, and a further £140 million has been allocated to the ‘low carbon vehicle innovation platform’, which strives to advance the development and research of low emission vehicles. Progress has also been made in making these vehicles more cost-competitive, through exemption from taxes such as Vehicle Excise Duty and incentives such as plug-in grants of up to £3,500. Aside from passenger cars, improvements are also being made to the emissions of public transport: the average low emission bus in London could cut its CO2 emissions by up to 26 tonnes per year, earning the government’s support in England through the ‘Green Bus Fund’.

Conclusion

In 2017, renewables accounted for a record 29.3% of the UK’s electricity generation. This is a vast improvement on previous years and suggests the UK is on track to meet the new IPCC targets, although a lot of work still needs to be done, and government policies do need to be reassessed in light of the new targets. Scotland should reassess its nuclear policy, as nuclear might be a necessary stepping stone to reduced emissions until renewables can fully power the nation, and the UK government needs to reassess its allocation of funding, as investment in clean energy is currently on a downward trajectory.

Although progress has been made in reducing CO2 emissions in the heat and transport sectors, emissions throughout the UK remain much higher than desired. The Committee on Climate Change report to Parliament (2015) calls for the widespread electrification of heating and transport by 2030 to help prevent a 1.5 degree rise in global temperature. This is likely to pose a major challenge and will require a significant increase in electricity generation capacity in conjunction with greater policy intervention to encourage the uptake of low carbon technologies. Although it is unlikely that all consumers will switch to alternative technologies, if the government continues to tighten regulations on fossil-fuelled technologies while the heat and transport industries continue to make old and new systems more efficient, significant CO2 reductions should follow.


Is Nuclear Power a viable source of energy?

6th Form Economics project:

Nuclear power, the energy of the future of the 1950s, is now starting to feel like the past. Around 450 nuclear reactors worldwide currently generate 11% of the world’s electricity, or approximately 2,500 TWh a year, just under the total nuclear power generated globally in 2001 and only 500 TWh more than in 1991. The number of operating reactors worldwide has seen the same stagnation, increasing by only 31 since 1989, an annual growth of only 0.23% compared with 12.9% from 1959 to 1989. Most reactors, especially in Europe and North America, were built before the 1990s, and the average age of reactors worldwide is just over 28 years. Large-scale nuclear accidents such as Chernobyl in 1986 or, much more recently, Fukushima in 2011 have hurt public support for nuclear power and helped cause this decline, but the weight of evidence has increasingly suggested that nuclear is safer than most other energy sources and has an incredibly low carbon footprint, shifting the argument against nuclear from concerns about safety and the environment to questions about its economic viability. The crucial question that remains is how well nuclear power can compete with renewables to produce the low carbon energy required to tackle global warming.
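The annual growth rate quoted here is a compound annual growth rate; a small sketch follows, where the 1989 starting count of 420 reactors is an assumption chosen to be consistent with the essay’s “around 450” and “+31 since 1989”, not a sourced figure.

# Compound annual growth rate (CAGR) behind the reactor-count figures.
def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

print(cagr(420, 451, 30))  # ~0.0024, i.e. ~0.24%/yr, close to the quoted 0.23%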

The costs of most renewable energy sources have been falling rapidly, making them increasingly able to outcompete nuclear power as a low carbon option, and even fossil fuels in some places; photovoltaic panels, for example, halved in price from 2008 to 2014. Worse still for nuclear power, while the costs of renewable energy have been falling, plans for new nuclear plants have been plagued by delays and additional costs. In the UK, Hinkley Point C power station is set to cost £20.3bn, making it the world’s most expensive power station, and significant issues in the design have raised questions as to whether the plant will be completed by 2025, its current goal. In France, the Flamanville 3 reactor is now predicted to cost three times its original budget, and several delays have pushed the start-up date, originally set for 2012, to 2020. The story is the same in the US, where delays and extra costs have plagued the construction of the Vogtle 3 and 4 reactors, now due to be complete by 2020-21, four years over their original target. Nuclear power seemingly cannot deliver the cheap, carbon-free energy it promised and is being outperformed by renewable sources such as solar and wind.

The crucial and recurring issue with nuclear power is that it requires huge upfront costs, especially when plants are built individually, and provides revenue only years after the start of construction. Investment in nuclear is therefore risky, long term, and hard to do well on a small scale, making it a much bigger gamble, though new technologies such as Small Modular Reactors (SMRs) may change this in the coming decades. Improvements in other technologies over the period it takes to build a nuclear plant mean it is often better for private firms, which are less likely to afford the large-scale programmes that enable significant cost reductions or a lower debt-to-equity ratio in their capital structure, to invest in more easily scalable and shorter-term energy sources, especially with subsidies favouring renewables in many developed countries. All of this points to the fundamental flaw of nuclear: it requires going all the way. Small-scale nuclear programmes funded mostly with debt, with high discount rates and low capacity factors because the plants are switched off frequently, will invariably have a very high Levelised Cost of Energy (LCOE), as nuclear is so capital intensive.

That said, the reverse is true as well. Nuclear plants have very low operating costs and almost no external costs, and the cost of decommissioning a plant is only a small portion of the initial capital cost, even with a low discount rate such as 3%, because of the long lifespan of a nuclear plant and the fact that many lifespans can be extended. Operating costs include fuel costs, which are extremely low for nuclear at only 0.0049 USD per kWh, and non-fuel operation and maintenance costs, which are barely higher at 0.0137 USD per kWh. The latter includes waste disposal, a frequently cited political issue that has not been a serious technical problem for decades: waste can be reused relatively well and stored on site safely at very low cost, simply because the quantity of fuel used, and therefore of waste produced, is so small. The fuel, uranium, is abundant, and technology enabling uranium to be extracted from sea water would give access to a 60,000-year supply at present rates of consumption, so costs from ‘resource depletion’ are also small. Finally, external costs represent a very small proportion of running costs: the highest estimates for health costs and potential accidents are 5€/MWh and 4€/MWh respectively, though some estimates fall to only 0.3€/MWh for potential accidents when past records are adjusted to factor in improvements in safety standards; these estimates vary significantly because the total number of reactors is very small.
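
As a rough, illustrative check on these figures (using the costs quoted above and assuming, purely for illustration, a 60-year plant life): fuel plus non-fuel running costs sum to roughly \(0.0049 + 0.0137 \approx 0.019\) USD per kWh, i.e. under two cents per kilowatt-hour; and a decommissioning bill falling due 60 years after construction, discounted at 3% per year, is scaled by \(1/(1.03)^{60} \approx 0.17\), so even a bill equal to a fifth of the original capital cost would add only around 3% to the plant’s present-value cost.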

In the right circumstances, therefore, nuclear power remains one of the cheapest ways to produce electricity. Many LCOE (Levelised Cost of Energy) estimates, which are designed to factor in all costs over the lifetime of a unit to give a more accurate representation of the costs of different types of energy (though they usually omit system costs), point to nuclear as a cheaper energy source than almost all renewables and most fossil fuels at low discount rates.

LCOE costs taken from ‘Projected Costs of Generating Electricity 2015 Edition’ and system costs taken from ‘Nuclear Energy and Renewables (NEA, 2012)’ have been combined by the World Nuclear Association to give LCOE figures for four countries, comparing the costs of nuclear to other energy sources. A discount rate of 7% is used; the study applies a $30/t CO2 price on fossil fuel use and uses 2013 US$ values and exchange rates. It is important to bear in mind that LCOE estimates vary widely, as they rest on different assumptions and are very difficult to calculate, but the resulting comparison makes clear that nuclear power is still more than viable: it is the cheapest source in three of the four countries and third cheapest in the fourth, behind onshore wind and gas.

Decision making during the Fukushima disaster

Introduction

On March 11, 2011, a tsunami struck the east coast of Japan and caused a disaster at the Fukushima Daiichi nuclear power plant. In the days following the natural disaster, many decisions were made to manage the crisis. This paper will examine the decisions made during the crisis, adopting the Governmental Politics Model, a model designed by Allison and Zelikow (1999), to analyse the events. The research question of this paper is therefore: to what extent does the Governmental Politics Model explain the decisions made during the Fukushima disaster?

First, this paper will lay the theoretical basis for the analysis: the Governmental Politics Model and all crucial concepts within it are discussed. A description of the Fukushima case will follow; since the reader is expected to have general knowledge of the Fukushima nuclear disaster, the case description will be very brief. Together, the theoretical framework and the case study lay the basis for the analysis, which will look into the decisions government and Tokyo Electric Power Company (TEPCO) officials made during the crisis.

Theory

Allison and Zelikow designed three models to understand the outcomes of bureaucracies and decision making in the aftermath of the Cuban Missile Crisis of 1962. The first model to be designed was the Rational Actor Model, which focusses on the ‘logic of consequences’ and rests on the basic assumption of rational action by a unitary actor. The second model designed by Allison and Zelikow is the Organizational Behavioural Model, which focusses on the ‘logic of appropriateness’ and rests on the main assumption of loosely connected allied organizations (Broekema, 2019).

The third model conceived by Allison and Zelikow is the Governmental Politics Model (GPM). This model stresses the importance of power in decision-making. According to the GPM, decision making has nothing to do with rational, unitary actors or organizational output and everything to do with a bargaining game. This means that governments make decisions in other ways; according to the GPM there are four aspects to this. These aspects are the choices of one, the results of minor games, the results of central games, and foul-ups (Allison & Zelikow, 1999).

The following concepts are essential to the GPM. First, it is important to note that power in government is shared: different institutions have independent bases, and power is therefore divided among them. Second, persuasion is an important factor in the GPM; the power to persuade is what differentiates power from authority. Third, bargaining according to the process is identified, meaning there is a structure to the bargaining processes. Fourth, ‘power equals impact on outcome’ is mentioned in the Essence of Decision: there is a difference between what can be done and what is actually done, and what is actually done depends on the power involved in the process. Lastly, intranational and international relations are of great importance to the GPM; these relations are intertwined and involve a vast set of international and domestic actors (Allison & Zelikow, 1999).

The five previous concepts are not the only ones relevant to the GPM. The GPM is inherently based on group decisions, and in this type of decision making Allison and Zelikow identify seven factors. The first factor is a positive one: group decisions, when certain requirements are met, produce better decisions. Second is the agency problem, which includes information asymmetry and the fact that actors compete over different goals. Third, it is important to identify the actors in the ‘game’, that is, to find out who participates in the bargaining process. Fourth, problems with different types of decisions are outlined. Fifth, framing issues and agenda setting are important factors in the GPM. Sixth, group decisions are not necessarily positive: they can easily lead to groupthink, a negative consequence meaning that no other opinions are considered. Last, the difficulties of collective action are outlined by Allison and Zelikow; these stem from the fact that the GPM considers not unitary actors but different organizations (Allison & Zelikow, 1999).

Besides the concepts mentioned above, the GPM comes with a concise paradigm, which is essential for the analysis of the Fukushima case. The paradigm consists of six main points. The first is that decisions are the result of politics; this is the core of the GPM and once again stresses that decisions are the result of bargaining. Second, as said before, it is important to identify the players of the political ‘game’, their preferences and goals, and the impact they can have on the final decision. Once this is analysed, one has to look at the actual game that is played: the action channels and the rules of the game can then be determined. Third, the ‘dominant inference pattern’ once again goes back to decisions being the result of bargaining, but this point makes clear that differences and misunderstandings have to be taken into account. Fourth, Allison and Zelikow identify ‘general propositions’, a term covering all the concepts examined in the second paragraph of the theory section of this paper. Fifth, specific propositions are considered; these concern decisions on the use of force and military action. Last is the importance of evidence: when examining crisis decision making, documented timelines and, for example, minutes or other accounts are of great importance (Allison & Zelikow, 1999).

Case

In the definition of Prins and Van den Berg (2018), the Fukushima Daiichi disaster can be regarded as a safety case, because it was an unintentional event that caused harm to humans.

The crisis was initiated by an earthquake of magnitude 9.0 on the Richter scale, followed by a tsunami whose waves reached a height of 10 meters. Due to the earthquake, all external power lines, which are needed for cooling the fuel rods, were disconnected. Countermeasures for this situation were in place; however, the sea walls were unable to protect the nuclear plant from flooding, which rendered the backup countermeasures, the diesel generators, inadequate (Kushida, 2016).

Due to the lack of electricity the nuclear fuel rods were not being cooled, and a ‘race for electricity’ therefore started. Moreover, the situation inside the reactors was unknown: meltdowns had already occurred in reactors 1 and 2. Because of the risk of explosions, the decision was made to vent the reactors. Nevertheless, hydrogen explosions materialized in reactors 1, 2 and 4, which exposed the environment to radiation. To counter the dispersal of radiation, the essential decision was made to inject sea water into the reactors (Kushida, 2016).

Analysis

This analysis will look into the decision, or decisions, to inject seawater into the damaged reactors. First, a timeline of the decisions will be outlined to build further on the case study above. Then the events and decisions made will be set against the GPM paradigm, with the six main points as described in the theory section.

The need to inject sea water arose after the first stages described in the case study had passed. According to Kushida, government officials and political leaders began voicing the necessity of injecting the water at 6:00 p.m. on March 12, the day after the earthquake. According to these officials it would have one very positive outcome, namely the cooling of the reactors and the fuel pool. However, the use of sea water might have negative consequences too: it would ruin the reactors because of the salt in the sea water, and it would produce vast amounts of contaminated water which would be hard to contain (Kushida, 2016). TEPCO experienced many difficulties with cooling the reactors, as described in the case study, because of the lack of electricity. Nevertheless, they were averse to injecting sea water into the reactors since this would ruin them. Still, after the first hydrogen explosion occurred in reactor 1, TEPCO plant workers started the injection of sea water into this specific reactor (Holt et al., 2012). A day later, on March 13, sea water injection started in reactor 3, and on the 14th of March in reactor 2 (Holt et al., 2012).

When looking at the decisions made by the government or by TEPCO plant workers, it is crucial to consider the chain of decision making by TEPCO leadership too. TEPCO leadership was initially not very positive towards injecting seawater because of the disadvantages mentioned earlier: the plant would become unusable in the future and vast amounts of contaminated water would be created. The government therefore had to issue an order to TEPCO to start injecting seawater, which it did at 8:00 p.m. on 12 March. However, Yoshida, the Fukushima Daiichi plant manager, had already started injecting seawater at 7:00 p.m. (Kushida, 2016).

As one can already see, different interests were at play, and the outcome of the eventual decision can well be regarded as a political resultant. It is therefore crucial to examine the chain of decisions through the GPM paradigm. The first factor of this paradigm concerns decisions as a result of bargaining, which can clearly be seen in the decision to inject seawater: TEPCO leadership was initially not a proponent of this method, but after government officials ordered them to execute the injection they had no choice. Second, according to the theory, it is important to identify the players of the ‘game’ and their goals. In this instance three different players can easily be pointed out: the government, TEPCO leadership and Yoshida, the plant manager. The government’s goal was to keep its citizens safe during the crisis, TEPCO wanted to preserve the reactor as long as possible, whereas Yoshida wanted to contain the crisis. This shows there were conflicting goals.

To further apply the GPM to the decision to inject seawater, one can review the comprehensive ‘general propositions’. Here miscommunication is a very relevant factor, and it was certainly a big issue in the decision to inject seawater. As said before, Yoshida had already started injecting seawater before he received approval from his superiors. One might even wonder whether there was a misunderstanding of the crisis by TEPCO leadership, given that they hesitated to inject the seawater necessary to cool the reactors. It can be argued that this hesitation constitutes a great deal of misunderstanding of the crisis, since there was no plant left to be saved at the time the decision was made.

The fifth and sixth aspects of the GPM paradigm are less relevant to the decisions made. The ‘specific propositions’ refer to the use of force, which was not an option in dealing with the Fukushima crisis; the Japanese Self-Defence Forces were dispatched to the plant, but only to provide electricity (Kushida, 2016). The sixth aspect, evidence, is less of a concern in this case, since many scholars, researchers and investigators have written at great length about what happened during the Fukushima crisis, so more than sufficient information is available.

The political and bargaining game in the decision to inject seawater into the reactors is clearly visible. The different actors in the game had different goals; eventually the government won this game and the decision to inject seawater was made. Even before that, the plant manager had already begun to inject seawater because the situation was too dire.

Conclusion

This essay reviewed decision making during the Fukushima Daiichi nuclear power plant disaster of 11 March 2011. More specifically, the decision to inject seawater into the reactors to cool them was scrutinized, using the Governmental Politics Model. The decision to inject seawater into the reactors was the result of a bargaining game, in which different actors with different objectives played the decision-making ‘game’.

Tackling misinformation on social media

As the world of social media expands, the amount of misinformation rises as more organisations hop on the bandwagon of using the digital realm to their advantage. Twitter, Facebook, Instagram, online forums and other websites have become the pinnacle of news gathering for many individuals. Information is easily accessible to people from all walks of life, meaning that people are becoming more engaged with real-life issues. Consumers absorb and take in information as easily as ever before, which proves to be equally advantageous and disadvantageous, because there is an evident boundary between misleading and truthful information that is hard to cross without research on the topic. The accuracy of public information is highly questionable, which can easily lead to problems. Despite the ongoing debate about source credibility on any platform, there are ways to tackle the issue through “expertise/competence (i. e., the degree to which a perceiver believes a sender to know the truth), trustworthiness (i. e., the degree to which a perceiver believes a sender will tell the truth as he or she knows it), and goodwill” (Cronkhite & Liska, 1976). This is why it has become critical for information to be accurate, ethical and reliable for consumers, and why verifying information is important regardless of the type of social media outlet. This essay will highlight why information needs to fit these criteria.

Putting out credible information prevents and reduces misconceptions, convoluted meanings and inconsistent facts, which lowers the likelihood of issues surfacing; this in turn saves time for the consumer and the producer. The presence of risk raises the issue of how much of this information should be consumed by the public. The perception of source credibility becomes an important concept to analyse within social media, especially in times of crisis, when rationality declines and people often simply take the first thing they see. With the increasing amount of information available through newer channels, the responsibility for judging information devolves away from professional producers and onto consumers (Haas & Wearden, 2003). Much of the public is unaware that this information is prone to bias and selective sharing, which can present the actual facts very differently. One such example is the incident at Tokyo Electric Power Co.’s Fukushima No.1 nuclear power plant in 2011, where the plant experienced triple meltdowns. A misconception floating around holds that food exported from Fukushima is too contaminated with radioactive substances to be healthy or fit to eat. But the truth is that strict screening reveals the contamination is below the government standard required to pose a threat. (arkansa.gov.au) Since the accident, products shipped from Fukushima have dropped considerably in price and have not recovered since 2011, forcing retailers into bankruptcy. (japantimes.co.jp) But thanks to the use of social media and organisations releasing information to the public, Fukushima was able to raise funds and receive help from other countries, for example the U.S. sending $100,000 and China sending emergency supplies as assistance. (theguardian.com) This would have been impossible to achieve without the sharing of credible, reliable and ethical information about the country and the social media support spotlighting the incident.

Accurate, ethical and reliable information opens the pathway for producers to secure a relationship with consumers, which can be used to strengthen their own businesses and expand their industries further while gaining support from the public. The idea is to have a healthy relationship, free of uneasiness, in which monetary gains and social earnings increase, with social media playing a pivotal role in deciding the route the relationship takes. But when this is done incorrectly, organisations can become unsuccessful, knowing little to nothing about the changed dynamics of consumer behaviour in the digital landscape. Consumer informedness means that consumers are well informed about available products or services with a precision that influences their willingness to make decisions, and this increase in consumer informedness can instigate change in consumer behaviour. (uni-osnabrueck.de) In the absence of accurate, ethical and reliable information, people and organisations will make terrible decisions without hesitation, which leads to losses and steps backwards. As Saul Eslake (Saul-Eslake.com) says, “they will be unable to help or persuade others to make better decisions; and no-one will be able to ascertain whether the decisions made by particular individuals or organisations were the best ones that could have been made at the time”. Recently, a YouTuber named Shane Dawson made a video that sparked controversy for the company Chuck E. Cheese over pizza slices that did not look as though they belonged to the same pizza, theorising that parts of the pizzas may have been reheated or recycled from other tables. In response, Chuck E. Cheese replied in multiple media outlets to debunk the theory: “These claims are unequivocally false. We prep the dough daily for our made to order pizzas, which means they’re not always perfectly round, but they are still great tasting.” (https://twitter.com/chuckecheeses) It is worth noting that no information other than pictures backed up the claim that the pizza was reused, and the food company went as far as creating a video showing its pizza preparation. To back up the company, ex-employees spoke up and shared their own side of the story, debunking the theory further. It is these quick responses that prevented what could have become a small downturn in sales for the Chuck E. Cheese company. (washintonpost.com) This event highlights how the release of information can fall in favour of whoever uses it correctly, and how effective credible information can be. Credible information can cut both ways, especially when it has the support of others, whether online or in real life. The assumption or guess made when there is no information available to draw on is called a ‘heuristic value’, which is associated with information that has no credibility.

Mass media have long been a dominant source of information (Murch, 1971). They are generally assumed to provide credible, valuable, and ethical information open to the public (Heath, Liao, & Douglas, 1995). However, alongside traditional forms of media, newer media are increasingly available for information seeking and reporting. According to PNAS (www.pnas.org), “The emergence of social media as a key source of news content has created a new ecosystem for the spreading of misinformation. This is illustrated by the recent rise of an old form of misinformation: blatantly false news stories that are presented as if they are legitimate. So-called “fake news” rose to prominence as a major issue during the 2016 US presidential election and continues to draw significant attention.” This affects how we as social beings perceive and analyse information we see online compared to real life. Beyond reducing any intervention’s effectiveness, failing to distinguish real stories from false ones increases belief in false content, leading to biased and misleading content that fools the audience. One such incident is Michael Jackson’s death in June 2009, when he died from acute propofol and benzodiazepine intoxication administered by his doctor, Dr. Murray. (nytimes.com) Much of the public concluded that Michael Jackson had been murdered on purpose, but the court convicted Dr. Murray of involuntary manslaughter, as the doctor proclaimed that Jackson had begged him to give more. That fact was overlooked by the general public due to bias. This underlines how information is selectively picked up by the public and how not all information is revealed, swaying the audience. A study conducted online by Jason and his team (JCMC [CQU]) revealed that Facebook users tended to believe their friends almost instantly, even without a link or proper citation to a website to back up their claim: “Using a person who has frequent social media interactions with the participant was intended to increase the external validity of the manipulation.” Whether online information is taken as truth is thus left to the perception of the viewer, supporting the idea that information online is not fully credible unless it comes straight from the source, and underscoring the importance of releasing credible information.

Information has the power to inform, explain and expand on topics and concepts. But it also has the power to create inaccuracies and confusion, which hurts the public and damages the reputation of companies. The goal is to move forwards, not backwards. Many companies have gotten themselves into disputes over incorrect information, which could easily have been avoided by releasing accurate, ethical and reliable information from the beginning. False information can start disputes, and true information can provide resolution. The public has become less attentive to mainstream news altogether, which raises the problem of what can be trusted. Companies and organisations need their information to be as accurate and reliable as possible to defeat and reduce this issue, for increased negativity and incivility exacerbate the media’s credibility problem: “People of all political persuasions are growing more dissatisfied with the news, as levels of media trust decline.” (JCMC [CQU]) In 2010, Dannon released online statements and false advertisements claiming that its Activia yogurt had “special bacterial ingredients.” A consumer named Trish Wiener lodged a complaint against Dannon. The yogurts were being marketed as “clinically” and “scientifically” proven to boost the immune system and help regulate digestion. However, the judge saw these claims as unproven, as in many other products in Dannon’s line that carried the same statement. “This landed the company a $45 million class action settlement.” (businessinsider.com) It did not help that Dannon’s prices were inflated compared to other yogurts on the market: “The lawsuit claims Dannon has spent ‘far more than $100 million’ to convey deceptive messages to U.S. consumers while charging 30 percent more than other yogurt products.” (reuters.com) This highlights how inaccurate information can cost millions of dollars to settle and resolve. Yet it also shows how readily the public can hold irresponsible producers to account for their actions and give leeway to justice.

Socio-political significance of Turkey’s emergent neo-Ottoman cultural phenomenon

Over the last decade, Turkey’s cultural sphere has witnessed a wave of Ottomania—a term describing the recent cultural fervor for everything Ottoman. Although this neo-Ottoman cultural phenomenon is not entirely new, having had a previous cycle back in the 1980s and 1990s during the heyday of Turkey’s political Islam, it now has a rather novel characteristic and distinct pattern of operation. This revived Ottoman craze is discernible in what I call the neo-Ottoman cultural ensemble—referring to a growing array of Ottoman-themed cultural productions and sites that evoke Turkey’s Ottoman-Islamic cultural heritage. For example, the celebration of the 1453 Istanbul conquest no longer merely takes place as an annual public commemoration by the Islamists,[1] but has been widely promulgated, reproduced, and consumed in various forms of popular culture, such as: the Panorama 1453 History Museum; a fun ride called the Conqueror’s Dream (Fatih’in Rüyası) at the Vialand theme park; the highly publicized, high-grossing blockbuster The Conquest 1453 (Fetih 1453); and the primetime television costume drama The Conqueror (Fatih). It is the “banal,” or “mundane,” ways in which society itself, rather than the government or state institutions, practices this everyday Ottomania that distinguish this emergent form of neo-Ottomanism from its earlier phases.[2]

This is the context in which the concept of neo-Ottomanism has acquired its cultural dimension and analytical currency for comprehending the proliferating neo-Ottoman cultural phenomenon. However, when the concept is employed in contemporary cultural debates, it generally follows two trajectories that are common in the literature of Turkish domestic and foreign politics. These trajectories conceptualize neo-Ottomanism as an Islamist political ideology and/or a doctrine of Turkey’s foreign policy in the post-Cold War era. This essay argues that these two conventional conceptions tend to overlook the complexity and hybridity of Turkey’s latest phase of neo-Ottomanism. As a result, they tend to understand the emergent neo-Ottoman cultural ensemble as merely a representational apparatus of the neoconservative Justice and Development Party’s (AKP; Adalet ve Kalkınma Partisi) ideology and diplomatic strategy.

This essay hence aims to reassess the analytical concept of neo-Ottomanism and the emergent neo-Ottoman cultural ensemble by undertaking three tasks. First, through a brief critique of the concept of neo-Ottomanism, I will discuss its common trajectories and their limitations for comprehending the latest phase of the neo-Ottoman cultural phenomenon. My second task is to propose a conceptual move from neo-Ottomanism to Ottomentality by incorporating the Foucauldian perspective of governmentality. Ottomentality is an alternative concept that I deploy here to underscore the overlapping relationship between neoliberal and neo-Ottoman rationalities in the AKP’s government of culture and diversity. I contend that neoliberalism and neo-Ottomanism are inseparable governing rationalities of the AKP and that their convergence has engendered new modes of governing the cultural field as well as regulating inter-ethnic and inter-religious relations in Turkey. And finally, I will reassess the neo-Ottoman cultural ensemble through the analytical lens of Ottomentality. I contend that the convergence of neoliberal and neo-Ottoman rationalities has significantly transformed the relationships of state, culture, and the social. As the cases of the television historical drama Magnificent Century (Muhteşem Yüzyıl) and the film The Conquest 1453 (Fetih 1453) shall illustrate, the neo-Ottoman cultural ensemble plays a significant role as a governing technique that constitutes a new regime of truth based on market mentality and religious truth. It also produces a new subject of citizenry, who is responsible for enacting its right to freedom through participation in the culture market, complying with religious norms and traditional values, and maintaining a difference-blind and discriminatory model of multiculturalism.

A critique of neo-Ottomanism as an analytical concept

Although the concept of neo-Ottomanism has been commonly used in Turkish Studies, it has become a loose term referring to anything associated with Islamist political ideology, nostalgia for the Ottoman past, and an imperialist ambition of reasserting Turkey’s economic and political influence within the region and beyond. Some scholars have recently indicated that the concept of neo-Ottomanism is running out of steam, as it lacks meaningful definition and explanatory power in studies of Turkish politics and foreign policy.[3] The concept’s ambiguity and weak analytical and explanatory value are mainly due to divergent, competing interpretations and a lack of critical evaluation within the literature.[4] Nonetheless, despite the concept being equivocally defined, it is most commonly understood along two identifiable trajectories. First, it is conceptualized as an Islamist ideology, responding to the secularist notions of modernity and nationhood and aiming to reconstruct Turkish identity by evoking Ottoman-Islamic heritage as an essential component of Turkish culture. Although neo-Ottomanism was initially formulated by a collaborating group of secular, liberal, and conservative intellectuals and political actors in the 1980s, it is closely linked to the consolidated socio-economic and political power of the conservative middle class. This trajectory considers neo-Ottomanism primarily a form of identity politics and a result of political struggle in opposition to the republic’s founding ideology of Kemalism. Second, it is understood as an established foreign policy framework reflecting the AKP government’s renewed diplomatic strategy in the Balkans, Central Asia, and the Middle East, wherein Turkey plays an active role. This trajectory regards neo-Ottomanism as a political doctrine (often referring to Ahmet Davutoglu’s Strategic Depth, which serves as the guidebook for Turkey’s diplomatic strategy in the 21st century), which sees Turkey as a “legitimate heir of the Ottoman Empire”[5] and seeks to reaffirm Turkey’s position in the changing world order of the post-Cold War era.[6]

As a result of this lack of critical evaluation of the conventional conceptions of neo-Ottomanism, contemporary cultural analyses have largely followed the “ideology” and “foreign policy” trajectories as explanatory guidance when assessing the emergent neo-Ottoman cultural phenomenon. I contend that the neo-Ottoman cultural phenomenon is more complex than what these two trajectories offer to explain. Analyses that adopt these two approaches tend to run a few risks. First, they tend to perceive neo-Ottomanism as a monolithic imposition upon society. They presume that this ideology, when inscribed onto domestic and foreign policies, somehow has a direct impact on how society renews its national interest and identity.[7] And they tend to understand the neo-Ottoman cultural ensemble as merely a representational device of the neo-Ottomanist ideology. For instance, Şeyda Barlas Bozkuş, in her analyses of the Miniatürk theme park and the 1453 Panorama History Museum, argues that these two sites represent the AKP’s “ideological emphasis on neo-Ottomanism” and “[create] a new class of citizens with a new relationship to Turkish-Ottoman national identity.”[8] Second, contemporary cultural debates tend to overlook the complex and hybrid nature of the latest phase of neo-Ottomanism, which rarely operates on its own, but more often relies on and converges with other political rationalities, projects, and programs. As this essay shall illustrate, when closely examined, the current configuration of neo-Ottomanism is more likely to reveal internal inconsistencies as well as a combination of multiple and intersecting political forces.

Moreover, as a consequence of the two risks mentioned above, contemporary cultural debates may have overlooked some of the symptomatic clues and, hence, underestimated the socio-political significance of the latest phase of neo-Ottomanism. A major symptomatic clue that is often missed in cultural debates on the subject is culture itself. Insufficient attention has been paid to the AKP’s rationale of reconceptualizing culture as an administrative matter—a matter that concerns how culture is to be perceived and managed, by what culture the social should be governed, and how individuals might govern themselves with culture. At the core of the AKP government’s politics of culture and neoliberal reform of the cultural field is the question of the social.[9] Its reform policies, projects, and programs are a means of constituting a social reality and directing social actions. When culture is aligned with neoliberal governing rationality, it redefines a new administrative culture and new rules and responsibilities for citizens in cultural practices. Culture has become not only a means to advance Turkey in global competition,[10] but also a technology for managing the diversifying culture that has resulted from the process of globalization. As Brian Silverstein notes, “[culture] is among other things and increasingly to be seen as a major target of administration and government in a liberalizing polity, and less a phenomenon in its own right.”[11] While many studies acknowledge the AKP government’s neoliberal reform of the cultural field, they tend to regard neo-Ottomanism as primarily an Islamist political agenda operating outside of the neoliberal reform. It is my conviction that neoliberalism and neo-Ottomanism are inseparable political processes and rationalities, which have merged and engendered new modalities of governing every aspect of cultural life in society, including minority cultural rights, freedom of expression, individuals’ lifestyles, and so on. Hence, by overlooking the “centrality of culture”[12] in relation to the question of the social, contemporary cultural debates tend to oversimplify the emergent neo-Ottoman cultural ensemble as nothing more than an ideological machinery of the neoconservative elite.

From neo-Ottomanism to Ottomentality

In order to more adequately assess the socio-political significance of Turkey’s emergent neo-Ottoman cultural phenomenon, I propose a conceptual shift from neo-Ottomanism to Ottomentality. This shift involves not only rethinking neo-Ottomanism as a form of governmentality, but also thinking of neoliberal and neo-Ottoman rationalities in collaborative terms. Neo-Ottomanism is understood here as Turkey’s current form of neoconservatism, a prevalent political rationality whose governmental practices are not solely based on Islamic values, but also draw from and produce a new political culture that considers Ottoman-Islamic toleration and pluralism the foundation of modern liberal multiculturalism in Turkey. Neoliberalism, in the same vein, far from being a totalizing concept describing an established set of political ideology or economic policy, is conceived here as a historically and locally specific form of governmentality that must be analyzed by taking into account the multiple political forces that gave it its unique shape in Turkey.[13] My claim is that when these two rationalities merge in the cultural domain, they engender a new art of government, which I call the government of culture and diversity.

This approach is therefore less concerned with a particular political ideology or the question of “how to govern,” and more with the “different styles of thought, their conditions of formation, the principles and knowledges that they borrow from and generate, the practices they consist of, how they are carried out, their contestations and alliances with other arts of governing.”[14] In light of this view, and for a practical purpose, Ottomentality is an alternative concept that I attempt to develop here to avoid the ambiguous meanings and analytical limitations of neo-Ottomanism. This concept underscores the convergence of neoliberal and neo-Ottoman rationalities as well as the interrelated discourses, projects, policies, and strategies that have developed around them for regulating cultural activities and directing inter-ethnic and inter-religious relations in Turkey. It pays attention to the techniques and practices that have significant effects on the relationships of state, culture, and the social. It is concerned with the production of knowledge, or truth, on the basis of which a new social reality of ‘freedom,’ ‘tolerance,’ and ‘multiculturalism’ in Turkey is constituted. Furthermore, it helps to identify the type of political subject whose demand for cultural rights and participatory democracy is reduced to market terms and a narrow understanding of multiculturalism, and whose criticism of this new social reality is increasingly subjected to judicial exclusion and discipline.

I shall note that Ottomentality is an authoritarian type of governmentality—a specific type of illiberal rule operating within the structure of modern liberal democracy. As Mitchell Dean notes, although the literature on governmentality has focused mainly on liberal democratic rule practiced through individual subjects’ active role (as citizens) and exercise of freedom, there are also “non-liberal and explicitly authoritarian types of rule that seek to operate through obedient rather than free subjects, or, at a minimum, endeavor to neutralize any opposition to authority.”[15] He suggests that a useful way to approach this type of governmentality would be to identify the practices and rationalities which “divide” or “exclude” those who are subjected to be governed.[16] According to Foucault’s notion of “dividing practices,” “[t]he subject is either divided inside himself or divided from others. This process objectivizes him. Examples are the mad and the sane, the sick and the healthy, the criminals and the ‘good boys’.”[17] Turkey’s growing neo-Ottoman cultural ensemble can be considered such an exclusionary practice, one which seeks to regulate the diversifying culture by dividing the subjects into categorical, if not polarized, segments based on their cultural differences. For instance, mundane practices such as going to museums and watching television shows may produce subject positions which divide subjects into such categories as the pious and the secular, the moral and the degenerate, and the Sunni-Muslim-Turk and the ethno-religious minorities.

Reassessing the neo-Ottoman cultural ensemble through the lens of Ottomentality

In this final section, I propose a reassessment of the emergent neo-Ottoman cultural ensemble by looking beyond the conventional conceptions of neo-Ottomanism as “ideology” and “foreign policy.” Using the analytical concept of Ottomentality, I aim to examine the state’s changing role and governing rationality in culture, the discursive processes of knowledge production for rationalizing certain practices of government, and the techniques of constituting a particular type of citizenry who acts upon themselves in accordance with the established knowledge/truth. Nonetheless, before proceeding to an analysis of the government of culture and diversity, a brief overview of the larger context in which the AKP’s Ottomentality took shape would be helpful.

Context

Since the establishment of the Turkish republic, the state has played a major role in maintaining a homogeneous national identity by suppressing public claims of ethnic and religious difference through militaristic intervention. The state’s strict control of cultural life in society, in particular its assertive secularist approach to religion and its ethnic conception of Turkish citizenship, resulted in unsettling tensions between ethno-religious groups in the 1980s and 1990s, i.e. the Kurdish question and the 1997 “soft coup.” These social tensions indicated the limits of the state-led modernization and secularization projects in accommodating the ethnic and pious segments of society.[18] This was also a time when Turkey began to witness the declining authority of the founding ideology of Kemalism as an effect of economic and political liberalization. When the AKP came to power in 2002, one of the most urgent political questions was thus “the limits of what the state can—or ought for its own good—reasonably demand of citizens […] to continue to make everyone internalize an ethnic conception of Turkishness.”[19] At this political juncture, it was clear that a more inclusive socio-political framework was necessary in order to mitigate the growing tension arising from identity claims.

Apart from domestic affairs, a few vital transnational initiatives also played a part in the AKP’s formulation of neoliberal and neo-Ottoman rationalities. First, in the aftermath of the attacks in New York on September 11 (9/11) in 2001, the Middle East and Muslim communities around the world became the target of intensified political debates. In the midst of anti-Muslim and anti-terror propaganda, Turkey felt a need to rebuild its image by aligning with the United Nations’ (UN) resolution on “The Alliance of Civilizations,” which called for cross-cultural dialogue between countries through cultural exchange programs and transnational business partnerships.[20] Turkey took on the leading role in this resolution and launched extensive developmental plans designed to rebuild Turkey’s image as a civilization of tolerance and peaceful co-existence.[21] The Ottoman-Islamic civilization, known for its legacy of cosmopolitanism and ethno-religious toleration, hence became an ideal trademark for Turkey in the project of an “alliance of civilizations.”[22]

Second, Turkey’s accelerated EU negotiations between the late 1990s and mid 2000s provided a timely opportunity for the newly elected AKP government to launch “liberal-democratic reform,”[23] which would significantly transform the way culture was to be administered. Culture, among the prioritized areas of administrative reform, was now reorganized to comply with the EU integration plan. By incorporating the EU’s conception of culture as a way of enhancing “freedom, democracy, solidarity and respect for diversity,”[24] the AKP-led national cultural policy would shift away from the state-centered, protectionist model of the Kemalist establishment towards one that highlights “principles of mutual tolerance, cultural variety, equality and opposition to discrimination.”[25]

Finally, the selection of Istanbul as the 2010 European Capital of Culture (ECoC) is particularly worth noting, as this event enabled local authorities to put the neoliberal and neo-Ottoman governing rationalities into practice through extensive urban projects and branding techniques. By sponsoring and showcasing different European cities each year, the ECoC program aims at promoting a multicultural European identity beyond national borders.[26] The 2010 Istanbul ECoC was an important opportunity for Turkey not only to promote its EU candidacy, but also for the local governments to pursue urban developmental projects.[27] Some of the newly formed Ottoman-themed cultural sites and productions were part of the ECoC projects for branding Istanbul as a cultural hub where the East and West meet. It is in this context that the interplay between the neoliberal and neo-Ottoman rationalities can be vividly observed in the form of the neo-Ottoman cultural ensemble.

Strong state, culture, and the social

Given the contextual background mentioned above, one could argue that the AKP’s neoliberal and neo-Ottoman rationalities arose as critiques of the republican state’s excessive intervention in society’s cultural life. The transnational initiatives that required Turkey to adopt a liberal democratic paradigm have therefore given way to the formulation and convergence of these two forms of governmentalities that would significantly challenge the state-centered approach to culture as a means of governing the social. However, it would be inaccurate to claim that the AKP’s prioritization of private initiatives in cultural governance has effectively decentralized or democratized the cultural domain from the state’s authoritarian intervention and narrow definition of Turkish culture. Deregulation of culture entails sophisticated legislation concerning the roles of the state and civil society in cultural governance. Hence, for instance, the law of promotion of culture, the law of media censorship, and the new national cultural policy prepared by the Ministry of Culture and Tourism explicitly indicate not only a new vision of national culture, but also the roles of the state and civil society in promoting and preserving national culture. It shall be noted that culture as a governing technology is not an invention of the AKP government. Culture has always been a major area of administrative concern throughout the history of the Turkish republic. As Murat Katoğlu illustrates, during the early republic, culture was conceptualized as part of the state-led “public service” aimed to inform and educate the citizens.[28] Arts and culture were essential means for modernizing the nation; for instance, the state-run cultural institutions, i.e. state ballet, theater, museum, radio and television, “[indicate] the type of modern life style that the government was trying to advocate.”[29] Nonetheless, the role of the state, the status of culture, and the techniques of managing it have been transformed as Turkey undergoes neoliberal reform. In addition, Aksoy suggests that what distinguishes the AKP’s neoliberal mode of cultural governance from that of the early republic modernization project is that market mentality has become the administrative norm.[30] Culture is now reconceptualized as an asset for advancing Turkey in global competition and a site for exercising individual freedom rather than a mechanism of social engineering. And Turkey’s heritage of Ottoman-Islamic civilization in particular is utilized as a nation branding technique to enhance Turkey’s economy, rather than a corrupt past to be forgotten. To achieve the aim of efficient, hence good, governance, the AKP’s cultural governance has heavily relied on privatization as a means to limit state intervention. Thus, privatization has not only transformed culture into an integral part of the free market, but also redefined the state’s role as a facilitator of the culture market, rather than the main provider of cultural service to the public.

The state’s withdrawal from cultural service and its prioritization of civil society initiatives for preserving and promoting Turkish “cultural values and traditional arts”[31] have the immediate effect of weakening the authority of the Kemalist cultural establishment. Since many of the previously state-run cultural institutions are now managed with a corporate mentality, they begin to lose their status as state-centered institutions and their former significance in defining and maintaining a homogeneous Turkish culture. Instead, these institutions, together with other newly formed cultural sites and productions by private initiatives, are converted into marketplaces or cultural commodities in competition with each other. Hence, privatization of culture leads to the following consequences. First, it weakens and hollows out the 20th century notion of the modern secular nation state, which sets a clear boundary confining religion within the private sphere. Second, it gives way to the neoconservative force, which “models state authority on [religious] authority, a pastoral relation of the state to its flock, and a concern with unified rather than balanced or checked state power.”[32] Finally, it converts social issues that result from political actions into market terms and a sheer matter of culture, which is then left to personal choice.[33] As a result, far from producing a declining state, Ottomentality has constituted a strong state. In particular, neoliberal governance of the cultural field has enabled the ruling neoconservative government to mobilize a new set of political truths and norms for directing inter-ethnic and inter-religious relations in society.

New regime of truth

Central to Foucault’s notion of governmentality is “truth games”[34]—referring to the activities of knowledge production through which particular thoughts are rendered truthful and practices of government are made reasonable.[35] What Foucault calls the “regime of truth” is concerned not with facticity, but with a coherent set of practices that connect different discourses and make sense of the political rationalities marking the “division between true and false.”[36] The neo-Ottoman cultural ensemble is a compelling case through which the AKP’s investment of thought, knowledge production, and truth telling can be observed. Two cases are particularly worth mentioning here as I work through the politics of truth in the AKP’s neoliberal governance of culture and neo-Ottoman management of diversity.

Between 2011 and 2014, the Turkish television historical drama Magnificent Century (Muhteşem Yüzyıl, Muhteşem hereafter), featuring the life of the Ottoman Sultan Süleyman, who is known for his legislative establishment in the 16th century Ottoman Empire, attracted wide viewership in Turkey and abroad, especially in the Balkans and the Middle East. Although the show played a significant role in generating international interest in Turkey’s tourism, culinary culture, Ottoman-Islamic arts and history, etc. (which are the fundamental aims of the AKP-led national cultural policy to promote Turkey through arts and culture, including media export),[37] it received harsh criticism from some Ottoman(ist) historians and a warning from the RTUK (Radio and Television Supreme Council, a key institution of media censorship and regulation in Turkey). The criticism included the show’s misrepresentation of the Sultan as a hedonist and its harm to the moral and traditional values of society. Oktay Saral, an AKP deputy of Istanbul at the time, petitioned the parliament for a law to ban the show. He said, “[The] law would […] show filmmakers [media practitioners] how to conduct their work in compliance with Turkish family structure and moral values without humiliating Turkish youth and children.”[38] Recep Tayyip Erdoğan (Prime Minister then) also stated, “[those] who toy with these [traditional] values would be taught a lesson within the premises of law.”[39] After his statement, the show was removed from the in-flight channels of national flag carrier Turkish Airlines.

Another popular media production, the 2012 blockbuster The Conquest 1453 (Fetih 1453, Fetih hereafter), which was acclaimed for its success at domestic and international box offices, also generated mixed receptions among Turkish and foreign audiences. Some critics in Turkey and European Christians criticized the film for its selective interpretation of the Ottoman conquest of Constantinople and its offensive portrayal of the (Byzantine) Christians. The Greek weekly To Proto Thema denounced the film as “conquest propaganda by the Turks” that “[failed] to show the mass killings of Greeks and the plunder of the land by the Turks.”[40] A Turkish critic also commented that the film portrays the “extreme patriotism” in Turkey “without any hint of […] tolerance sprinkled throughout [the film].”[41] Furthermore, a German Christian association campaigned to boycott the film. Meanwhile, AKP officials, on the contrary, praised the film for its genuine representation of the conquest. As Bülent Arınç (Deputy Prime Minister then) stated, “This is truly the best film ever made in the past years.”[42] He also responded to questions regarding the film’s historical accuracy: “This is a film, not a documentary. The film in general fairly represents all the events that occurred during the conquest as the way we know it.”[43]

When Muhteşem and Fetih are examined within the larger context in which the neo-Ottoman cultural ensemble is formed, the connections between particular types of knowledge and governmental practice become apparent. First, the cases of Muhteşem and Fetih reveal the saturation of market rationality as the basis for a new model of cultural governance. When culture is administered in market terms, it becomes a commodity for sale and promotion as well as an indicator of a number of things for measuring the performance of cultural governance. When Turkey’s culture, in particular its Ottoman-Islamic cultural heritage, is converted into an asset and national brand to advance the country in global competition, the reputation and capital it generates become indicators of Turkey’s economic development and progress. The overt emphasis on economic growth, according to Irving Kristol, is one of the distinctive features that differentiate the neoconservatives from their conservative predecessors. He suggests that, for the neoconservatives, economic growth is what gives “modern democracies their legitimacy and durability.”[44] In the Turkish context, the rising neoconservative power, which consisted of a group of Islamists and secular, liberal intellectuals and entrepreneurs (at least in the early years of the AKP’s rule), had consistently focused on boosting Turkey’s economy. For them, economic development seems to have become the appropriate way of making “conservative politics suitable to governing a modern democracy.”[45] Henceforth, such high-profile cultural productions as Muhteşem and Fetih are valuable assets that serve the primary aim of the AKP-led cultural policy, because they contribute to growth in the related areas of tourism and the culture industry by promoting Turkey at the international level. Based on market rationality, as long as culture can generate productivity and profit, the government is doing a splendid job of governance. In other words, when neoliberal and neoconservative forces converge in the cultural domain, both culture and good governance are reduced to and measured by economic growth, which has become a synonym for democracy “equated with the existence of formal rights, especially private property rights; with the market; and with voting,” rather than political autonomy.[46]

Second, the AKP officials’ applause of Fetih on the one hand and criticism of Muhteşem on the other demonstrates their assertion of the moral-religious authority of the state. As the notion of nation state sovereignty has become weakened by the processes of economic liberalization and globalization, the boundary that separates religion and state has become blurred. As a result, religion becomes “de-privatized” and surges back into the public sphere.[47] This blurred boundary between religion and state has enabled the neoconservative AKP to establish links between religious authority and state authority as well as between religious truth and political truth.[48] These links are evident in the AKP officials’ various public statements declaring the government’s moral mission of sanitizing Turkish culture in accordance with Islamic and traditional values. For instance, as Erdoğan once reacted to his secular opponent’s comment about his interference in politics with religious views, “we [AKP] will raise a generation that is conservative and democratic and embraces the values and historical principles of its nation.”[49] In this view, despite Muhteşem’s contribution to growth in the culture and tourism industries, the show was subjected to censorship and legal action because its content did not comply with the governing authority’s moral mission. The controversy of Muhteşem illustrates the rise of a religion-based political truth in Turkey, which sees Islam as the main reference for directing society’s moral conduct and individual lifestyle. Henceforth, by rewarding desirable actions (i.e. with sponsorship law and tax incentives)[50] and punishing undesirable ones (i.e. through censorship, media bans, and jail terms for media practitioners’ misconduct), the AKP-led reform of the cultural field constitutes a new type of political culture and truth—one that is based on moral-religious views rather than rational reasoning.

Moreover, the AKP officials’ support for Fetih reveals their investment in a neo-Ottomanist knowledge, which regards the 1453 Ottoman conquest of Constantinople as the foundation of modern liberal multiculturalism in Turkey. This knowledge perceives Islam as the centripetal force for enhancing social cohesion by transcending differences of faith and ethnicity. It rejects candid and critical interpretations of history and insists on a singular view of Ottoman-Islamic pluralism and a pragmatic understanding of the relationship between religion and state.[51] It does not require historical accuracy, since religious truth is cast as historical and political truth. For instance, a consistent, singular narrative of the conquest can be observed in such productions and sites as the Panorama 1453 History Museum, the television series Fatih, and the TRT children’s program Çınar. This narrative begins with Prophet Muhammad’s prophecy, received from the almighty Allah, that Constantinople would be conquered by a great Ottoman soldier. When history is narrated from a religious point of view, it becomes indisputable, since questioning it would imply a challenge to religious truth and hence to Allah’s will. The neo-Ottomanist knowledge thus conceives of the conquest not only as an Ottoman victory in the past, but as an incontestable living truth in Turkey’s present. As Nevzat Bayhan, former general manager of Culture Inc. in association with the Istanbul Metropolitan Municipality (İBB Kültür A.Ş.), stated at the opening ceremony of Istanbul’s Panorama 1453 History Museum,

The conquest [of Istanbul] is not about taking over the city… but to make the city livable… and its populace happy. Today, Istanbul continues to present itself to the world as a place where Armenians, Syriacs, Kurds… Muslims, Jews, and Christians peacefully live together.[52]

Bayhan’s statement illustrates the significance of the 1453 conquest in the neo-Ottomanist knowledge because it marks the foundation of a culture of tolerance, diversity, and peaceful coexistence in Turkey. While the neo-Ottomanist knowledge may conveniently serve branding purposes in the post-9/11 and ECoC contexts, I maintain that, more significantly, it rationalizes the governmental practices reshaping cultural conduct and multicultural relations in Turkey. This knowledge also produces a political norm of indifference: one that is reluctant to recognize ethno-religious differences among the populace, uncritical of the limits of Islam-based toleration and multiculturalism, and, more seriously, indifferent to state-sanctioned discrimination and violence against the ethno-religious minorities.

Ottomentality and its subject

The AKP’s practices of the government of culture and diversity constitute what Foucault calls the “technologies of the self—ways in which human beings come to understand and act upon themselves within certain regimes of authority and knowledge, and by means of certain techniques directed to self-improvement.”[53] The AKP’s neoliberal and neo-Ottoman rationalities share a similar aim, as both seek to produce a new ethical code of social conduct and to transform Turkish society into a particular kind of society, one that is economically liberal and culturally conservative. They deploy different means to direct the governed in certain ways so as to achieve the desired outcome. According to Foucault, the neoliberal style of government is based on the premise that “individuals should conduct their lives as an enterprise [and] should become entrepreneurs of themselves.”[54] Central to this style of government is the production of freedom, that is, the practices employed to create the conditions necessary for individuals to be free and to take on the responsibility of caring for themselves. For instance, Nikolas Rose suggests that consumption, a form of governing technology, is often deployed to provide individuals with a variety of choices for exercising freedom and self-improvement. As such, the subject citizens are now “active,” or “consumer,” citizens, who understand their relationships with others and conduct their lives based on a market mentality.[55] Unlike republican citizens, whose rights, duties, and obligations are primarily bound to the state, citizens as consumers “[are] to enact [their] democratic obligations as a form of consumption”[56] in the private sphere of the market.

The AKP’s neoliberal governance of culture has therefore invested in liberalizing the cultural field by transforming it into a marketplace, creating a condition wherein citizens can enact their right to freedom and act upon themselves as a form of investment. The proliferation of the neo-Ottoman cultural ensemble in this regard can be understood as a new technology of the self, as it creates a whole new field for consumer citizens to exercise their freedom of choice (of identity, taste, and lifestyle) by providing them with a variety of trendy Ottoman-themed cultural products, ranging from fashion to entertainment. This ensemble also constitutes a whole new imagery of the Ottoman legacy with which consumer citizens may identify. Therefore, through participation in the cultural field, as artists, media practitioners, intellectuals, sponsors, or consumers, citizens are encouraged to think of themselves as free agents and of their actions as a means of acquiring the cultural capital necessary to become cultivated and competent actors in the competitive market. This new technology of the self has also transformed the republican notion of Turkish citizenship into one that is activated through individuals’ freedom of choice in cultural consumption at the marketplace.

Furthermore, as market mechanisms enhance the promulgation of moral-religious values, consumer citizens are also offered the identity of virtuous citizens, who should conduct their lives and their relationships with others based on Islamic traditions and values. Again, the public debate over the portrayal of the revered Sultan Süleyman as a hedonist in Muhteşem and the legal actions against the television producer are exemplary of the disciplinary techniques for shaping individuals’ behavior in line with conservative values. While consumer citizens exercise their freedom through cultural consumption, they are also reminded of their responsibility to preserve traditional moral values, family structure, and gender relations. Those who deviate from the norm are subjected to public condemnation and punishment.

Finally, as the neo-Ottoman cultural ensemble reproduces and mediates a neo-Ottomanist knowledge in such commodities as the film Fetih and the Panorama 1453 History Museum, consumer citizens are exposed to a new set of symbolic meanings of Ottoman-Islamic toleration, pluralism, and peaceful coexistence, albeit through a view of the Ottoman past fixated on its magnificence rather than its monstrosity.[57] This knowledge sets the ethical code by which private citizens think of themselves in relation to other ethno-religious groups, based on a hierarchical social order that subordinates minorities to the rule of a Sunni Islamic government. When this imagery of magnificence serves as the central component of nation branding, for instance to align Turkey with a civilization of peace and coexistence in the post-9/11 and ECoC contexts, it encourages citizens to take pride in and identify with their Ottoman-Islamic heritage. As such, Turkey’s nation branding can perhaps also be considered a novel technology of the self, as it requires citizens, be they business sectors, historians, or filmmakers, to take an active role in building an image of a tolerant and multicultural Turkey through arts and culture. It is in this regard that I consider the neo-Ottoman rationality a form of “indirect rule of diversity”[58] as it produces a citizenry that actively participates in the reproduction of neo-Ottomanist historiography while remaining uncritical of the “dark legacy of the Ottoman past.”[59] Consequently, Ottomentality has produced a type of subject that is constantly subjected to dividing techniques “that will divide populations and exclude certain categories from the status of the autonomous and rational person.”[60]
